Multi-view-angle feature-fused road surface water detection and identification method
Technical Field
The invention relates to the technical field of automobile detection, in particular to a method for detecting and identifying surface water through multi-view-angle feature fusion.
Background
With the continuing popularization of AI technology in vehicle-mounted applications, automatic driving and driver-assistance technologies based on camera detection and recognition functions are gradually being applied to passenger cars. Water accumulated in depressions on the road travelled by the vehicle has a pronounced adverse effect on vehicle-mounted visual perception: it causes false recognition, which leads to erroneous vehicle operation and brings potential safety hazards to driving. Therefore, accurate identification of road surface water is essential for achieving safer and more reliable intelligent visual perception by the vehicle.
At present, the main methods for identifying accumulated water are as follows: firstly, methods based on machine-learning target classification require a large amount of training data and considerable computing power on the operating platform; they are applied in general and security-monitoring scenes and perform poorly on driving roads whose background changes dynamically. Secondly, methods based on polarized-light measurement cannot currently be realized with the optical structure of vehicle-mounted lenses, and reflections from sunken or smooth ground also introduce polarization interference. Thirdly, water detection based on a reflection optical model and a white-point prior hypothesis tends to detect lane lines as water-reflecting surfaces, resulting in numerous false detections.
Disclosure of Invention
The invention provides a method for detecting and identifying road surface water by multi-view-angle feature fusion, and aims to solve the problems of existing road surface water detection.
According to an embodiment of the application, a method for detecting and identifying road surface water with multi-view-angle feature fusion is provided, which comprises the following steps:
step S1: performing offline surround-view calibration to obtain the image coordinates corresponding to the ground common area at different viewing angles;
step S2: detecting road surface water in the common area based on the color histogram and the information entropy;
step S5: outputting a result;
step S3: inputting a candidate area and judging whether the candidate area is the common area;
if the candidate area is the common area, performing step S2;
if the candidate area is not the common area, performing step S4;
step S4: detecting road surface water in the non-common area based on HOG features;
step S5: outputting the result.
Preferably, the step S1 includes:
step S11: based on a calibration field, two-dimensional top-view calibration is performed, and the correspondence between ground space points and original image coordinates is obtained from the calibration;
step S12: based on the two-dimensional top view, the ground coordinates of the common area seen from the different viewing angles are obtained; then, using the calibrated correspondence between ground space points and original image coordinates, the inverse operation of step S11 is performed and the image coordinates corresponding to the common area are output;
in step S11, F_i denotes the conversion from image coordinates to ground coordinates, where i = 0, 1, 2, 3 denotes the four different viewing angles, i.e., right, rear, left and front, respectively.
Preferably, the step S2 includes:
step S21: based on the common-area coordinates, extracting a target image of the first viewing angle and a target image of the second viewing angle;
step S22: respectively calculating the color histograms of the first-viewing-angle and second-viewing-angle target images;
step S23: matching the color histograms and calculating the color histogram similarity;
step S24: calculating the information entropy deviation;
step S25: judging whether water is accumulated on the road surface based on the color histogram similarity and the information entropy deviation.
Preferably, the step S21 includes: extracting a target image I_left of the first viewing angle and a target image I_right of the second viewing angle; the step S22 includes step S221: color reduction, I_left_color = I_left / 15 and I_right_color = I_right / 15; and step S222: respectively counting the RGB-channel gray-level histograms H_rgb_left and H_rgb_right of the reduced images I_left_color and I_right_color;
preferably, the step S23 includes:
step S231: retrieving the dominant colors with the highest statistical values, C_left and C_right, and calculating their difference:
d_C = abs(C_left - C_right);
step S232: if d_C is greater than 30, S_hist is set to 0 and the histogram similarity is returned directly;
step S233: taking C_left as the center, determining the histogram bin sequence that accounts for not less than 70% of all pixels, the total occupancy being p_left, and calculating the proportion p_right of all pixels of H_rgb_right falling within the same bin range;
step S234: calculating the similarity S_hist = 1 - abs(p_left - p_right);
the step S24 includes: respectively calculating the information entropies of I_left and I_right to obtain En_left and En_right, and calculating the information entropy deviation D_En;
in step S25, the condition for determining that water is accumulated on the road surface is: S_hist less than 0.4 and D_En less than 0.4.
Preferably, the step S4 includes:
step S41: calculating the HOG feature HOG_cur of the target region R_cur in the current frame;
step S42: calculating, based on motion compensation, the target regions R_pre_i corresponding to the preceding 5 frames;
step S43: calculating the HOG features HOG_pre_i of the regions R_pre_i in the preceding 5 frames;
step S44: respectively calculating the differences between the HOG_cur and HOG_pre_i feature vectors, and calculating the mean value D_hog_mean of these differences;
step S45: calculating the variance delta_hog of the feature vector group composed of HOG_cur and HOG_pre_i;
step S46: judging whether water is accumulated on the road surface of the region: if D_hog_mean is greater than 0.7 and delta_hog is greater than 0.2, accumulated water exists on the road surface of the target region.
The technical solution provided by the embodiments of the application can have the following beneficial effects. Compared with traditional solutions, the present solution addresses the shortcomings of existing methods for detecting water on the road travelled by a vehicle when they are applied in a vehicle-mounted environment. Based on the characteristics of the optical sensing system formed by the vehicle-mounted surround-view cameras, the invention designs a multi-view-angle feature-fused method for detecting and identifying road surface water. The method is simple to implement and stable in performance. After water accumulates on the road surface, it reflects illumination specularly; because the water surface changes dynamically, observing the same water-covered road surface from different angles produces corresponding changes in the image video (different gray scales, colors and textures at different viewing angles; varying gray scales, colors and textures between frames at the same viewing angle). Based on this prior knowledge, the method efficiently measures the difference of the ground image features between different viewing angles and the change of the image features in the video stream at the same viewing angle, and combines the two to distinguish normal ground from water-covered ground, thereby realizing rapid detection of water on the road surface.
Because the four lenses differ in color and brightness as a result of illumination, the surround-view system corrects chromaticity and brightness for consistency, so that under normal conditions the color and brightness of the same ground area observed from different viewing angles are basically similar. For the common area observed from different viewing angles, the invention judges whether road surface water is present by calculating local color histograms and information entropies and fusing the two features. For the detection of a non-common area, local Histogram of Oriented Gradients (HOG) features of the same ground area are extracted from different video frames, the magnitude of the inter-frame HOG feature change is calculated, and whether water is accumulated on the corresponding ground is judged from that magnitude.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 is a schematic flow chart of a method for detecting and identifying surface water through multi-view feature fusion according to the present invention;
fig. 2 is a schematic flow chart of step S1 in the method for detecting and identifying surface water with multi-view feature fusion according to the present invention;
fig. 3 is a schematic flowchart of step S2 in the method for detecting and identifying surface water with multi-view feature fusion according to the present invention;
fig. 4 is a schematic flowchart of step S23 in the method for detecting and identifying surface water with multi-view feature fusion according to the present invention;
fig. 5 is a schematic flow chart of step S4 in the method for detecting and identifying surface water with multi-view feature fusion according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, the invention discloses a method 10 for detecting and identifying surface water with multi-view feature fusion, which comprises the following steps:
step S1: performing offline surround-view calibration to obtain the image coordinates corresponding to the ground common area at different viewing angles;
step S2: detecting road surface water in the common area based on the color histogram and the information entropy;
step S5: outputting a result;
step S3: inputting a candidate area and judging whether the candidate area is the common area;
if the candidate area is the common area, performing step S2;
if the candidate area is not the common area, performing step S4;
step S4: detecting road surface water in the non-common area based on HOG features;
step S5: outputting the result.
By adopting this design, the solution addresses the shortcomings of existing methods for detecting water on the road travelled by a vehicle when they are applied in a vehicle-mounted environment. Based on the characteristics of the optical sensing system formed by the vehicle-mounted surround-view cameras, the invention designs a multi-view-angle feature-fused method for detecting and identifying road surface water. The method is simple to implement and stable in performance. After water accumulates on the road surface, it reflects illumination specularly; because the water surface changes dynamically, observing the same water-covered road surface from different angles produces corresponding changes in the image video (different gray scales, colors and textures at different viewing angles; varying gray scales, colors and textures between frames at the same viewing angle). Based on this prior knowledge, the method efficiently measures the difference of the ground image features between different viewing angles and the change of the image features in the video stream at the same viewing angle, and combines the two to distinguish normal ground from water-covered ground, thereby realizing rapid detection of water on the road surface.
Because the four lenses differ in color and brightness as a result of illumination, the surround-view system corrects chromaticity and brightness for consistency, so that under normal conditions the color and brightness of the same ground area observed from different viewing angles are basically similar. For the common area observed from different viewing angles, the invention judges whether road surface water is present by calculating local color histograms and information entropies and fusing the two features. For the detection of a non-common area, local Histogram of Oriented Gradients (HOG) features of the same ground area are extracted from different video frames, the magnitude of the inter-frame HOG feature change is calculated, and whether water is accumulated on the corresponding ground is judged from that magnitude.
Referring to fig. 2, the step S1 includes:
step S11: based on a calibration field, two-dimensional top-view calibration is performed, and the correspondence between ground space points and original image coordinates is obtained from the calibration;
step S12: based on the two-dimensional top view, the ground coordinates of the common area seen from the different viewing angles are obtained; then, using the calibrated correspondence between ground space points and original image coordinates, the inverse operation of step S11 is performed and the image coordinates corresponding to the common area are output;
in step S11, F_i denotes the conversion from image coordinates to ground coordinates, where i = 0, 1, 2, 3 denotes the four different viewing angles, i.e., right, rear, left and front, respectively (an illustrative coordinate-mapping sketch is given below).
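For illustration only, a minimal sketch of the common-area coordinate lookup in steps S11-S12 is given below. The use of a per-view homography for F_i, as well as all function and variable names, is an assumption of this sketch and not the calibration model of the actual surround-view system.

```python
import numpy as np

def image_to_ground(F_i: np.ndarray, pt_img) -> np.ndarray:
    """Step S11 mapping: project an image point to ground (top-view) coordinates via F_i."""
    p = F_i @ np.array([pt_img[0], pt_img[1], 1.0])
    return p[:2] / p[2]

def ground_to_image(F_i: np.ndarray, pt_ground) -> np.ndarray:
    """Inverse operation of step S11 (used in step S12): ground point back to image coordinates."""
    p = np.linalg.inv(F_i) @ np.array([pt_ground[0], pt_ground[1], 1.0])
    return p[:2] / p[2]

# i = 0, 1, 2, 3 -> right, rear, left, front; in practice F[i] comes from the offline
# calibration field; identity matrices are used here only as placeholders.
F = [np.eye(3) for _ in range(4)]
common_ground_pts = [(1.0, 2.0), (1.5, 2.0)]   # ground coordinates of the common area (example values)
# Image coordinates of the same common-area points as seen by the right (0) and rear (1) cameras.
img_pts_right = [ground_to_image(F[0], p) for p in common_ground_pts]
img_pts_rear = [ground_to_image(F[1], p) for p in common_ground_pts]
```

In the real system the surround-view cameras are wide-angle lenses, so F_i would also include distortion correction rather than being a plain homography; the sketch only illustrates the forward and inverse lookup structure.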
Referring to fig. 3, the step S2 includes:
step S21: based on the common-area coordinates, extracting a target image of the first viewing angle and a target image of the second viewing angle;
step S22: respectively calculating the color histograms of the first-viewing-angle and second-viewing-angle target images;
step S23: matching the color histograms and calculating the color histogram similarity;
step S24: calculating the information entropy deviation;
step S25: judging whether water is accumulated on the road surface based on the color histogram similarity and the information entropy deviation.
Wherein the step S21 includes: extracting a target image I_left of the first viewing angle and a target image I_right of the second viewing angle; the step S22 includes step S221: color reduction, I_left_color = I_left / 15 and I_right_color = I_right / 15; in the present embodiment, the reference gray interval is 15; and step S222: respectively counting the RGB-channel gray-level histograms H_rgb_left and H_rgb_right of the reduced images I_left_color and I_right_color (see the sketch below);
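A minimal sketch of the color reduction and histogram statistics of steps S221-S222, assuming 8-bit RGB input and the gray interval of 15 used in this embodiment (the function names are illustrative):

```python
import numpy as np

def rgb_histograms(img_rgb: np.ndarray, interval: int = 15) -> np.ndarray:
    """Color-reduce an 8-bit RGB image (I / interval) and count a gray-level histogram per channel."""
    reduced = img_rgb.astype(np.int32) // interval            # step S221: color reduction
    n_bins = 255 // interval + 1                              # 18 bins for interval = 15
    return np.stack([np.bincount(reduced[..., c].ravel(), minlength=n_bins)
                     for c in range(3)])                      # step S222: one histogram per RGB channel

# I_left / I_right are the common-area target images cropped from the two viewing angles;
# random arrays stand in for them here.
I_left = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
I_right = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
H_rgb_left, H_rgb_right = rgb_histograms(I_left), rgb_histograms(I_right)
```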
referring to fig. 4, the step S23 includes:
step S231: retrieving the dominant colors with the highest statistical values, C_left and C_right, and calculating their difference:
d_C = abs(C_left - C_right);
step S232: if d_C is greater than 30, S_hist is set to 0 and the histogram similarity is returned directly;
step S233: taking C_left as the center, determining the histogram bin sequence that accounts for not less than 70% of all pixels, the total occupancy being p_left, and calculating the proportion p_right of all pixels of H_rgb_right falling within the same bin range;
step S234: calculating the similarity S_hist = 1 - abs(p_left - p_right) (see the sketch below);
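The dominant-color matching of steps S231-S234 could be implemented per channel roughly as follows. Two details are assumptions of this sketch: the difference d_C is compared on the original 0-255 gray scale (dominant bin index multiplied by the gray interval), and the bin sequence around C_left is grown symmetrically until it covers at least 70% of the pixels.

```python
import numpy as np

def hist_similarity(h_left: np.ndarray, h_right: np.ndarray,
                    interval: int = 15, d_c_max: int = 30, coverage: float = 0.70) -> float:
    """Histogram similarity S_hist of steps S231-S234 for one color channel."""
    c_left, c_right = int(np.argmax(h_left)), int(np.argmax(h_right))   # dominant bins
    d_c = abs(c_left - c_right) * interval                              # step S231: dominant-color difference
    if d_c > d_c_max:                                                   # step S232: dominant colors too far apart
        return 0.0
    # Step S233: grow a bin range around C_left until it holds >= 70% of all pixels.
    lo = hi = c_left
    total_left = h_left.sum()
    while h_left[lo:hi + 1].sum() < coverage * total_left:
        lo, hi = max(lo - 1, 0), min(hi + 1, len(h_left) - 1)
    p_left = h_left[lo:hi + 1].sum() / total_left
    p_right = h_right[lo:hi + 1].sum() / h_right.sum()                  # same bin range applied to H_rgb_right
    return 1.0 - abs(p_left - p_right)                                  # step S234: S_hist

# Per-channel similarities could then be fused, e.g. by averaging (a further assumption):
# S_hist = np.mean([hist_similarity(H_rgb_left[c], H_rgb_right[c]) for c in range(3)])
```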
the step S24 includes: respectively calculating the information entropies of I_left and I_right to obtain En_left and En_right, and calculating the information entropy deviation D_En;
in step S25, the condition for determining that water is accumulated on the road surface is: S_hist less than 0.4 and D_En less than 0.4 (see the sketch below).
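A sketch of steps S24-S25 follows. The embodiment does not spell out the deviation formula, so D_En is assumed here to be the absolute difference of the two gray-level information entropies, and the entropy is computed on the gray image; the thresholds of 0.4 are those stated in step S25.

```python
import numpy as np

def gray_entropy(img_rgb: np.ndarray) -> float:
    """Shannon information entropy of the gray-level distribution of an 8-bit image."""
    gray = img_rgb.mean(axis=2).astype(np.uint8)
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def common_area_has_water(s_hist: float, en_left: float, en_right: float) -> bool:
    """Step S25 decision: water is reported when S_hist < 0.4 and D_En < 0.4."""
    d_en = abs(en_left - en_right)          # step S24: assumed form of the entropy deviation
    return s_hist < 0.4 and d_en < 0.4
```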
Referring to fig. 5, the step S4 includes:
step S41: calculating the HOG feature HOG_cur of the target region R_cur in the current frame;
step S42: calculating, based on motion compensation, the target regions R_pre_i corresponding to the preceding 5 frames;
step S43: calculating the HOG features HOG_pre_i of the regions R_pre_i in the preceding 5 frames;
step S44: respectively calculating the differences between the HOG_cur and HOG_pre_i feature vectors, and calculating the mean value D_hog_mean of these differences;
step S45: calculating the variance delta_hog of the feature vector group composed of HOG_cur and HOG_pre_i;
step S46: judging whether water is accumulated on the road surface of the region: if D_hog_mean is greater than 0.7 and delta_hog is greater than 0.2, accumulated water exists on the road surface of the target region.
In this embodiment, the motion compensation in step S42 is not an innovative point of the present invention and is not described again here. In step S46, the thresholds of 0.7 and 2.0 are taken as the reference values of the present invention. A sketch of steps S41-S46 is given below.
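The sketch below uses the HOG implementation from scikit-image. The HOG parameters, the Euclidean distance used for the feature-vector differences in step S44, and the reading of delta_hog in step S45 as the mean per-dimension variance of the feature group are assumptions of this sketch; all region patches are assumed to be the same size so that the HOG vectors are comparable, and the thresholds follow step S46 as written.

```python
import numpy as np
from skimage.feature import hog

def hog_feature(gray_patch: np.ndarray) -> np.ndarray:
    """HOG feature vector of a target region (parameter values are illustrative only)."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def non_common_area_has_water(patch_cur: np.ndarray, patches_pre: list,
                              t_mean: float = 0.7, t_var: float = 0.2) -> bool:
    """Steps S41-S46: compare the current region's HOG with the motion-compensated regions
    R_pre_i of the preceding 5 frames (patches_pre, already extracted in step S42)."""
    hog_cur = hog_feature(patch_cur)                                  # step S41
    hog_pre = [hog_feature(p) for p in patches_pre]                   # step S43
    diffs = [np.linalg.norm(hog_cur - h) for h in hog_pre]            # step S44: per-frame differences
    d_hog_mean = float(np.mean(diffs))                                # step S44: mean difference
    group = np.stack([hog_cur] + hog_pre)                             # step S45: variance of the feature group
    delta_hog = float(group.var(axis=0).mean())
    return d_hog_mean > t_mean and delta_hog > t_var                  # step S46 decision

# Example call with 16x16 grayscale patches (random placeholders for the cropped regions):
patches = [np.random.rand(16, 16) for _ in range(6)]
print(non_common_area_has_water(patches[0], patches[1:]))
```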
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.