Road target fusion sensing method and system under low-light condition

Technical Field
The invention relates to the technical field of image recognition and processing, and in particular to a road target fusion perception method and system under low-light conditions.
Background
A front-view camera is usually installed behind the front windshield and captures information about a variety of traffic targets, such as preceding vehicles, oncoming vehicles, pedestrians ahead, traffic signs and lane lines. The image information acquired by the front-view camera is identified and processed by a chip. The boundary between the background of the image and the target is the image edge.
At present, during image acquisition by a front-view camera, the electronic components and circuits of the camera introduce noise into the captured picture, which degrades detection and identification results. Because ambient illumination is variable, the illumination of targets ahead is uneven, which impairs the recognition performance and practical value of the front-view camera and, in severe cases, can lead to traffic accidents.
Traditional low-illumination image enhancement algorithms require sophisticated mathematical techniques and involved derivations; the overall process is complicated and ill-suited to practical application. With the successive emergence of large-scale data sets, low-light image enhancement algorithms based on deep learning have appeared. Such algorithms can enhance images under a variety of illumination conditions, do not depend on paired data, and generalize well.
Accordingly, to address the noise and uneven illumination encountered by an automobile's front-view camera, a method and system for road target fusion perception under low-light conditions are provided.
Disclosure of Invention
In order to solve the above-mentioned problems, the present invention provides a method and a system for road target fusion perception under low-light conditions.
In a first aspect, the present invention provides a method for sensing road target fusion under low light conditions, which adopts the following technical scheme:
a road target fusion perception method under a weak light condition comprises the following steps:
acquiring an original image of a road target under weak light;
carrying out image preprocessing on an original image to obtain a preprocessed image;
performing marginalization processing on the preprocessed image to obtain an edge feature image;
inputting the edge feature image into a Zero-DCE network for illumination enhancement;
and, for the enhanced image output by the Zero-DCE network, obtaining a final road target detection result by using the improved YoloV4 network.
Further, the image preprocessing of the original image includes filtering and denoising to obtain a denoised image.
Further, filtering and denoising the original image to obtain the denoised image comprises applying a wavelet threshold algorithm to filter and denoise each of the three spatial components of the color image separately.
Further, the image preprocessing of the original image also includes performing weighted-average graying on the denoised image to obtain a grayscale image.
Further, performing weighted-average graying on the denoised image to obtain the grayscale image comprises applying the weighted-average method to the three recombined components.
Further, performing marginalization processing on the preprocessed image to obtain an edge feature image comprises performing a top-hat transformation on the grayscale image to obtain a top-hat transformed image, and performing edge detection and extraction on the top-hat transformed image to obtain the edge feature image.
Further, the step of obtaining the final road target detection result by using the improved YoloV4 network includes improving the network structure of the YoloV4 algorithm to obtain an improved YoloV4 algorithm suitable for road target detection.
In a second aspect, a system for sensing road target fusion under low light conditions includes:
the image acquisition module is configured to acquire an original image of a road target under weak light;
the preprocessing module is configured to carry out image preprocessing on the original image to obtain a preprocessed image;
the edge module is configured to perform marginalization processing on the preprocessed image to obtain an edge feature image;
the enhancement module is configured to input the edge feature image into a Zero-DCE network for illumination enhancement;
and the detection module is configured to obtain a final road target detection result by utilizing the improved YoloV4 network aiming at the enhanced image output by the Zero-DCE network.
In a third aspect, the present invention provides a computer-readable storage medium, wherein a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor of a terminal device and execute the road target fusion perception method under the weak light condition.
In a fourth aspect, the present invention provides a terminal device, comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the road target fusion perception method under the weak light condition.
In summary, the invention has the following beneficial technical effects:
Based on the existing YoloV4 algorithm and the Zero-DCE weak-illumination enhancement algorithm, the detection algorithm provided by the invention achieves good detection performance and a high detection speed, and can solve the detection problems caused by weak illumination in night scenes. A more effective feature fusion network is further provided, which overcomes the detection difficulty caused by insufficient information flow between feature maps of different levels and thereby further improves the detection performance of the YoloV4 algorithm on road targets in low-light environments.
Drawings
Fig. 1 is a schematic diagram of a road target fusion sensing method under a low-light condition in embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Embodiment 1, referring to fig. 1, a method for sensing fusion of road targets under low-light conditions in this embodiment includes:
acquiring an original image of a road target under weak light;
carrying out image preprocessing on an original image to obtain a preprocessed image;
performing marginalization processing on the preprocessed image to obtain an edge feature image;
inputting the edge feature image into a Zero-DCE network for illumination enhancement;
and, for the enhanced image output by the Zero-DCE network, obtaining a final road target detection result by using the improved YoloV4 network.
The image preprocessing of the original image includes filtering and denoising to obtain a denoised image. Filtering and denoising the original image comprises applying a wavelet threshold algorithm to filter and denoise each of the three spatial components of the color image separately. The image preprocessing further includes performing weighted-average graying on the denoised image to obtain a grayscale image, the graying being applied to the three recombined components by the weighted-average method. Performing marginalization processing on the preprocessed image to obtain an edge feature image comprises performing a top-hat transformation on the grayscale image to obtain a top-hat transformed image, and performing edge detection and extraction on the top-hat transformed image to obtain the edge feature image. Obtaining the final road target detection result by using the improved YoloV4 network includes improving the network structure of the YoloV4 algorithm to obtain an improved YoloV4 algorithm suitable for road target detection.
The method specifically comprises the following steps:
1. Acquire and input the original image G;
2. Perform wavelet soft-threshold filtering and denoising on the original image G to obtain a denoised image G'. Specifically, decompose the original image G into three spatial components y, u and v, denoted Gy, Gu and Gv, where y is the luminance signal and u and v are the two chrominance signals; apply wavelet soft-threshold filtering and denoising to the three components Gy, Gu and Gv to obtain new components; and recombine the three new spatial components to form the denoised image G'.
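By way of illustration, the following minimal Python sketch carries out this step with OpenCV and the PyWavelets library; the YUV conversion, the db4 wavelet, the decomposition level and the universal threshold rule are assumptions made for the example and are not prescribed by the invention.

```python
import cv2
import numpy as np
import pywt

def wavelet_soft_denoise(channel, wavelet="db4", level=2):
    """Soft-threshold wavelet denoising of a single image component (assumed parameters)."""
    coeffs = pywt.wavedec2(channel.astype(np.float32), wavelet, level=level)
    # Universal threshold estimated from the finest-scale diagonal detail coefficients.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(channel.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[: channel.shape[0], : channel.shape[1]]

def denoise_yuv(img_bgr):
    """Split into y, u, v components (Gy, Gu, Gv), denoise each, and recombine into G'."""
    yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)
    denoised = np.stack(
        [wavelet_soft_denoise(yuv[..., k]) for k in range(3)], axis=-1
    )
    denoised = np.clip(denoised, 0, 255).astype(np.uint8)
    return cv2.cvtColor(denoised, cv2.COLOR_YUV2BGR)
```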
3. Perform weighted-average graying on the denoised image G' to reduce its dimensionality and obtain a grayscale image g(i, j). The denoised RGB image is grayed and its dimensionality reduced by computing a weighted average of the three components R(i, j), G(i, j) and B(i, j) (red, green and blue) of the image G' according to g(i, j) = 0.30R(i, j) + 0.59G(i, j) + 0.11B(i, j), yielding the grayscale image g(i, j).
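A short sketch of the weighted-average graying step follows; it simply evaluates g(i, j) = 0.30R + 0.59G + 0.11B on an OpenCV image (note that OpenCV stores channels in BGR order).

```python
import numpy as np

def weighted_average_gray(img_bgr):
    """Weighted-average graying: g(i, j) = 0.30*R + 0.59*G + 0.11*B."""
    b, g, r = img_bgr[..., 0], img_bgr[..., 1], img_bgr[..., 2]
    gray = 0.30 * r + 0.59 * g + 0.11 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```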
4. Perform a top-hat transformation on the grayscale image g(i, j) to obtain an image W_TH(g). A suitable structuring element is selected, and the top-hat transformation W_TH(f) = f − (f ∘ b) is applied, where f is the reduced grayscale image, b is the structuring element (template) used in the top-hat transformation, and ∘ denotes the morphological opening; the transformation extracts the new targets and yields the image W_TH(g).
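The following sketch performs the white top-hat transformation with OpenCV's morphology routines; the 15 × 15 elliptical structuring element is only an illustrative choice for b.

```python
import cv2

def top_hat(gray, ksize=15):
    """White top-hat transform W_TH(g) = g - (g opened by b), highlighting bright targets."""
    b = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    return cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, b)
```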
5. Perform edge detection and extraction on the image W_TH(g) to obtain the edge features. An xoy rectangular coordinate system is established on the image W_TH(g), each pixel having coordinates (x, y). The image W_TH(g) is first smoothed with a Gaussian filter to reduce the probability of false detections caused by noise. The gradient direction and strength of the pixels in the image are then computed: the gradient component along the x axis is denoted Gx and the component along the y axis is denoted Gy, and the gradient magnitude G and direction θ of each pixel are
G = √(Gx² + Gy²), θ = arctan(Gy / Gx).
Spurious responses in the edge detection result are then suppressed and eliminated by non-maximum suppression.
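A sketch of this edge-extraction step is given below: Gaussian smoothing, Sobel gradients Gx and Gy, gradient magnitude and direction, and then OpenCV's Canny detector, which internally performs the non-maximum suppression described above. The filter size and thresholds are example values.

```python
import cv2
import numpy as np

def edge_features(w_th, low=50, high=150):
    """Smooth, compute Gx/Gy gradients and their magnitude/direction, then extract edges."""
    smoothed = cv2.GaussianBlur(w_th, (5, 5), sigmaX=1.4)
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)           # theta = arctan(Gy / Gx)
    edges = cv2.Canny(smoothed, low, high)   # includes non-maximum suppression
    return edges, magnitude, direction
```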
6. Set the input image size to 416 × 416 and enhance the illumination of the input image using the Zero-DCE algorithm. Zero-DCE is a low-illumination image enhancement algorithm that takes a low-light image as input and estimates a set of high-order curves as output; these curves are then applied as pixel-wise adjustments to the dynamic range of the input, yielding the enhanced image.
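The core of Zero-DCE is the iterative pixel-wise light-enhancement curve LE(x) = x + α·x(1 − x), where the per-pixel parameter maps α are predicted by a small network (DCE-Net). The sketch below applies only the curve adjustment and assumes the parameter maps are already available; the network itself, which in the original algorithm is trained with non-reference losses and therefore needs no paired data, is omitted here.

```python
import numpy as np

def zero_dce_enhance(image, alpha_maps):
    """Apply the Zero-DCE light-enhancement curves.

    image:      normalized input in [0, 1], shape (H, W, 3)
    alpha_maps: per-iteration, per-pixel curve parameters in [-1, 1],
                shape (n_iter, H, W, 3), as predicted by DCE-Net (not shown).
    """
    x = image.astype(np.float32)
    for alpha in alpha_maps:
        # LE(x) = x + alpha * x * (1 - x), applied iteratively (typically 8 times)
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)
```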
7. For the enhanced image output by the Zero-DCE network, output the final road target detection result using the improved YoloV4 network; the detection result includes the positions of pedestrians, road vehicles and other obstacles in the image to be detected.
For different application scenarios, video captured by the camera in real time can be used: the image to be detected is extracted frame by frame, and the resulting frame is cropped or padded so that it is scaled to 416 × 416 and then used as the input of the detection algorithm provided by the invention.
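A possible per-frame pipeline is sketched below; detect_targets is a hypothetical stand-in for the improved YoloV4 network, and the letterbox-style padding to 416 × 416 is one way to realize the cropping or filling described above.

```python
import cv2
import numpy as np

def letterbox_416(frame):
    """Scale a frame to fit 416 x 416, padding the remainder so the aspect ratio is preserved."""
    h, w = frame.shape[:2]
    scale = 416 / max(h, w)
    resized = cv2.resize(frame, (int(w * scale), int(h * scale)))
    canvas = np.full((416, 416, 3), 128, dtype=np.uint8)
    top = (416 - resized.shape[0]) // 2
    left = (416 - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas

def process_stream(video_source, detect_targets):
    """Read frames from the camera, preprocess each one, and pass it to the detector."""
    cap = cv2.VideoCapture(video_source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # detect_targets stands in for the improved YoloV4 model (not shown here)
        yield detect_targets(letterbox_416(frame))
    cap.release()
```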
Embodiment 2 this embodiment provides a road target fusion perception system under the low light condition, including:
the image acquisition module is configured to acquire an original image of a road target under weak light;
the system comprises a preprocessing module, a storage module and a processing module, wherein the preprocessing module is configured to carry out image preprocessing on an original image to obtain a preprocessed image;
the edge module is configured to perform marginalization processing on the preprocessed image to obtain an edge feature image;
the enhancement module is configured to input the edge feature image into a Zero-DCE network for illumination enhancement;
and the detection module is configured to obtain a final road target detection result by utilizing the improved YoloV4 network aiming at the enhanced image output by the Zero-DCE network.
A computer-readable storage medium having stored therein a plurality of instructions, the instructions being adapted to be loaded by a processor of a terminal device to execute the road target fusion perception method under low-light conditions.
A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the road target fusion perception method under the weak light condition.
The above are merely preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby; all equivalent changes made according to the structure, shape and principle of the invention fall within the protection scope of the invention.