CN115830567A - Road target fusion sensing method and system under low-light condition - Google Patents

Road target fusion sensing method and system under low-light condition
Download PDF

Info

Publication number
CN115830567A
Authority
CN
China
Prior art keywords
image
road target
network
road
zero
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310120144.XA
Other languages
Chinese (zh)
Inventor
陈雪梅
肖龙
韩欣彤
杨宏伟
赵小萱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Weichuang Information Technology Co ltd
Advanced Technology Research Institute of Beijing Institute of Technology
Original Assignee
Shandong Weichuang Information Technology Co ltd
Advanced Technology Research Institute of Beijing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Weichuang Information Technology Co ltd and Advanced Technology Research Institute of Beijing Institute of Technology
Priority to CN202310120144.XA
Publication of CN115830567A
Status: Pending

Abstract

The invention relates to the technical field of image recognition processing, in particular to a road target fusion perception method and system under low-light conditions. The method comprises: obtaining an original image of a road target under weak light; preprocessing the original image to obtain a preprocessed image; performing marginalization processing on the preprocessed image to obtain an edge feature image; inputting the edge feature image into a Zero-DCE network for illumination enhancement; and, for the enhanced image output by the Zero-DCE network, obtaining the final road target detection result using the improved YoloV4 network. Building on the existing YoloV4 algorithm and the Zero-DCE weak-illumination enhancement algorithm, the detection algorithm provided by the invention achieves good detection performance and high detection speed, and can solve the detection problems caused by weak illumination in night scenes.

Description

Road target fusion sensing method and system under low-light condition
Technical Field
The invention relates to the technical field of image recognition processing, in particular to a road target fusion perception method and system under a low-light condition.
Background
The front-view camera is usually installed behind the front windshield, through which information on various traffic targets, such as vehicles ahead, oncoming vehicles, pedestrians ahead, traffic signs and lane lines, can be acquired. The image information acquired by the front-view camera is recognized and processed by a chip. The boundary between the background of an image and the target is the image edge.
At present, during image acquisition by a front-view camera, noise introduced by the camera's electronic components and circuits contaminates the captured picture and degrades the detection and recognition results. Because ambient illumination conditions are variable, the illumination of targets ahead is uneven, which impairs the recognition performance and application value of the front-view camera and, in severe cases, can lead to traffic accidents.
Traditional low-illumination image enhancement algorithms require intricate mathematical techniques and lengthy derivations; the whole process is complicated and ill-suited to practical application. With the successive emergence of large-scale data sets, low-light image enhancement algorithms based on deep learning have appeared. Such algorithms can enhance images under various illumination conditions, do not depend on paired data, and have strong generalization capability.
Accordingly, aiming at the noise and uneven illumination present in images captured by an automobile's front-view camera, a road target fusion perception method and system under low-light conditions are provided.
Disclosure of Invention
In order to solve the above mentioned problems, the present invention provides a method and a system for sensing road target fusion under low light conditions.
In a first aspect, the present invention provides a method for sensing road target fusion under low light conditions, which adopts the following technical scheme:
a road target fusion perception method under a weak light condition comprises the following steps:
acquiring an original image of a road target under weak light;
carrying out image preprocessing on an original image to obtain a preprocessed image;
performing marginalization processing on the preprocessed image to obtain an edge feature image;
inputting the edge characteristic image into a Zero-DCE network for illumination enhancement;
and aiming at the enhanced image output by the Zero-DCE network, obtaining a final road target detection result by utilizing the improved YoloV4 network.
Further, the image processing, including filtering and denoising, is performed on the original image to obtain a denoised image.
Further, the filtering and denoising of the original image to obtain the denoised image comprises the step of respectively filtering and denoising 3 spatial components of the color image by adopting a wavelet threshold algorithm.
Further, the image processing of the original image further includes performing weighted average graying processing on the denoised image to obtain a grayscale image.
Further, the weighted average graying processing performed on the denoised image to obtain a grayscale image comprises graying the 3 recombined components by the weighted average method.
Further, performing marginalization processing on the preprocessed image to obtain an edge feature image, wherein top-hat conversion is performed on the gray image to obtain a top-hat converted image; and carrying out edge detection and extraction on the top hat transformation image to obtain an edge characteristic image.
Further, the step of obtaining a final road target detection result by using the improved YoloV4 network includes improving the network structure of the YoloV4 algorithm to obtain an improved YoloV4 algorithm suitable for road target detection.
In a second aspect, a system for sensing road target fusion under low light conditions includes:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire an original image of a road target under weak light;
the preprocessing module is configured to carry out image preprocessing on the original image to obtain a preprocessed image;
the edge module is configured to perform marginalization processing on the preprocessed image to obtain an edge feature image;
the enhancement module is configured to input the edge feature image into a Zero-DCE network for illumination enhancement;
and the detection module is configured to obtain a final road target detection result by utilizing the improved YoloV4 network aiming at the enhanced image output by the Zero-DCE network.
In a third aspect, the present invention provides a computer-readable storage medium, wherein a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor of a terminal device and execute the road target fusion perception method under the weak light condition.
In a fourth aspect, the present invention provides a terminal device, comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the road target fusion perception method under the weak light condition.
In summary, the invention has the following beneficial technical effects:
Based on the existing YoloV4 algorithm and the Zero-DCE weak-illumination enhancement algorithm, the detection algorithm provided by the invention achieves good detection performance and high detection speed, and can solve the detection problems caused by weak illumination in night scenes. A more effective feature fusion network is provided, which resolves the detection difficulty caused by insufficient information flow among feature maps of different layers and further improves the detection performance of the YoloV4 algorithm on road targets in low-light environments.
Drawings
Fig. 1 is a schematic diagram of a road target fusion sensing method under a low-light condition in embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Embodiment 1, referring to fig. 1, a method for sensing fusion of road targets under low-light conditions in this embodiment includes:
acquiring an original image of a road target under weak light;
carrying out image preprocessing on an original image to obtain a preprocessed image;
performing marginalization processing on the preprocessed image to obtain an edge feature image;
inputting the edge characteristic image into a Zero-DCE network for illumination enhancement;
For the enhanced image output by the Zero-DCE network, the final road target detection result is obtained using the improved YoloV4 network. Image processing of the original image, including filtering and denoising, yields the denoised image; the filtering and denoising apply a wavelet threshold algorithm separately to the 3 spatial components of the color image. Image processing of the original image further includes weighted average graying of the denoised image to obtain a grayscale image, in which the 3 recombined components are grayed by the weighted average method. Marginalization of the preprocessed image to obtain the edge feature image comprises top-hat transformation of the grayscale image to obtain a top-hat transformed image, followed by edge detection and extraction on the top-hat transformed image to obtain the edge feature image. Obtaining the final road target detection result with the improved YoloV4 network includes improving the network structure of the YoloV4 algorithm to obtain an improved YoloV4 algorithm suitable for road target detection.
The method specifically comprises the following steps:
1. acquiring and inputting an original image G of the image;
2. Perform wavelet soft-threshold filtering and denoising on the original image G to obtain the denoised image G'. The original image G is divided into three spatial components y, u and v, denoted Gy, Gu and Gv, where y is the luminance signal and u and v are the two chrominance signals. Wavelet soft-threshold filtering and denoising are applied to the three components Gy, Gu and Gv of the original image G to obtain new components, and the three new spatial components are recombined to form the denoised image G'.
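As an illustrative sketch of this step (the patent does not specify the wavelet basis, decomposition depth, or threshold rule, so a single-level Haar transform with a fixed soft threshold is assumed here), one spatial component can be denoised as follows:

```python
import numpy as np

def haar2d(x):
    # Single-level 2D Haar decomposition (even-sized input assumed).
    a = (x[0::2] + x[1::2]) / 2.0           # row averages
    d = (x[0::2] - x[1::2]) / 2.0           # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0    # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0    # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0    # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0    # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft_threshold(c, t):
    # Shrink detail coefficients toward zero by t (soft thresholding).
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_component(ch, t):
    # Threshold only the detail sub-bands; keep the approximation intact.
    ll, lh, hl, hh = haar2d(ch)
    return ihaar2d(ll, soft_threshold(lh, t),
                   soft_threshold(hl, t), soft_threshold(hh, t))
```

Applying `denoise_component` to Gy, Gu and Gv and recombining the results gives the denoised image G'; a practical implementation would choose the wavelet and the threshold (e.g. via a universal-threshold rule) to match the camera's noise level.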
3. Perform weighted average graying on the denoised image G' to reduce its dimensionality and obtain the grayscale image g(i, j). The denoised RGB image is grayed and its dimensionality reduced: the red, green and blue components R(i, j), G(i, j) and B(i, j) of the image G' are combined by the weighted average g(i, j) = 0.30R(i, j) + 0.59G(i, j) + 0.11B(i, j) to obtain a reasonable grayscale image g(i, j).
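The weighted average above maps directly to a one-line NumPy operation (the weights 0.30/0.59/0.11 are taken from the formula in the text):

```python
import numpy as np

def to_gray(rgb):
    # g(i,j) = 0.30*R(i,j) + 0.59*G(i,j) + 0.11*B(i,j); rgb has shape (H, W, 3).
    return rgb @ np.array([0.30, 0.59, 0.11])
```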
4. Perform top-hat transformation on the grayscale image g(i, j) to obtain the image W_TH(g). A suitable structuring element b (the template of the top-hat transform) is selected, and the transform is computed as T_hat(f) = f - (f ∘ b), where f is the dimension-reduced grayscale image and f ∘ b denotes the morphological opening of f by b; the transform extracts the new (bright, small-scale) targets, yielding the image W_TH(g).
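A sketch of the white top-hat transform with a flat k × k structuring element, in plain NumPy (a real pipeline would normally call an optimized morphology routine; the element size k is an assumed parameter the patent leaves open):

```python
import numpy as np

def _windows(f, k):
    # Stack all k*k shifted views of f (edge-padded) for flat morphology.
    p = k // 2
    fp = np.pad(f, p, mode='edge')
    H, W = f.shape
    return np.stack([fp[i:i + H, j:j + W] for i in range(k) for j in range(k)])

def erode(f, k=3):
    return _windows(f, k).min(axis=0)

def dilate(f, k=3):
    return _windows(f, k).max(axis=0)

def top_hat(f, k=3):
    # White top-hat W_TH(g) = f - (f opened by b): keeps bright details
    # smaller than the k x k structuring element b.
    return f - dilate(erode(f, k), k)
```

On a flat region the top-hat response is zero; an isolated bright spot smaller than the element survives unchanged, which is exactly the "new target extraction" behaviour described above.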
5. Perform edge detection and extraction on the image W_TH(g) to obtain the edge features. An xoy rectangular coordinate system is established on W_TH(g), with each pixel at coordinates (x, y). A Gaussian filter is first used to smooth W_TH(g), reducing the probability of false detections caused by noise. The gradient strength and direction of each pixel are then computed: with the gradient component along the x axis denoted Gx and that along the y axis denoted Gy, the gradient strength G and direction θ of each pixel are
G = sqrt(Gx^2 + Gy^2), θ = arctan(Gy / Gx).
Finally, spurious responses from edge detection are suppressed and eliminated by non-maximum suppression.
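The gradient computation in this step can be sketched with central differences (Sobel kernels would be the more common choice in a Canny-style detector; this minimal version is for illustration only):

```python
import numpy as np

def gradient(img):
    # Central-difference approximations of Gx and Gy, plus the
    # gradient strength G = sqrt(Gx^2 + Gy^2) and direction theta.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    strength = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    return strength, theta
```

Non-maximum suppression then keeps a pixel only if its strength is a local maximum along the direction theta.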
6. The input image size is set to 416 × 416, and the input image is illumination-enhanced using the Zero-DCE algorithm. Zero-DCE is a low-illumination image enhancement algorithm that takes a low-illumination image as input and estimates a set of high-order curves as output; these curves are then applied as pixel-level adjustments to the dynamic range of the input, yielding the enhanced image.
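The pixel-level curve adjustment at the heart of Zero-DCE is the light-enhancement curve LE(x) = x + α·x·(1 − x), applied iteratively; in the published method a network predicts one per-pixel α map per iteration (8 iterations), whereas the scalar α values below are purely illustrative:

```python
import numpy as np

def enhance(x, alphas):
    # Iterated light-enhancement curve LE(x) = x + a*x*(1 - x).
    # For a in [-1, 1] the mapping keeps normalized pixels inside [0, 1],
    # brightening for a > 0 and darkening for a < 0.
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x
```

Because 0 and 1 are fixed points of the curve, black stays black and saturated pixels stay saturated, while mid-tones are lifted, which is what makes the adjustment safe to iterate.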
7. For the enhanced image output by the Zero-DCE network, the improved YoloV4 network outputs the final road target detection result, including the positions of pedestrian targets, road vehicles and other obstacles in the image to be classified.
For different application scenarios, video captured in real time by a camera can also be used: images to be detected are extracted frame by frame, and each image is cropped or padded so that it scales to 416 × 416 before being fed to the detection algorithm provided by the invention.
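One way to perform the crop-or-pad scaling to 416 × 416 is a letterbox resize; this nearest-neighbor sketch assumes mid-gray (128) padding, since the patent specifies neither the interpolation method nor the pad value:

```python
import numpy as np

def letterbox(img, size=416):
    # Scale the longer side to `size` (nearest-neighbor), center the result,
    # and pad the remainder with mid-gray so the aspect ratio is preserved.
    H, W = img.shape[:2]
    s = size / max(H, W)
    nh, nw = max(1, round(H * s)), max(1, round(W * s))
    rows = np.clip((np.arange(nh) / s).astype(int), 0, H - 1)
    cols = np.clip((np.arange(nw) / s).astype(int), 0, W - 1)
    resized = img[rows][:, cols]
    out = np.full((size, size) + img.shape[2:], 128, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```

Padding rather than stretching preserves object aspect ratios, which generally helps anchor-based detectors such as YoloV4.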
Embodiment 2 this embodiment provides a road target fusion perception system under the low light condition, including:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire an original image of a road target under weak light;
the system comprises a preprocessing module, a storage module and a processing module, wherein the preprocessing module is configured to carry out image preprocessing on an original image to obtain a preprocessed image;
the edge module is configured to perform marginalization processing on the preprocessed image to obtain an edge feature image;
the enhancement module is configured to input the edge feature image into a Zero-DCE network for illumination enhancement;
and the detection module is configured to obtain a final road target detection result by utilizing the improved YoloV4 network aiming at the enhanced image output by the Zero-DCE network.
A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to execute a method for road target fusion awareness in low light conditions.
A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the road target fusion perception method under the weak light condition.
The above are all preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, so: all equivalent changes made according to the structure, shape and principle of the invention are covered by the protection scope of the invention.

Claims (10)

1. A road target fusion perception method under the condition of weak light is characterized by comprising the following steps:
acquiring an original image of a road target under weak light;
carrying out image preprocessing on an original image to obtain a preprocessed image;
performing marginalization processing on the preprocessed image to obtain an edge feature image;
inputting the edge characteristic image into a Zero-DCE network for illumination enhancement;
aiming at the enhanced image output by the Zero-DCE network, a final road target detection result is obtained by utilizing the improved YoloV4 network.
2. The method for sensing fusion of road targets under the weak light condition as claimed in claim 1, wherein the image processing including filtering and denoising is performed on the original image to obtain a denoised image.
3. The method for road target fusion perception according to claim 2, wherein the filtering and denoising of the original image to obtain a denoised image includes filtering and denoising 3 spatial components of the color image by using a wavelet threshold algorithm.
4. The method for fusion perception of road targets under weak light conditions as claimed in claim 3, wherein the image processing of the original image further includes performing weighted average graying processing of the denoised image to obtain a grayscale image.
5. The method as claimed in claim 4, wherein the graying processing of the de-noised image by weighted average includes graying the 3 reconstructed components by weighted average.
6. The method for fusion perception of road objects under the weak light condition of claim 5, wherein the marginalizing of the preprocessed image to obtain the edge feature image includes top-hat transforming the gray image to obtain a top-hat transformed image; and carrying out edge detection and extraction on the top hat transformation image to obtain an edge characteristic image.
7. The method as claimed in claim 6, wherein the obtaining of the final road target detection result by using the improved YoloV4 network comprises improving a network structure of the YoloV4 algorithm to obtain an improved YoloV4 algorithm suitable for road target detection.
8. A road target fusion perception system under the condition of weak light is characterized by comprising:
the image acquisition module is configured to acquire an original image of the road target under weak light;
the preprocessing module is configured to carry out image preprocessing on the original image to obtain a preprocessed image;
the edge module is configured to perform marginalization processing on the preprocessed image to obtain an edge feature image;
the enhancement module is configured to input the edge feature image into a Zero-DCE network for illumination enhancement;
and the detection module is configured to obtain a final road target detection result by utilizing the improved YoloV4 network aiming at the enhanced image output by the Zero-DCE network.
9. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor of a terminal device and to execute a method for road object fusion awareness in low light conditions according to claim 1.
10. A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform a method for road target fusion awareness in low light conditions according to claim 1.
CN202310120144.XA, filed 2023-02-16 (priority 2023-02-16): Road target fusion sensing method and system under low-light condition. Status: Pending. Publication: CN115830567A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310120144.XA | 2023-02-16 | 2023-02-16 | Road target fusion sensing method and system under low-light condition (CN115830567A)


Publications (1)

Publication Number | Publication Date
CN115830567A | 2023-03-21

Family

ID=85521511

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310120144.XA | Road target fusion sensing method and system under low-light condition (Pending; CN115830567A) | 2023-02-16 | 2023-02-16

Country Status (1)

Country | Link
CN | CN115830567A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN117893880A (en)* | 2024-01-25 | 2024-04-16 | 西南科技大学 | An object detection method based on adaptive feature learning in low-light images
CN119559086A (en)* | 2025-01-24 | 2025-03-04 | 北京开拓航宇导控科技有限公司 | A method and system for detecting small targets in a dark environment

Citations (8)

Publication number | Priority date | Publication date | Assignee | Title
CN102184552A (en)* | 2011-05-11 | 2011-09-14 | 上海理工大学 | Moving target detecting method based on differential fusion and image edge information
CN111833367A (en)* | 2020-06-24 | 2020-10-27 | 中国第一汽车股份有限公司 | An image processing method, device, vehicle and storage medium
CN112200742A (en)* | 2020-10-10 | 2021-01-08 | 北京享云智汇科技有限公司 | Filtering and denoising method applied to edge detection
CN113643345A (en)* | 2021-07-27 | 2021-11-12 | 数量级(上海)信息技术有限公司 | Multi-view road intelligent identification method based on double-light fusion
CN114241340A (en)* | 2021-12-16 | 2022-03-25 | 北京工业大学 | Image target detection method and system based on double-path depth residual error network
CN114708250A (en)* | 2022-04-24 | 2022-07-05 | 上海人工智能创新中心 | Image processing method, device and storage medium
CN115019340A (en)* | 2022-05-11 | 2022-09-06 | 成都理工大学 | A nighttime pedestrian detection algorithm based on deep learning
CN115465182A (en)* | 2022-08-30 | 2022-12-13 | 广州广日电气设备有限公司 | Automatic high and low beam switching method and system based on night target detection

Cited By (3)

Publication number | Priority date | Publication date | Assignee | Title
CN117893880A (en)* | 2024-01-25 | 2024-04-16 | 西南科技大学 | An object detection method based on adaptive feature learning in low-light images
CN119559086A (en)* | 2025-01-24 | 2025-03-04 | 北京开拓航宇导控科技有限公司 | A method and system for detecting small targets in a dark environment
CN119559086B (en)* | 2025-01-24 | 2025-05-30 | 北京开拓航宇导控科技有限公司 | Dim light environment small target detection method and system

Similar Documents

Publication | Title
CN109785291B (en) | Lane line self-adaptive detection method
CN113298810B (en) | Road line detection method combining image enhancement and depth convolution neural network
CN103605977B (en) | Extracting method of lane line and device thereof
CN110782477A (en) | Moving target rapid detection method based on sequence image and computer vision system
CN103578083B (en) | Single image defogging method based on associating average drifting
CN105354865A (en) | Automatic cloud detection method and system for multi-spectral remote sensing satellite image
CN115830567A (en) | Road target fusion sensing method and system under low-light condition
CN102665034A (en) | Night effect removal method for camera-collected video
CN103400150A (en) | Method and device for road edge recognition based on mobile platform
CN107066952A (en) | A kind of method for detecting lane lines
CN112200742A (en) | Filtering and denoising method applied to edge detection
CN113723432B (en) | Intelligent identification and positioning tracking method and system based on deep learning
CN104933728A (en) | Mixed motion target detection method
CN111311503A (en) | A low-brightness image enhancement system at night
CN116757949A (en) | Atmosphere-ocean scattering environment degradation image restoration method and system
CN111027564A (en) | Low-illumination imaging license plate recognition method and device based on deep learning integration
CN110197465B (en) | Foggy image enhancement method
CN106485663A (en) | A kind of lane line image enchancing method and system
CN111833384B (en) | Method and device for rapidly registering visible light and infrared images
CN111028184B (en) | Image enhancement method and system
CN110633705A (en) | Low-illumination imaging license plate recognition method and device
CN112528994A (en) | Free-angle license plate detection method, license plate identification method and identification system
CN106780541A (en) | A kind of improved background subtraction method
CN115546074A (en) | Image target detection method and related equipment
CN115115546A (en) | Image processing method, system, electronic equipment and readable storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2023-03-21

