CN108983219B - Fusion method and system for image information and radar information of traffic scene - Google Patents

Fusion method and system for image information and radar information of traffic scene

Info

Publication number
CN108983219B
Authority
CN
China
Prior art keywords
information
image information
radar
target
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810939902.XA
Other languages
Chinese (zh)
Other versions
CN108983219A (en)
Inventor
余贵珍
张思佳
王章宇
张艳飞
吴新开
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tage Idriver Technology Co Ltd
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201810939902.XA
Publication of CN108983219A
Application granted
Publication of CN108983219B
Legal status: Active
Anticipated expiration

Abstract


The invention provides a method for fusing image information and radar information of a traffic scene, comprising the following steps: preprocessing the image information in front of a vehicle obtained by a camera; extracting feature information from the image information and comparing it with pre-stored traffic scene information; classifying the current traffic scene according to the result of the comparison, executing the corresponding fusion algorithm according to a preset fusion method of image information and radar information adapted to the current traffic scene category, and outputting the result of the fusion algorithm. The invention judges the scene from the collected image information and switches between different fusion algorithms, which uses resources effectively and improves scene adaptability; it makes full use of the redundancy and complementarity between data from different sensors to improve the robustness and reliability of the system; and, in image information processing, it adopts a deep learning algorithm, giving better real-time performance and more accurate target recognition.


Description

Fusion method and system for image information and radar information of traffic scene
Technical Field
The invention relates to the field of safe driving, in particular to a fusion method and system of image information and radar information of a traffic scene.
Background
With the continuous improvement of the electrification, intelligence and connectivity of automobiles, Advanced Driver Assistance Systems (ADAS) have become a key research direction for enterprises, universities and research institutes, and environmental perception is the most basic key technology in an ADAS. Accurately acquiring information about effective targets in front of the vehicle provides strong technical support for active safety functions such as adaptive cruise control (ACC) and automatic emergency braking (AEB), and is of great significance for the development of ADAS. Most existing environment perception technologies use a single sensor or simply superimpose data from multiple sensors; such methods cannot meet the intelligent vehicle's requirements for high-precision, all-weather perception and can hardly reflect the detected object comprehensively. Multi-sensor fusion synthesizes the information collected by several sensors, such as vision sensors and radar sensors, into a comprehensive description of the environmental characteristics, and can make full use of the redundancy and complementarity among the sensors' data to obtain the complete and sufficient information an intelligent vehicle requires. The camera and the millimeter wave radar complement each other well, combining rich information content with good weather adaptability, and have become the two most widely used sensors in the field of information fusion.
In early research, multi-sensor information fusion simply superimposed multi-sensor information at the data level; with the continuous progress of radar information processing and vision algorithms, fusion methods based on features and decisions have attracted more and more attention. In 2012, R. Omar Chavez-Garcia et al. proposed using radar and monocular vision for forward target perception, taking the raw radar and camera data as input to detect moving objects and then fusing the information of these moving objects with Dempster-Shafer evidence theory. Wu et al. proposed obtaining a target contour from three-dimensional depth information, finding the point closest to the vision sensor, fusing that point with the radar detection to obtain a fused closest point, and then determining the fused contour. Although the detection accuracy of these methods is better than that of a single sensor, the images must be traversed with traditional image processing algorithms, so the visual computation load is heavy and the real-time requirement is difficult to meet. Liu et al. proposed classifying road vehicles with an SVM-based classifier; Chavez-Garcia et al. and Vu et al. used HOG features and Boosting classifiers to classify vehicles. These machine learning approaches improve detection accuracy while reducing the computational load, but they rely heavily on training data sets from the experimental environment. In 2015, Alencar et al. used a millimeter wave radar and a camera for data fusion to recognize and classify multiple road targets, analysing the camera and radar data with k-means clustering, a support vector machine and kernel principal component analysis; the accuracy is high, but the method is only suitable for recognizing close-range targets in good weather.
In summary, most existing research only addresses detection in a specific traffic scene and does not consider how to exploit the advantages of different sensors in different traffic scenes; the fusion methods adapt poorly to changes in the application scene, and little attention is paid to the construction of the fusion structure and the optimization of overall performance.
Disclosure of Invention
The invention aims to provide a method and a system for fusing image information and radar information of a traffic scene, which have the advantages of high accuracy, high processing speed and strong adaptability.
In order to achieve the above object, one technical solution of the present invention is to provide a method for fusing image information and radar information of a traffic scene, comprising the following steps: preprocessing the image information in front of the vehicle obtained by the camera; extracting feature information from the image information and comparing it with pre-stored traffic scene information; and classifying the current traffic scene according to the result of the comparison, executing the corresponding fusion algorithm according to a preset fusion method of image information and radar information adapted to the current traffic scene category, and outputting the result of the fusion algorithm.
Further, before the step of preprocessing the image information in front of the vehicle obtained by the camera, the method further comprises the following steps: and classifying traffic scenes by using a deep learning method, and establishing a corresponding fusion method of image information and radar information aiming at different classifications.
Further, before the step of preprocessing the image information in front of the vehicle obtained by the camera, the method further comprises the following steps: and installing the two sensors on the vehicle according to the installation criteria of the camera and the millimeter wave radar, and respectively calibrating and jointly calibrating the two sensors to obtain the related parameters.
Furthermore, the camera is installed at a position 1-3 cm below the base of the rearview mirror inside the vehicle, and the millimeter wave radar is installed at the center of the license plate at the front end of the vehicle.
Further, the step of executing a corresponding fusion algorithm according to a preset fusion method of image information and radar information adapted to the current traffic scene specifically includes: and processing the acquired image information and radar information according to the scene classification result, including matrix conversion between coordinate systems, effective target screening, target identification, monocular distance measurement and the like, and simultaneously executing a corresponding fusion algorithm.
Further, the fusion methods of image information and radar information comprise: a fusion method mainly based on radar information, a fusion method mainly based on image information, and a fusion method in which radar information and image information decide jointly.
Specifically, the fusion method mainly based on radar information comprises the following steps: the position information of the effective targets obtained by the radar is converted into the pixel coordinate system of the image through projection transformation, forming regions of interest in the image; target recognition is performed with a deep learning method; the effective target information is processed with an information fusion algorithm; and the position, speed, type and other information of the fused targets is output.
Specifically, the fusion method mainly based on image information comprises the following steps: starting from the image information, target recognition is performed with a deep learning algorithm; the image information of a target is matched against the radar information of the target; if they match, the image information and the radar information of the target are fused and the position, speed, type and other information of the fused target is output; if they do not match, the radar information is rejected and the position, speed, type and other information of the target is output based on the image information alone.
Specifically, the fusion method of joint decision comprises the following steps: a target screening algorithm is applied to complete the preliminary selection of radar targets and output effective target information; a deep learning algorithm is applied to complete target recognition in the images returned by the camera, and a monocular distance-measurement algorithm is applied to obtain the lateral and longitudinal distance position information of the targets; observation matching between the radar information and the image information is completed with the Mahalanobis distance; and after matching, data fusion is completed with a joint probability density algorithm, and the position, speed, type and other information of the targets is output.
In order to achieve the above object, another technical solution of the present invention is to provide a system for fusing image information and radar information of a traffic scene, comprising a processor, a memory and a communication circuit, the processor being coupled to the memory and the communication circuit; the memory stores communication data, image information, traffic scene classification information and the working program data of the processor, the communication circuit is used for information transmission, and the processor, when operating, executes the program data so as to realize any one of the above fusion methods of image information and radar information of a traffic scene.
The invention has the following beneficial effects:
(1) the invention provides a fusion method and a fusion system of image information and radar information of a traffic scene, which are used for judging the scene according to the acquired image information and switching among different fusion algorithms, thereby effectively utilizing resources and improving the scene adaptability.
(2) The invention provides a fusion method and a fusion system of image information and radar information of a traffic scene, which fully utilize the redundancy and complementary characteristics among different sensor data and improve the robustness and reliability of the system.
(3) The invention provides a fusion method and a fusion system of image information and radar information of a traffic scene, which adopt a deep learning algorithm in the aspect of image information processing, and have higher real-time performance and more accurate target identification compared with the traditional image processing algorithm.
Drawings
In order to more clearly illustrate the embodiments or technical measures of the invention, reference will now be made briefly to the attached drawings which are needed in the description of the embodiments, it being apparent that the drawings in the description below are only some embodiments of the invention and that, to a person skilled in the art, other drawings can be derived therefrom without inventive effort, wherein:
FIG. 1 is a schematic diagram of a schematic framework of an embodiment of a method and system for fusing image information and radar information of a traffic scene according to the present invention;
FIG. 2 is a flowchart of a fusion algorithm based on millimeter wave radar information in an embodiment of a fusion method of image information and radar information of a traffic scene according to the present invention;
FIG. 3 is a flowchart of a fusion algorithm based on image information in an embodiment of a fusion method of image information and radar information of a traffic scene according to the present invention;
fig. 4 is a flowchart of a fusion algorithm for making a decision by a millimeter wave radar and a camera in the embodiment of the fusion method for image information and radar information of a traffic scene.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic diagram of the principle framework of an embodiment of the method and system for fusing image information and radar information of a traffic scene according to the present invention. First, typical traffic scenes extracted from everyday driving, such as a straight road on a sunny day, a straight road on a rainy day, a ramp on a sunny day, a curve on a sunny night, a curve on a rainy night, and so on, are classified with a deep learning method, and a corresponding fusion method of image information and radar information is established for each class. The present embodiment mainly includes three fusion methods: a fusion method mainly based on radar information, a fusion method mainly based on image information, and a fusion method in which radar information and image information decide jointly. For the fusion method mainly based on radar information, whose algorithm flowchart is shown in fig. 2, a region of interest (ROI) is preliminarily determined from the detection target information of the millimeter wave radar, projection transformation is then performed, and target classification detection and feature extraction are carried out within the ROI with an image processing algorithm. For the fusion method mainly based on image information, whose algorithm flowchart is shown in fig. 3, a CNN-based target recognition algorithm is established, the relevant information of effective targets in the image is extracted, and this target information is supplemented with the radar information. For the fusion method of joint decision, whose algorithm flowchart is shown in fig. 4, the camera and the radar each make decisions; after space-time joint calibration, observation matching is completed with the Mahalanobis distance, the weight of each sensor is then determined with a joint probability density algorithm, and data fusion is completed, thereby determining the speed, type, position and other information of forward dangerous targets. Selecting the most appropriate fusion method for environment detection in each corresponding scene improves the accuracy and reliability of forward object detection and the adaptability to different scenes.
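As a concrete illustration of this scene-dependent switching, the sketch below shows one way the dispatch could be organized in Python; the scene classes, function names and parameters are illustrative assumptions, not taken from the patent.

```python
from enum import Enum, auto

class SceneClass(Enum):
    ADVERSE_WEATHER_OR_NIGHT = auto()   # haze, heavy rain/snow, poor illumination
    SLOPE_OR_CURVE = auto()             # uphill/downhill road, curve
    NORMAL = auto()                     # clear weather, straight road

def fuse_frame(image, radar_targets, scene_classifier,
               radar_primary_fusion, image_primary_fusion, joint_fusion):
    """Classify the scene from the camera image, then dispatch to the fusion
    routine suited to that scene (illustrative skeleton only)."""
    scene = scene_classifier(image)          # e.g. an SENet-based classifier
    if scene is SceneClass.ADVERSE_WEATHER_OR_NIGHT:
        return radar_primary_fusion(image, radar_targets)
    if scene is SceneClass.SLOPE_OR_CURVE:
        return image_primary_fusion(image, radar_targets)
    return joint_fusion(image, radar_targets)
```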
In a more specific embodiment, the present invention comprises the steps of:
(1) Install the two sensors on the vehicle according to the installation criteria of the camera and the millimeter wave radar, and calibrate the two sensors individually and jointly to obtain the relevant parameters. The millimeter wave radar is mounted at the center of the front end of the vehicle at a height of 35 cm to 65 cm above the ground; its mounting plane should be as perpendicular to the ground as possible and perpendicular to the longitudinal plane of the vehicle body, with the pitch angle and yaw angle close to 0 degrees. The camera is mounted 1-3 cm below the base of the interior rearview mirror, and its pitch angle is adjusted so that, on a straight road with the vehicle body parallel to the road, the lower 2/3 of the picture is road surface. The internal parameters of the camera are calibrated with a checkerboard calibration method, and the two sensors are jointly calibrated by combining the respective position information of the camera and the radar with the angle information of the checkerboard calibration board to obtain the required parameters.
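The intrinsic calibration step can be sketched with OpenCV's checkerboard routines; the board size (9x6 inner corners) and the image folder are illustrative placeholders rather than values specified by the patent.

```python
import glob

import cv2
import numpy as np

# Checkerboard with 9x6 inner corners (placeholder); the same planar
# object-point template (z = 0) is reused for every view of the board.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds the intrinsics (f/dx, f/dy, u0, v0); dist the lens distortion.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("camera matrix:\n", K)
```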
(2) Preprocess the image information in front of the vehicle obtained by the camera, extract feature information from the image information, compare it with pre-stored traffic scene information, and classify the current traffic scene according to the result of the comparison. Information is acquired with the sensor that has the lower sampling frame rate as the reference, so that the acquisition times of the sensors are aligned. The image information collected by the camera is preprocessed by filtering, graying, normalization and the like, and the preprocessed images are fed into an SENet for classification. Compared with a general convolutional neural network, the SENet adopts a new feature recalibration strategy: the importance of each feature channel is learned automatically, and useful features are then promoted and features that are not useful for the current task are suppressed according to this importance. The recalibration mainly comprises three steps. The first is the Squeeze operation: features are compressed along the spatial dimensions, turning each two-dimensional feature channel into a single real number which has, to some extent, a global receptive field, with the output dimension matching the number of input feature channels; it characterizes the global distribution of responses over the feature channels, so that even shallow layers can obtain a global receptive field. The second is the Excitation operation, a mechanism similar to the gates in a recurrent neural network: weights are generated for each feature channel through learned parameters that explicitly model the correlation between feature channels. The last is the Reweight operation: the output weights of the Excitation step are regarded as the importance of each feature channel after feature selection and are multiplied channel by channel onto the previous features, recalibrating the original features along the channel dimension. Before an input image can be tested, a large number of pictures are needed for training to obtain the corresponding network structure; because the core of SENet is the SE module, which can be embedded into almost all existing network structures, the SE module is embedded into the building-block units of ResNet, BN-Inception and Inception-ResNet-v2 during training, the model results are compared, and the best model is kept. The parameters in the network can be adjusted according to the training results until a satisfactory result is obtained, and the final model is output. After a picture is input to the trained model, the network automatically extracts the picture features and completes the scene classification.
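The three recalibration steps can be sketched as a small PyTorch module; the reduction ratio of 16 is a commonly used default and is assumed here rather than taken from the patent.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation module: squeeze (global average pooling),
    excitation (two FC layers with a sigmoid gate), reweight (channel-wise
    rescaling of the original feature map)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))            # squeeze: B x C channel descriptors
        w = self.fc(s).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * w                      # reweight: rescale each channel
```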
(3) Combining the scene classification result, execute the corresponding fusion algorithm according to the preset fusion method of image information and radar information adapted to the current traffic scene category, and output the result of the fusion algorithm. More specifically, the collected image information and radar information are processed, including matrix conversion between coordinate systems, effective target screening, target identification, monocular distance measurement and the like, the corresponding fusion algorithm is executed, and the fusion result is finally output.
Scene one
In severe environments such as haze, rainstorms and snowstorms, or in poorly lit environments such as at night, the performance of the camera is affected and detection reliability decreases; in these cases the multi-sensor fusion method mainly based on radar is adopted. With reference to fig. 2, the fusion method mainly based on radar information comprises the following steps: the position information of the effective targets obtained by the radar is converted into the pixel coordinate system of the image through projection transformation, forming regions of interest in the image; target recognition is performed with a deep learning method; the effective target information is processed with an information fusion algorithm; and the position, speed, type and other information of the fused targets is output.
Effective target screening proceeds as follows. First, the information output by the radar is checked against the detection range of the vehicle-mounted radar, together with technical parameters such as its measurement accuracy and resolution, and unreasonable target information is removed. Second, while the vehicle is travelling the number of nearby targets is relatively small, so many radar channels detect no effective obstacle target and return only the radar's raw signals; for these signals, removal conditions are set according to the definition of each type of radar. At the same time, false signals arise when the echo energy is uneven due to radar vibration, and these are likewise filtered out.
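A minimal sketch of this kind of effective-target screening is shown below; the field names and the range, lateral-offset and echo-power thresholds are illustrative assumptions rather than values given in the patent.

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float       # longitudinal distance
    lateral_m: float     # lateral offset
    power_db: float      # reflected signal strength
    channel_valid: bool  # whether the radar reports this channel as occupied

def screen_targets(targets, max_range_m=150.0, max_lateral_m=10.0,
                   min_power_db=-10.0):
    """Drop targets outside the rated detection range, empty-channel raw
    returns, and weak echoes typical of vibration-induced false alarms."""
    kept = []
    for t in targets:
        if not t.channel_valid:
            continue
        if not (0.0 < t.range_m <= max_range_m and abs(t.lateral_m) <= max_lateral_m):
            continue
        if t.power_db < min_power_db:
            continue
        kept.append(t)
    return kept
```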
The projective transformation involved in the method is the standard pinhole projection from world coordinates to image pixel coordinates:

$$ z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} $$

where (x_w, y_w, z_w) are world-coordinate-system coordinates, (u, v) are image pixel coordinates, (x_c, y_c, z_c) are camera-coordinate-system coordinates, R is the rotation matrix, t is the translation vector, f is the focal length, dx and dy are the physical lengths occupied by one pixel along the x and y directions of the image physical coordinate system, and (u_0, v_0) are the horizontal and vertical pixel offsets between the image center pixel O_1 and the image origin pixel O_0.
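Under the same pinhole model, projecting a world-coordinate point (for example, a radar target transformed into the world frame) into pixel coordinates can be sketched as follows; the parameters are assumed to come from the calibration above, and the function name is illustrative.

```python
import numpy as np

def world_to_pixel(p_world, R, t, f, dx, dy, u0, v0):
    """Project a world-frame point into image pixel coordinates using the
    pinhole model with extrinsics (R, t) and intrinsics (f, dx, dy, u0, v0)."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t   # world -> camera frame
    x_c, y_c, z_c = p_cam
    u = f / dx * (x_c / z_c) + u0                      # camera -> pixel coords
    v = f / dy * (y_c / z_c) + v0
    return u, v
```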
The size of the region of interest used in the method is not fixed: it is inversely proportional to the distance of the vehicle from the millimeter wave radar. The coordinates acquired by the radar are generally the vehicle's centroid coordinates; these are used as the center of the region of interest, and the region of interest is drawn with an adaptive threshold method.
The deep learning algorithm used in the method takes the characteristics of traffic scenes into account: target features are obvious, and targets may occlude one another. A CaffeNet model is selected and fine-tuned according to the recognition results during training.
The information fusion algorithm used in the method takes into account that the confidence of the radar information is high, so a simple weighted-average information fusion algorithm can be adopted, with a high weight given to the radar information and a low weight given to the image information.
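A minimal sketch of such radar-dominant weighted averaging is given below; the 0.8/0.2 weights are illustrative assumptions, not values fixed by the patent.

```python
def weighted_fusion(radar_value, image_value, w_radar=0.8, w_image=0.2):
    """Weighted-average fusion of a scalar state (e.g. longitudinal distance)
    when radar confidence is high; the 0.8/0.2 split is illustrative."""
    return (w_radar * radar_value + w_image * image_value) / (w_radar + w_image)
```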
Scene two
Because the radar detection plane is horizontal and the azimuth angle is small, the detection capability is limited to a certain extent in scenes such as uphill and downhill roads and curves; in these cases the fusion method mainly based on image information is adopted. The method mainly comprises the following steps: starting from the image information, target recognition is performed with a deep learning algorithm; the image information of a target is matched against the radar information of the target; if they match, the image information and the radar information of the target are fused and the position, speed, type and other information of the fused target is output; if they do not match, the radar information is rejected and the position, speed, type and other information of the target is output based on the image information alone.
Scene three
In general, the performance of both radar and camera can be maintained in a better state, and a fusion method of common decision of radar information and image information is adopted. The method specifically comprises the following steps: and (4) finishing primary selection on the radar target by using a target screening algorithm, and outputting effective target information. And completing target identification in the image returned by the camera by using a deep learning algorithm, and acquiring the transverse and longitudinal distance position information of the target by using a monocular distance measuring algorithm. Matching the radar information and the image information by applying the mahalanobis distance to finish the observation value, and specifically, defining Vk as the most likely area of the observation value of the current target:
Figure GDA0001804904070000091
according to the statistical data, when c is 3, the probability of the observed value in the effective area is 99.8%. And after matching is finished, data fusion is finished by using a joint probability density algorithm, and information such as the position, the speed, the type and the like of the target is output.
The more detailed procedure is as follows:
A. Establish the system state equation and observation equation:

$$ x_{i,k} = F_k x_{i,k-1} + v_k, \quad i = 1, 2, 3, \dots $$
$$ z_{ij,k} = H_j x_{i,k} + w_{j,k}, \quad j = 1, 2 $$

where x_{i,k} is the state vector of the i-th target at time k; v_k is Gaussian white noise with mean 0 and covariance matrix E(v_k v_k^T) = Q_k; z_{ij,k} is the observation of the i-th target detected and output by the j-th sensor at time k; H_j is the conversion (observation) matrix; and w_{j,k} is Gaussian white noise, also with mean 0, whose covariance E(w_{j,k} w_{j,k}^T) = R_{j,k} depends on the type of sensor.

B. Predict the state value and the observation of the previous step with Kalman filtering:

$$ x'_{i,k|k-1} = F_k x'_{i,k-1}, \qquad z'_{k|k-1} = H_j x'_{i,k|k-1} $$

The state of this cycle (time k) is updated as:

$$ x'_{ij,k} = x'_{ij,k|k-1} + K_{ij} (z_{ij,k} - z'_{k|k-1}) $$

where x'_{ij,k} is the update of the i-th target according to the observation output by the j-th sensor, and K_{ij} is the Kalman gain matrix of the system.

C. Update the covariance matrices of the predicted value and the observation:

$$ P_{i,k|k-1} = F_k P_{i,k-1} F_k^{T} + Q_k, \qquad S_{ij,k} = H_j P_{i,k|k-1} H_j^{T} + R_{j,k} $$

D. Update the Kalman gain matrix:

$$ K_{ij} = P_{i,k|k-1} H_j^{T} S_{ij,k}^{-1} $$

E. Update the estimated value with a weighted average:

$$ x'_{i,k|k} = \sum_{j} \beta_{ij} \, x'_{ij,k} $$

where β_{ij} is the probability that the observation of the j-th sensor is generated by the i-th target. The state covariance is then updated from the per-sensor covariances weighted by β_{ij}, together with the hypothesis deviations η_{ij}, defined as:

$$ \eta_{ij} = (x'_{ij} - x'_{i,k|k})(x'_{ij} - x'_{i,k|k})^{T} $$

F. Solve for β_{ij} according to Poisson distribution theory, where

$$ \gamma_{ij} = z_{ij,k} - z'_{k|k-1} $$

is the residual vector between the observation and the prediction.
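Steps A-D can be sketched, for a single target and a single sensor, as a standard Kalman predict/update cycle; the probabilistic weighting across sensors in steps E-F is omitted here, and the function names are illustrative.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Step B (prediction): propagate state and covariance one cycle."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Steps B-D (update): innovation, gain, corrected state and covariance."""
    z_pred = H @ x_pred
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```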
The scene-based vision and millimeter wave radar information fusion system and method of the invention mainly comprise three parts: sensor installation and calibration, scene classification, and fusion algorithm selection with result output. The installation and calibration of the camera and the millimeter wave radar are completed by combining the characteristics of the sensors with the checkerboard calibration method; scenes are classified with the best of the ResNet, BN-Inception and Inception-ResNet-v2 models with embedded SE modules; and an appropriate fusion algorithm is selected according to the scene classification result: a fusion algorithm based on radar information, a fusion algorithm based on image information, or a fusion algorithm of joint decision, after which the final fusion result is output.
The invention also provides a system for fusing the image information and the radar information of the traffic scene, which comprises the following steps: a processor, a memory, and communication circuitry, the processor coupling the memory and the communication circuitry; the memory stores communication data information, image information, traffic scene classification information and working program data of the processor, the communication circuit is used for information transmission, and the processor executes the program data when working so as to realize any one of the fusion methods of the image information and the radar information of the traffic scene. For a detailed description of related contents, please refer to the above method section, which is not described herein again.
The invention further provides a device with a storage function, wherein program data are stored on the device, and when the program data are executed by a processor, the method for fusing the image information and the radar information of the traffic scene is implemented.
The device with storage function may be at least one of a server, a floppy disk drive, a hard disk drive, a CD-ROM reader, a magneto-optical disk reader, and the like.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the present specification, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. A fusion method of image information and radar information based on traffic scenes is characterized by comprising the following steps:
preprocessing the image information in front of the vehicle obtained by the camera;
extracting characteristic information in the image information, and comparing and judging the characteristic information with prestored traffic scene information;
classifying the current traffic scene according to the comparison and judgment result,
executing a corresponding fusion algorithm according to a preset fusion method of image information and radar information which is matched with the current traffic scene category, and outputting the result of the fusion algorithm;
the step of executing the corresponding fusion algorithm according to the preset fusion method of the image information and the radar information which are matched with the current traffic scene category specifically comprises the following steps: processing the collected image information and radar information according to the scene classification result, including matrix conversion between coordinate systems, effective target screening, target identification and monocular distance measurement, and simultaneously executing a corresponding fusion algorithm;
the fusion method of the image information and the radar information comprises the following steps: a fusion method mainly based on radar information, a fusion method mainly based on image information and a fusion method for jointly deciding radar information and image information;
the fusion method mainly based on image information comprises the following steps:
starting from image information, the target is identified by applying a deep learning algorithm,
the image information of the target and the radar information of the target are subjected to matching judgment,
if the image information of the target is matched with the radar information of the target, the information of the image information of the target and the radar information of the target is fused, the position, the speed and the type information of the fused target are output,
and if the image information of the target is not matched with the radar information of the target, after the radar information is removed, outputting the position, the speed and the type information of the target only according to the image information.
2. The method for fusing image information and radar information based on traffic scenes according to claim 1, wherein the step of preprocessing the image information in front of the vehicle obtained by the camera further comprises the following steps:
and classifying traffic scenes by using a deep learning method, and establishing a corresponding fusion method of image information and radar information aiming at different classifications.
3. The method for fusing image information and radar information based on traffic scenes according to claim 2, wherein the step of preprocessing the image information in front of the vehicle obtained by the camera further comprises the following steps: and installing the two sensors on the vehicle according to the installation criteria of the camera and the millimeter wave radar, and respectively calibrating and jointly calibrating the two sensors to obtain the related parameters.
4. The fusion method of image information and radar information based on traffic scenes according to claim 3, wherein the camera is installed at a position 1-3 cm under the base of the rearview mirror inside the vehicle, and the millimeter wave radar is installed at the center of the front license plate of the vehicle.
5. The fusion method of image information and radar information based on traffic scene according to claim 1, wherein the fusion method based on radar information comprises the following steps:
converting the position information of the effective target obtained by the radar into a pixel coordinate system of an image through projection transformation to form an interested area in the image,
the target recognition is carried out by a deep learning method,
and processing the effective target information by using an information fusion algorithm, and outputting the position, speed and type information of the fused target.
6. The fusion method of image information and radar information based on traffic scene as claimed in claim 1, wherein the fusion method of radar information and image information decision making together comprises the following steps:
the radar target is primarily selected by applying a target screening algorithm, effective target information is output,
the recognition of the target in the image returned by the camera is finished by applying a deep learning algorithm, the information of the transverse and longitudinal distance position of the target is obtained by applying a monocular distance measuring algorithm,
completing observation-value matching between the radar information and the image information by applying the Mahalanobis distance,
and after matching is finished, data fusion is finished by applying a joint probability density algorithm, and the position, speed and type information of the target is output.
CN201810939902.XA — filed 2018-08-17 — Fusion method and system for image information and radar information of traffic scene — granted as CN108983219B — Active


Publications (2)

CN108983219A (en) — published 2018-12-11
CN108983219B (en) — granted 2020-04-07



Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
TR01 — Transfer of patent right (effective date of registration: 2021-11-23)
Patentee after: BEIJING TAGE IDRIVER TECHNOLOGY CO., LTD., 901, 9th floor, building 2, yard 10, KEGU 1st Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing, 100176
Patentee before: BEIHANG UNIVERSITY, No. 37 Xueyuan Road, Haidian District, Beijing, 100191
