
A multi-eye 3D laser radar imaging detection method and device

Info

Publication number
CN115128634B
Authority
CN
China
Prior art keywords
target
resolution
laser
imaging
dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210812844.0A
Other languages
Chinese (zh)
Other versions
CN115128634A (en)
Inventor
王智勇
栾晓旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202210812844.0A
Publication of CN115128634A
Application granted
Publication of CN115128634B
Legal status: Active (current)
Anticipated expiration

Abstract

Translated from Chinese


The present invention discloses a multi-eye 3D laser radar imaging detection method and device, including: emitting laser light toward a target; controlling a receiving system to receive the light reflected by the target, forming images with different depths of field; generating point cloud data from the different depth-of-field images and extracting features from the point cloud data based on a convolutional neural network to generate a high-resolution three-dimensional image; and performing target detection on the high-resolution three-dimensional image based on an LDA topic model to generate a high-resolution three-dimensional image of the target. The present invention adopts multi-CMOS three-dimensional imaging in which multiple eyes detect data simultaneously, which reduces data-collection time, improves the timeliness of the receiving stage, and provides a basis for clear imaging of high-speed targets.

Description

Multi-eye 3D laser radar imaging detection method and device
Technical Field
The invention relates to the technical field of laser radar imaging detection, in particular to a multi-eye 3D laser radar imaging detection method and device.
Background
Most imaging lidars currently adopt either mechanical scanning or a flash/snapshot architecture. Mechanical scanning (linear-array or point-by-point) trades time for space to obtain high resolution. Flash/snapshot systems are divided into direct detection and indirect detection: direct detection is limited by detector scale (mature devices are on the order of 64 x 64 pixels), leaving a gap to high-resolution imaging, while indirect detection is usually based on CCD-type sensors and can improve resolution, but the imaging is usually modulated and requires multi-frame acquisition and signal resolving, so reliability and timeliness are difficult to guarantee.
To address the reliability and timeliness problems of flash/snapshot imaging lidar, a common practice in target applications is to install two or more imaging lidars and form the final high-resolution laser image through equipment redundancy and comparison of the multi-device images of the target; however, this approach increases the investment required by the application and lacks economy.
Meanwhile, a traditional imaging lidar generally requires tens of slice images to obtain a three-dimensional target image, which is unfavorable for high-speed target detection; processing these tens of slice images also requires substantial computation and increases the cost of the device.
In addition, the conventional image-recognition approach for imaging lidar extracts specific features of connected regions in sample images, such as shape, aspect ratio, or area, feeds these features into a neural network for training, processes test images in the same way, and then uses the trained network to judge the test images. To avoid losing image detail, the step of extracting hand-crafted features is sometimes skipped and all pixels of the image are fed directly into the network. However, the former method cannot guarantee that effective, important features are extracted, while the latter is too cumbersome and introduces a large amount of redundant information.
Disclosure of Invention
To address the problems of precision, reliability, and timeliness of imaging lidar in the prior art, the invention provides a multi-eye 3D lidar imaging detection method and device that can acquire the relative speed and relative distance of a detected target more accurately and more rapidly.
The invention discloses a multi-eye 3D laser radar imaging detection method, which comprises the following steps:
Step 1, emitting laser light toward a target;
Step 2, receiving the emitted light reflected by the target to form images with different depths of field;
Step 3, generating point cloud data from the different depth-of-field images, and performing feature extraction on the point cloud data based on a convolutional neural network to generate a high-resolution three-dimensional image;
Step 4, performing target detection on the high-resolution three-dimensional image based on the LDA topic model to generate a high-resolution three-dimensional image of the target.
As a further improvement of the present invention, the step 1 specifically includes:
emitting pulsed laser light toward the target using a flash/snapshot high-power VCSEL laser;
the pulsed laser beam is expanded in diameter to form the emitted light, which reaches the target position.
As a further improvement of the present invention, the step 2 specifically includes:
receiving multiple beams of emitted light reflected by the target;
adjusting the laser receivers and gated imaging detectors in the receiving system as required to obtain multiple images with different depths of field.
As a further improvement of the invention, the target is a moving target, and target detection is performed on the formed high-resolution three-dimensional image based on the LDA topic model to obtain the relative speed and relative distance of the moving target.
As a further improvement of the invention, during high-resolution three-dimensional imaging, large-depth-of-field ranging is used first to obtain a low-resolution range to the target; on the basis of this initial measurement, small-depth-of-field three-dimensional imaging is used to obtain high-resolution three-dimensional data of the target; meanwhile, the relative speed of the target is estimated from the time derivative of the range over consecutive imaging frames, realizing real-time, high-precision three-dimensional imaging.
The invention also discloses a multi-eye 3D laser radar imaging detection device for implementing the above multi-eye 3D laser radar imaging detection method, comprising:
an emission system for emitting laser light toward a target;
a receiving system for receiving the emitted light reflected by the target to form images with different depths of field;
a data processing system for generating point cloud data from the different depth-of-field images, extracting features from the point cloud data based on a convolutional neural network to generate a high-resolution three-dimensional image, and performing target detection on the high-resolution three-dimensional image based on the LDA topic model to generate a high-resolution three-dimensional image of the target.
As a further improvement of the present invention, the transmitting system includes:
a pulsed laser power supply for controlling the flash/snapshot high-power VCSEL laser;
the flash/snapshot high-power VCSEL laser, for emitting pulsed laser light toward the target under the control of the pulsed laser power supply;
and an emitter for expanding the beam diameter of the pulsed laser light to form the emitted light and deliver it to the target position.
As a further improvement of the present invention, the receiving system includes:
a plurality of laser receivers, adjustable as required, for receiving multiple beams of emitted light reflected by the target;
and a plurality of gated imaging detectors, one per laser receiver, for forming a depth-of-field image from the corresponding received light so as to obtain multiple images with different depths of field.
As a further improvement of the present invention, the data processing system includes:
an image processing module for generating point cloud data from the positions of the laser receivers in the receiving system and the different depth-of-field images, processing the point cloud data with a convolutional neural network to complete feature extraction, and forming a high-resolution three-dimensional image;
and a target detection module for performing target detection on the high-resolution three-dimensional image based on the LDA topic model and generating a high-resolution three-dimensional image of the target.
As a further improvement of the invention, the device also comprises a controller;
The controller is connected with the transmitting system, the receiving system and the data processing system and is used for controlling the transmitting system, the receiving system and the data processing system to work cooperatively.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention adopts a VCSEL plus multi-CMOS three-dimensional imaging technology, which resolves the conflict among high-resolution imaging, reliability, and timeliness in flash/snapshot lidar using the indirect detection method; at the same time, the overall device only adds several sets of optical imaging systems, making it more economical than the traditional solution;
2. The invention adopts a cooperative large/small depth-of-field detection technique: large-depth-of-field ranging is used first for low-resolution ranging of the target, small-depth-of-field three-dimensional imaging is then used on the basis of this initial measurement to obtain high-resolution three-dimensional data of the target, and the relative speed of the target is estimated from the time derivative of the range over consecutive imaging frames, reducing the coarse-measurement steps and thereby realizing real-time, high-precision three-dimensional imaging;
3. The convolutional neural network adopted by the invention has local receptive fields, weight sharing (shared weights), and similar characteristics; compared with other neural networks, the number of parameters to be trained is greatly reduced. In high-speed target detection, using a convolutional neural network to extract target features effectively reduces the delay and result deviation caused by data analysis, and introducing the convolutional neural network in the construction of the high-precision three-dimensional image reduces the amount and complexity of data processing while retaining the effective, important features of the image.
Drawings
FIG. 1 is a flow chart of a multi-eye 3D lidar imaging detection method disclosed in an embodiment of the present invention;
FIG. 2 is a block diagram of a multi-eye 3D lidar imaging detection device according to an embodiment of the present invention;
FIG. 3 is a diagram of an optical signal transmission path of a multi-eye 3D lidar imaging detection device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of physical detection of a multi-eye 3D lidar imaging detection device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is described in further detail below with reference to the attached drawing figures:
as shown in fig. 1, the present invention provides a multi-eye 3D lidar imaging detection method, which includes:
S101, emitting laser light toward a target;
The method specifically comprises the following steps:
1) emitting pulsed laser light or conventional laser light toward the target using a flash/snapshot high-power VCSEL laser;
2) the pulsed or conventional laser light is expanded by the emitter to form the emitted light, which reaches the target position; a footprint-geometry sketch follows.
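For illustration only: the patent states that the emitter expands the pulse beam so the emitted light covers the target, but gives no beam parameters. The Python sketch below computes the illuminated footprint at the target from an assumed exit diameter and divergence; all numbers are placeholders, not values from the patent.

```python
import math

def beam_footprint_m(exit_diameter_m: float, full_divergence_rad: float, range_m: float) -> float:
    """Diameter of the expanded flash-illumination spot at the target.
    Simple geometric model (an assumption; the patent specifies no beam parameters)."""
    return exit_diameter_m + 2.0 * range_m * math.tan(full_divergence_rad / 2.0)

# Example: a 20 mm exit aperture with 10 mrad full divergence at 100 m range.
print(f"footprint ≈ {beam_footprint_m(0.02, 10e-3, 100.0):.2f} m")  # ≈ 1.02 m
```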
S102, controlling a receiving system to receive the emitted light reflected by the target to form images with different depths of field;
The method specifically comprises the following steps:
1) adjusting the receivers as required and receiving multiple beams of emitted light reflected by the target;
2) adjusting the laser receivers and gated imaging detectors in the receiving system as required to obtain multiple images with different depths of field, each gate selecting a different range slice (see the timing sketch below).
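As a rough timing sketch of how a gated imaging detector selects a depth-of-field slice (the gate parameters and numbers are illustrative, not from the patent): the gate opens when the return from the near edge of the slice arrives and closes when the return from the far edge arrives.

```python
C = 299_792_458.0  # speed of light, m/s

def gate_for_depth_slice(near_m: float, far_m: float) -> tuple[float, float]:
    """Gate delay and width (seconds, round-trip time) that make a gated
    detector image only the depth slice [near_m, far_m]."""
    delay_s = 2.0 * near_m / C            # returns from closer than near_m are rejected
    width_s = 2.0 * (far_m - near_m) / C  # keep the gate open until the far edge returns
    return delay_s, width_s

# Example: three detectors imaging three adjacent 10 m slices starting at 90 m.
for i, near in enumerate((90.0, 100.0, 110.0)):
    delay, width = gate_for_depth_slice(near, near + 10.0)
    print(f"detector {i}: gate delay {delay * 1e9:.1f} ns, gate width {width * 1e9:.1f} ns")
```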
S103, generating point cloud data from the different depth-of-field images, and performing feature extraction on the point cloud data based on a convolutional neural network to generate a high-resolution three-dimensional image;
The method specifically comprises the following steps:
generating point cloud data from the positions of the laser receivers in the receiving system and the different depth-of-field images, then processing the point cloud data with a convolutional neural network to complete feature extraction and form a high-resolution three-dimensional image, wherein:
the convolutional neural network has local receptive fields, weight sharing (shared weights), and similar characteristics; compared with other neural networks, the number of parameters to be trained is greatly reduced, and in high-speed target detection, using the convolutional neural network to extract target features effectively reduces the delay and result deviation caused by data analysis.
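The patent does not specify the network architecture. The PyTorch sketch below is only a minimal illustration of convolutional feature extraction with local receptive fields and shared weights, assuming the fused point cloud has first been voxelized into an occupancy grid; the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class VoxelFeatureCNN(nn.Module):
    """Illustrative 3D CNN feature extractor; the architecture is an assumption,
    not the network used in the patent."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # local receptive field, shared weights
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                     # global pooling to a fixed-size descriptor
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (batch, 1, D, H, W) occupancy grid built from the fused point cloud
        return self.features(voxels).flatten(1)          # (batch, 32) feature vector

# Usage: a 64^3 occupancy grid voxelized from the multi-receiver point cloud.
grid = torch.zeros(1, 1, 64, 64, 64)
print(VoxelFeatureCNN()(grid).shape)  # torch.Size([1, 32])
```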
S104, performing target detection on the high-resolution three-dimensional image based on the LDA topic model, acquiring the relative speed and relative distance of the moving target, and generating a high-resolution three-dimensional image of the target;
Wherein,
Taking the class of the target to be detected as a topic, a model of target classes and features is constructed, where the classes include the two categories of target and non-target as well as different orientations and different spectral intensities, and the features are mapped through a Bag-of-Words model.
The Bag-of-words model is described as follows:
① For an image, a distribution θ over object classes is generated from a Dirichlet distribution with hyperparameter α; ② for each region, a target class z_i is obtained by sampling a multinomial distribution with parameter θ; ③ the word w_i is sampled from a distribution with Dirichlet hyperparameter β, where the words are mapped from image features by the Bag-of-Words model. A numerical sketch of this generative process is given below.
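To make the three sampling steps concrete, here is a small numpy sketch of the same generative process; the class count, vocabulary size, and hyperparameter values are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, vocab_size, n_regions = 4, 50, 10          # placeholder sizes
alpha = np.full(n_classes, 0.5)                        # Dirichlet hyperparameter over classes
beta = np.full(vocab_size, 0.1)                        # Dirichlet hyperparameter over words

phi = rng.dirichlet(beta, size=n_classes)              # per-class word distributions
theta = rng.dirichlet(alpha)                           # (1) class distribution for one image

z = rng.choice(n_classes, size=n_regions, p=theta)     # (2) class z_i for each image region
w = np.array([rng.choice(vocab_size, p=phi[zi]) for zi in z])  # (3) visual word w_i per region
print("classes:", z, "\nwords:  ", w)
```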
During the imaging of the high-resolution three-dimensional image, large-depth-of-field ranging is used first for low-resolution ranging of the target; small-depth-of-field three-dimensional imaging is then used on the basis of this initial measurement to obtain high-resolution three-dimensional data of the target; meanwhile, the relative speed of the target is estimated from the time derivative of the range over consecutive imaging frames, which reduces the coarse-measurement steps and realizes real-time, high-precision three-dimensional imaging.
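A minimal sketch of the velocity estimate described above, assuming refined ranges from consecutive small-depth-of-field frames are already available (the timestamps and ranges are placeholders): the relative speed is the time derivative of the range, taken here as a least-squares slope to damp ranging noise.

```python
import numpy as np

# Timestamps (s) and refined ranges (m) from consecutive small-depth-of-field frames;
# the values are placeholders standing in for the device's successive 3D images.
t = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
r = np.array([120.0, 119.1, 118.2, 117.4, 116.5])

# Relative velocity = d(range)/dt, estimated as the least-squares slope over the
# frame window (negative = target approaching).
v_rel = np.polyfit(t, r, deg=1)[0]
print(f"relative distance {r[-1]:.1f} m, relative velocity {v_rel:.1f} m/s")
```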
A traditional flash/snapshot lidar using the indirect detection method cannot effectively balance the conflict among high precision, timeliness, and reliability of the image in high-precision radar imaging, while constructing a high-precision image with a dual-radar arrangement brings considerable economic cost, and the traditional image processing and recognition pipeline either cannot guarantee the extraction of effective, important features or must process a large amount of redundant information. Therefore, the invention adopts a multi-CMOS three-dimensional imaging technique in which multiple eyes detect data simultaneously, which reduces data-collection time, improves the timeliness of the receiving stage, and provides a basis for clear imaging of high-speed targets.
As shown in FIGS. 2-4, the present invention provides a multi-eye 3D lidar imaging detection apparatus for implementing the above multi-eye 3D lidar imaging detection method, which includes a transmitting system 1, a receiving system 2, a data processing system 3, and a controller 4, wherein:
the emitting system 1 is used for emitting laser light toward a target 5; it comprises a pulsed laser power supply 13, a flash/snapshot high-power VCSEL laser 11, and an emitter 12, where the pulsed laser power supply 13 controls the VCSEL laser 11 to emit pulsed laser light toward the target, and the emitter 12 expands the beam diameter of the pulsed laser light to form the emitted light and deliver it to the target position.
The receiving system 2 is used for receiving the emitted light reflected by the target to form different depth-of-field images; it comprises a plurality of laser receivers 21 and a plurality of gated imaging detectors 22 corresponding to them, where the laser receivers 21 are adjusted as required to receive the multiple beams of emitted light reflected by the target, and the gated imaging detectors 22 each form a depth-of-field image from the corresponding light so as to obtain multiple different depth-of-field images. Further, each gated imaging detector may also be a CMOS image sensor.
The data processing system 3 comprises an image processing module 31 and a target detection module 32, where the image processing module 31 generates point cloud data from the positions of the laser receivers in the receiving system and the different depth-of-field images, and processes the point cloud data with a convolutional neural network to complete feature extraction and form a high-resolution three-dimensional image, while the target detection module 32 performs target detection on the high-resolution three-dimensional image based on the LDA topic model and generates a high-resolution three-dimensional image of the target.
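For the point-cloud step of the image processing module, the numpy sketch below back-projects one gated depth-of-field image into 3D points and offsets them by the receiver position so that clouds from all receivers share one coordinate frame; the pinhole intrinsics (fx, fy, cx, cy) and all example values are assumptions, since the patent does not fix a camera model.

```python
import numpy as np

def depth_image_to_points(depth: np.ndarray, fx: float, fy: float,
                          cx: float, cy: float, receiver_pos: np.ndarray) -> np.ndarray:
    """Back-project one gated depth-of-field image into 3D points and shift them by the
    receiver's position (pinhole model is an assumption, not from the patent)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                              # zero depth = no return inside this gate
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1) + receiver_pos

# Example: the same synthetic 4x4 slice seen by two receivers 0.1 m apart.
slice_img = np.full((4, 4), 100.0)
cloud = np.vstack([
    depth_image_to_points(slice_img, 500.0, 500.0, 2.0, 2.0, np.array([0.0, 0.0, 0.0])),
    depth_image_to_points(slice_img, 500.0, 500.0, 2.0, 2.0, np.array([0.1, 0.0, 0.0])),
])
print(cloud.shape)  # (32, 3)
```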
The controller 4 of the present invention is connected to the transmitting system 1, the receiving system 2 and the data processing system 3, and is used for controlling the transmitting system 1, the receiving system 2 and the data processing system 3 to work cooperatively.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

CN202210812844.0A, priority date 2022-07-11, filing date 2022-07-11: A multi-eye 3D laser radar imaging detection method and device (Active, granted as CN115128634B (en))

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210812844.0A | 2022-07-11 | 2022-07-11 | A multi-eye 3D laser radar imaging detection method and device (CN115128634B (en))


Publications (2)

Publication Number | Publication Date
CN115128634A (en) | 2022-09-30
CN115128634B (en) | 2025-03-28

Family

ID=83383716

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210812844.0A | A multi-eye 3D laser radar imaging detection method and device (Active, CN115128634B (en)) | 2022-07-11 | 2022-07-11

Country Status (1)

Country | Link
CN | CN115128634B (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6988660B2 (en) * | 1999-06-07 | 2006-01-24 | Metrologic Instruments, Inc. | Planar laser illumination and imaging (PLIIM) based camera system for producing high-resolution 3-D images of moving 3-D objects
US9041915B2 (en) * | 2008-05-09 | 2015-05-26 | Ball Aerospace & Technologies Corp. | Systems and methods of scene and action capture using imaging system incorporating 3D LIDAR
US11768292B2 (en) * | 2018-03-14 | 2023-09-26 | Uatc, LLC | Three-dimensional object detection
US10739438B2 (en) * | 2018-06-20 | 2020-08-11 | Matthew Paul Harrison | Super-resolution radar for autonomous vehicles
KR102269750B1 (en) * | 2019-08-30 | 2021-06-25 | Soonchunhyang University Industry-Academic Cooperation Foundation | Method for Real-time Object Detection Based on Lidar Sensor and Camera Using CNN
CN111090103B (en) * | 2019-12-25 | 2021-03-02 | Hohai University | Three-dimensional imaging device and method for dynamically and finely detecting underwater small targets

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109214986A (en) * | 2017-07-03 | 2019-01-15 | Baidu (USA) LLC | Generating high-resolution 3-D point clouds from down-sampled low-resolution LIDAR 3-D point clouds and camera images
CN109215067A (en) * | 2017-07-03 | 2019-01-15 | Baidu (USA) LLC | Generating high-resolution 3-D point clouds based on CNN and CRF models

Also Published As

Publication number | Publication date
CN115128634A (en) | 2022-09-30


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
