CN118570765B - Obstacle detection and tracking method, device, computer equipment and storage medium - Google Patents


Info

Publication number: CN118570765B (application CN202410648162.XA)
Authority: CN (China)
Prior art keywords: detection, data, point cloud, obstacle, target
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN118570765A
Inventors: 吴嘉琦, 邓俊坚
Original and current assignee: Shenzhen Agmage Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Priority / filing date: 2024-05-23 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application filed by Shenzhen Agmage Technology Co ltd; priority to CN202410648162.XA
Publication of CN118570765A (2024-08-30), followed by grant and publication of CN118570765B (2025-04-04)


Abstract


The present invention discloses an obstacle detection and tracking method, device, equipment and medium, comprising: calibrating the camera intrinsic parameters and the extrinsic parameters between the lidar and the camera, and collecting image data and point cloud data with the camera and the lidar respectively; performing target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes, removing the point cloud inside the 3d detection frames, and extracting the 3d detection border of the obstacle; performing instance segmentation on the image data with a 2d instance segmentation algorithm to obtain 2d detection results, each comprising a 2d detection frame, a segmentation polygon and a target class; mapping the point cloud data onto the image data according to the extrinsic parameters, and matching the 3d detection borders with the corresponding 2d detection results; and matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result. The present invention improves the accuracy of obstacle detection.

Description

Obstacle detection and tracking method, obstacle detection and tracking device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to a method and apparatus for detecting and tracking an obstacle, a computer device, and a storage medium.
Background
With the development of computer vision technology, intelligent robots, unmanned driving technologies and the like increasingly appear in daily life. Obstacle detection is one of the important research directions in the field of computer vision, with broad application prospects in automatic driving, intelligent transportation, industrial automation, intelligent security and other fields. As artificial intelligence technology continues to develop, obstacle detection systems play a vital role in realizing automated systems with intelligent perception and autonomous decision-making. An important indicator of the degree of intelligence of a robot or unmanned vehicle is its ability to process unknown environmental information. In practical applications, the obstacle-judging capability of a robot or an unmanned automobile is an important index, and in scene obstacle analysis, semantic information and depth information are crucial and indispensable.
In the prior art, obstacles are mainly recognized by semantic segmentation with deep convolutional neural networks. However, as the inventors found while implementing the present method, in dynamic, complex scenes the obstacle-judging process is affected by many factors, such as the variety of object types, frequent occlusion, and large illumination changes, so the prior art is not accurate enough in detecting obstacles.
Disclosure of Invention
The embodiment of the invention provides an obstacle detection tracking method, an obstacle detection tracking device, computer equipment and a storage medium, so as to improve the accuracy of obstacle detection.
In order to solve the above technical problems, an embodiment of the present application provides a method for detecting and tracking an obstacle, including:
calibrating internal parameters of a camera and external parameters between a laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar;
performing target detection on the point cloud data by adopting a 3d target detection algorithm to obtain 3d detection frames and target classes, removing the point cloud inside the 3d detection frames, and extracting the 3d detection border of the obstacle;
Performing instance segmentation on the image data by adopting a 2d instance segmentation algorithm to obtain a 2d detection result, wherein the 2d detection result comprises a 2d detection frame, a segmentation polygon and a target class;
Mapping the point cloud data to the image data according to external parameters, and correspondingly matching the 3d detection frame and the 2d detection result;
and matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result.
Optionally, the performing object detection on the point cloud data by using a 3d object detection algorithm, and obtaining a 3d detection frame and an object class includes:
acquiring sample data, wherein the sample data is a data set acquired by a laser radar and marked;
performing cluster analysis on the labelled bounding boxes in the sample data to obtain prior boxes;
performing data enhancement on the prior boxes, retraining with pre-trained weights, and saving the best weights after multiple iterations to obtain a trained 3d target detection model;
and carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
Optionally, removing the point cloud inside the 3d detection frame and extracting the 3d detection border of the obstacle includes:
performing data preprocessing on the point cloud data to obtain preprocessed data;
performing plane fitting on the preprocessed data to obtain fitting data;
removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
and performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border.
Optionally, performing plane fitting on the preprocessed data to obtain fitting data includes:
dividing the point cloud data, around a center point, into a plurality of sectors at regular radial and azimuthal intervals;
performing region-level ground plane fitting for each sector, and then merging the partial ground points;
and judging, by ground likelihood estimation, whether the segmented ground belongs to the actual ground, obtaining fitting data containing ground point cloud data and non-ground point cloud data.
Optionally, performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border includes:
Setting different radius thresholds for clustering to obtain point cloud clusters of a target contour, and combining different point cloud clusters;
after the clustering is completed, obtaining the outline of the obstacle, adopting convex polygon fitting, and calculating an inclined rectangle according to the vertex of the convex polygon to obtain the center of the obstacle frame;
and adjusting the length, width, height, center and orientation of the obstacle frame to obtain the 3d detection border.
Optionally, mapping the point cloud data to the image data according to external parameters, and correspondingly matching the 3d detection frame and the 2d detection result includes:
removing, from the ground-removed point cloud data, the points in the direction opposite to the camera's viewing direction, and projecting the points of the cropped point cloud data onto the image plane according to the extrinsic parameters between the lidar and the camera;
cropping the point cloud data with the segmentation polygon to obtain the cropped data;
and taking the 3d detection frame in which the cropped data has the largest distribution value as the successfully matched target detection frame.
Optionally, matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result includes:
determining the initial position and velocity of the obstacle according to the detection results of the two consecutive frames, and constructing a state vector and a covariance matrix;
predicting the next state of the obstacle based on the state vector and the covariance matrix as the prediction result;
measuring the position of the obstacle with a sensor as the measurement result;
combining the measurement result with the prediction result using the Kalman filter equations to obtain an updated state estimate and covariance matrix;
and returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix, and executing it repeatedly to obtain the continuous trajectory tracking result of the obstacle.
In order to solve the above technical problem, an embodiment of the present application further provides an obstacle detection tracking device, including:
The data acquisition module is used for calibrating the camera internal parameters and the external parameters between the laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar;
The 3d detection module is used for performing target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes, removing the point cloud inside the 3d detection frames, and extracting the 3d detection border of the obstacle;
the 2d detection module is used for carrying out instance segmentation on the image data by adopting a 2d instance segmentation algorithm to obtain a 2d detection result, wherein the 2d detection result comprises a 2d detection frame, a segmentation polygon and a target class;
The mapping matching module is used for mapping the point cloud data to the image data according to external parameters and correspondingly matching the 3d detection frame and the 2d detection result;
and the dynamic tracking module is used for matching the detection results of two consecutive frames and determining the obstacle tracking result by fusing the predicted and observed values with a Kalman filter.
Optionally, the 3d detection module includes:
the sample acquisition unit is used for acquiring sample data, wherein the sample data is a data set acquired by the laser radar and marked;
the sample clustering unit is used for performing cluster analysis on the labelled bounding boxes in the sample data to obtain prior boxes;
The iterative training unit is used for performing data enhancement on the prior boxes, retraining with pre-trained weights, and saving the best weights after multiple iterations to obtain a trained 3d target detection model;
and the target detection unit is used for carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
Optionally, the 3d detection module further includes:
The preprocessing unit is used for carrying out data preprocessing on the point cloud data to obtain preprocessed data;
The plane fitting unit is used for carrying out plane fitting on the preprocessed data to obtain fitting data;
The data screening unit is used for removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
And the data clustering unit is used for performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border.
Optionally, the plane fitting unit includes:
A segmentation subunit for dividing the point cloud data, around a center point, into a plurality of sectors at regular radial and azimuthal intervals;
A fitting subunit, configured to perform region-level ground plane fitting for each sector and then merge the partial ground points;
And the judging subunit is used for judging whether the divided ground belongs to the actual ground or not through ground likelihood estimation to obtain fitting data containing ground point cloud data and non-ground point cloud data.
Optionally, the data clustering unit includes:
the clustering subunit is used for setting different radius thresholds for clustering to obtain point cloud clusters of the target profile, and combining the different point cloud clusters;
the center calculating subunit is used for obtaining the outline of the obstacle after the clustering is completed, adopting convex polygon fitting, and calculating an inclined rectangle according to the vertex of the convex polygon to obtain the center of the obstacle frame;
and the frame generation subunit is used for adjusting the length, width, height, center and orientation of the obstacle frame to obtain the 3d detection border.
Optionally, the mapping matching module includes:
The projection unit is used for removing, from the ground-removed point cloud data, the points in the direction opposite to the camera's viewing direction, and projecting the points of the cropped point cloud data onto the image plane according to the extrinsic parameters between the lidar and the camera;
The cropping unit is used for cropping the point cloud data with the segmentation polygon to obtain the cropped data;
The matching unit is used for taking the 3d detection frame in which the cropped data has the largest distribution value as the successfully matched target detection frame.
Optionally, the dynamic tracking module includes:
the construction unit is used for determining the initial position and velocity of the obstacle according to the detection results of two consecutive frames, and constructing a state vector and a covariance matrix;
a prediction unit configured to predict a next state of an obstacle as a prediction result based on the state vector and the covariance matrix;
A measuring unit for measuring a position of the obstacle with the sensor as a measurement result;
The updating unit is used for combining the measurement result and the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
and the tracking unit is used for returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix and executing it repeatedly to obtain the continuous trajectory tracking result of the obstacle.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the steps of the above obstacle detection tracking method when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a computer readable storage medium storing a computer program, which when executed by a processor, implements the steps of the obstacle detection tracking method described above.
In the obstacle detection and tracking method, device, computer equipment and storage medium provided by the embodiments of the present invention, the camera intrinsic parameters and the extrinsic parameters between the lidar and the camera are calibrated, and image data and point cloud data are collected with the camera and the lidar respectively. Target detection is performed on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes; the point cloud inside the 3d detection frames is removed, and the 3d detection border of the obstacle is extracted. Instance segmentation is performed on the image data with a 2d instance segmentation algorithm to obtain 2d detection results, each comprising a 2d detection frame, a segmentation polygon and a target class. The point cloud data is mapped onto the image data according to the extrinsic parameters, and the 3d detection borders are matched with the corresponding 2d detection results. The detection results of two consecutive frames are matched, and a Kalman filter fuses the predicted and observed values to determine the obstacle tracking result. By collecting different types of data with multiple sensors and performing fused 2d and 3d detection, the accuracy of obstacle detection is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of one embodiment of an obstacle detection tracking method of the present application;
FIG. 2 is a schematic diagram of an embodiment of an obstacle detection tracking device according to the application;
FIG. 3 is a schematic structural diagram of one embodiment of a computer device in accordance with the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms used in the description are for describing particular embodiments only and are not intended to limit the application. The terms "comprising" and "having", and any variations thereof, in the description, the claims and the above description of the drawings are intended to cover non-exclusive inclusion. The terms "first", "second" and the like in the description, the claims or the above drawings are used to distinguish between different objects and not necessarily to describe a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 shows an obstacle detection and tracking method according to an embodiment of the present invention; the method is described, by way of illustration, as applied to the server in fig. 1, in detail as follows:
S201, calibrating camera internal parameters and external parameters between the laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar.
Specifically, the camera and lidar extrinsic calibration proceeds as follows: a checkerboard is made, and the calibration tool of the ROS system is used to obtain the camera's intrinsic matrix K and distortion coefficients D; a calibration board is then made and used as a target observed by both the camera and the lidar, and the relative pose between the two coordinate systems is estimated by checking the target's position error in each coordinate system. A minimal calibration sketch follows.
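As an illustration of the intrinsic-calibration step, the following minimal sketch uses OpenCV's checkerboard routines (the ROS camera_calibration tool wraps the same primitives). The board geometry, square size, and the calib/*.png image paths are assumptions for the example, not values from the patent.

```python
# Minimal intrinsic-calibration sketch with OpenCV; board size, square size
# and image paths are illustrative assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)     # inner corners per row/column (assumed board geometry)
SQUARE = 0.025     # checkerboard square edge in metres (assumed)

# 3d object points of the board corners in the board's own frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):   # assumes checkerboard views exist here
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]             # (width, height)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 intrinsic matrix, D the distortion coefficients.
rms, K, D, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
```

The lidar-camera extrinsics are then estimated separately by observing the shared calibration target from both sensors and minimizing the pose error between the two coordinate frames.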
ROS (Robot Operating System) is a software architecture designed specifically for robot software development. It is an open-source meta operating system that provides operating-system-like services, including hardware abstraction, low-level driver management, implementations of commonly used functionality, inter-process message passing, and package management. It also provides the tools and libraries needed to obtain, build, write, and run code across multiple machines.
Point cloud data refers to a set of vectors in a three-dimensional coordinate system. The scan data is recorded in the form of points, each of which carries three-dimensional coordinates; some points additionally carry color information (RGB) or reflectance intensity information. The color information is typically obtained by capturing a color image with a camera and assigning the RGB value of the pixel at the corresponding position to the corresponding point in the point cloud. The intensity information is the echo strength collected by the receiving device of the laser scanner; it depends on the target's surface material, roughness and incidence angle, as well as the instrument's emission energy and laser wavelength. A minimal illustration of this layout follows.
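For concreteness, a point cloud of this kind is commonly handled as an (N, 4) array of x, y, z and intensity per point; the sample values below are illustrative only.

```python
# A lidar point cloud as an (N, 4) array: x, y, z in metres plus intensity.
import numpy as np

cloud = np.array([
    [1.20, -0.35, 0.02, 17.0],   # x, y, z, reflectance intensity
    [1.22, -0.33, 0.03, 21.0],
], dtype=np.float32)

xyz = cloud[:, :3]           # geometric positions
intensity = cloud[:, 3]      # echo strength from the scanner's receiver
```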
S202, performing target detection on the point cloud data by adopting a 3d target detection algorithm to obtain 3d detection frames and target classes, removing the point cloud inside the 3d detection frames, and extracting the 3d detection border of the obstacle.
In a specific optional embodiment, in step S202, performing target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes includes:
acquiring sample data, wherein the sample data is a data set acquired by a laser radar and marked;
performing cluster analysis on the labelled bounding boxes in the sample data to obtain prior boxes (illustrated in the sketch after this list);
performing data enhancement on the prior boxes, retraining with pre-trained weights, and saving the best weights after multiple iterations to obtain a trained 3d target detection model;
and carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
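The prior-box step above can be sketched as k-means clustering over the labelled box dimensions; the patent does not fix the algorithm's details, so the distance metric, number of priors k, and iteration count below are assumptions.

```python
# Hedged sketch: derive prior (anchor) box sizes by k-means over the
# length/width/height of the labelled boxes in the lidar dataset.
import numpy as np

def kmeans_priors(box_dims: np.ndarray, k: int = 6, iters: int = 100,
                  seed: int = 0) -> np.ndarray:
    """box_dims: (N, 3) array of labelled box length, width, height."""
    rng = np.random.default_rng(seed)
    centers = box_dims[rng.choice(len(box_dims), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every labelled box to its nearest prior.
        dist = np.linalg.norm(box_dims[:, None, :] - centers[None], axis=-1)
        assign = dist.argmin(axis=1)
        # Move each prior to the mean of its assigned boxes.
        for j in range(k):
            if (assign == j).any():
                centers[j] = box_dims[assign == j].mean(axis=0)
    return centers   # k prior box sizes used to initialise the 3d detector
```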
In a specific optional embodiment, in step S202, removing the point cloud inside the 3d detection frame and extracting the 3d detection border of the obstacle includes:
carrying out data preprocessing on the point cloud data to obtain preprocessed data;
performing plane fitting on the preprocessed data to obtain fitting data;
Removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
and performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border.
In a specific optional embodiment, performing plane fitting on the preprocessed data to obtain fitting data includes:
dividing the point cloud data, around the center point, into a plurality of sectors at regular radial and azimuthal intervals (see the sketch after this list);
performing region-level ground plane fitting for each sector, and then merging the partial ground points;
judging whether the divided ground belongs to the actual ground or not through ground likelihood estimation, and obtaining fitting data containing ground point cloud data and non-ground point cloud data.
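A minimal sketch of this sector-based ground fitting is given below: points are binned by range and azimuth around the sensor origin, a least-squares plane is fitted per sector, and a simple ground-likelihood test keeps near-horizontal planes. The bin counts, tilt threshold, and height tolerance are assumed tuning values, not numbers from the patent.

```python
# Sector split and per-sector plane fit for ground segmentation (sketch).
import numpy as np

def sector_ids(xyz, n_rings=4, n_azimuth=16, max_range=50.0):
    """Assign each point a sector id from its range ring and azimuth bin."""
    r = np.hypot(xyz[:, 0], xyz[:, 1])
    az = np.arctan2(xyz[:, 1], xyz[:, 0])                       # [-pi, pi)
    ring = np.clip((r / max_range * n_rings).astype(int), 0, n_rings - 1)
    sect = ((az + np.pi) / (2 * np.pi) * n_azimuth).astype(int) % n_azimuth
    return ring * n_azimuth + sect

def fit_plane(pts):
    """Least-squares plane n.x + d = 0 through pts via SVD."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]
    return n, -n.dot(c)

def is_ground(n, d, max_tilt_deg=15.0, sensor_height=0.0, tol=0.3):
    """Assumed ground-likelihood test: near-vertical normal, plausible height."""
    tilt = np.degrees(np.arccos(abs(n[2])))
    height = -d / n[2] if abs(n[2]) > 1e-6 else np.inf
    return tilt < max_tilt_deg and abs(height - sensor_height) < tol
```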
In a specific optional implementation, performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border includes:
Setting different radius thresholds for clustering to obtain point cloud clusters of a target contour, and combining different point cloud clusters;
after the clustering is completed, obtaining the outline of the obstacle, adopting convex polygon fitting, and calculating an inclined rectangle according to the vertex of the convex polygon to obtain the center of the obstacle frame;
and adjusting the length, width, height, center and orientation of the obstacle frame to obtain the 3d detection border (see the sketch after this list).
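As a sketch of this step, Euclidean clustering can be approximated with DBSCAN (a close stand-in for PCL's EuclideanClusterExtraction), and the inclined rectangle obtained from the cluster's convex hull with OpenCV's minAreaRect; the radius eps and minimum cluster size are assumed values.

```python
# Cluster the non-ground points and fit an oriented box per cluster (sketch).
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

def cluster_boxes(xyz, eps=0.5, min_pts=10):
    # DBSCAN labels noise points -1; they play the role of discrete points.
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(xyz)
    boxes = []
    for lbl in set(labels) - {-1}:
        pts = xyz[labels == lbl]
        # Convex hull of the cluster in the ground plane, then the minimum-area
        # (inclined) rectangle through its vertices.
        hull = cv2.convexHull(pts[:, :2].astype(np.float32))
        (cx, cy), (w, l), yaw = cv2.minAreaRect(hull)   # w/l order as OpenCV returns it
        z0, z1 = pts[:, 2].min(), pts[:, 2].max()
        boxes.append({"center": (cx, cy, (z0 + z1) / 2),
                      "size": (l, w, z1 - z0),
                      "yaw_deg": yaw})
    return boxes   # length/width/height, center and orientation per obstacle
```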
And S203, carrying out instance segmentation on the image data by adopting a 2d instance segmentation algorithm to obtain a 2d detection result, wherein the 2d detection result comprises a 2d detection frame, a segmentation polygon and a target class.
Specifically, a dataset is made from the data acquired by the camera in use.
Data enhancement is then performed; the model is retrained with the official pre-trained weights, and the best weights after multiple iterations are saved to obtain a trained segmentation model, which is then used to perform instance segmentation on the image data.
The segmentation polygon is a polygon formed by line segments and is used to segment the image; the 2d detection frame is the overall bounding outline of the object in the 2d image plane; and the target class is the category of the obstacle. A minimal inference sketch follows.
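Since the patent does not name the segmentation model, the sketch below uses torchvision's pretrained Mask R-CNN as a stand-in to produce the 2d detection frames, target classes, and masks from which segmentation polygons can be traced; the score threshold and file path are assumptions.

```python
# 2d instance segmentation with a pretrained Mask R-CNN (stand-in model).
import torch
import torchvision
import numpy as np
import cv2
from PIL import Image
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("frame.png").convert("RGB"))
with torch.no_grad():
    out = model([img])[0]

keep = out["scores"] > 0.5            # assumed confidence threshold
boxes2d = out["boxes"][keep]          # 2d detection frames (x1, y1, x2, y2)
classes = out["labels"][keep]         # target classes
masks = (out["masks"][keep, 0] > 0.5).numpy().astype(np.uint8)

# Trace each mask's outer contour to obtain the segmentation polygon.
polygons = []
for m in masks:
    contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        polygons.append(max(contours, key=cv2.contourArea))
```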
S204, mapping the point cloud data to image data according to the external parameters, and correspondingly matching the 3d detection frame and the 2d detection result.
In a specific optional embodiment, in step S204, mapping the point cloud data to the image data according to the external parameters and correspondingly matching the 3d detection border and the 2d detection result includes:
removing, from the ground-removed point cloud data, the points in the direction opposite to the camera's viewing direction, and projecting the points of the cropped point cloud data onto the image plane according to the extrinsic parameters between the lidar and the camera;
cropping the point cloud data with the segmentation polygon to obtain the cropped data;
and taking the 3d detection frame in which the cropped data has the largest distribution value as the successfully matched target detection frame (see the sketch after this list).
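A sketch of the projection and matching logic follows: points are moved into the camera frame with the extrinsics (R, t), projected with the intrinsic matrix K, tested against the segmentation polygon, and the 3d frame owning the most in-polygon points wins. All variable names are assumptions for the example.

```python
# Project lidar points into the image and match 3d frames to a 2d polygon.
import numpy as np
import cv2

def project(xyz, R, t, K):
    """Lidar -> camera -> pixel coordinates; drops points behind the camera."""
    cam = xyz @ R.T + t
    front = cam[:, 2] > 0                 # opposite of camera view removed
    uvw = cam[front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, front                      # caller filters per-point ids by `front`

def match_box(uv, box_ids, polygon):
    """box_ids: 3d-frame id per projected point; polygon: (M, 2) float array.
    Returns the id of the 3d detection frame with the most in-polygon points."""
    poly = polygon.astype(np.float32)
    inside = np.array([
        cv2.pointPolygonTest(poly, (float(u), float(v)), False) >= 0
        for u, v in uv])
    ids, counts = np.unique(box_ids[inside], return_counts=True)
    return ids[counts.argmax()] if len(ids) else None
```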
S205, matching the detection results of two consecutive frames, and determining the obstacle tracking result by fusing the predicted and observed values with a Kalman filter.
In a specific optional embodiment, in step S205, matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result includes:
determining the initial position and velocity of the obstacle according to the detection results of the two consecutive frames, and constructing a state vector and a covariance matrix;
Predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result;
measuring the position of the obstacle by using a sensor as a measurement result;
Combining the measurement result and the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
And returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix and executing it repeatedly to obtain the continuous trajectory tracking result of the obstacle (a Kalman filter sketch follows).
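The tracking loop above corresponds to a standard predict/update Kalman cycle; a minimal constant-velocity sketch over the obstacle's (x, y) center is given below, with the time step and the noise covariances Q, R as assumed tuning values.

```python
# Constant-velocity Kalman filter for one tracked obstacle (sketch).
import numpy as np

class ObstacleKF:
    def __init__(self, xy0, v0, dt=0.1):
        self.x = np.array([*xy0, *v0], dtype=float)   # state: x, y, vx, vy
        self.P = np.eye(4)                            # covariance matrix
        self.F = np.eye(4)                            # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                         # only position is measured
        self.Q = 0.01 * np.eye(4)                     # process noise (assumed)
        self.R = 0.10 * np.eye(2)                     # measurement noise (assumed)

    def predict(self):
        """Predict the next state; returns the predicted position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse a measured position z with the prediction."""
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Repeating predict() and update() per frame yields the continuous trajectory of each matched obstacle.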
In this embodiment, the camera intrinsic parameters and the extrinsic parameters between the lidar and the camera are calibrated, and image data and point cloud data are collected with the camera and the lidar respectively. Target detection is performed on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes; the point cloud inside the 3d detection frames is removed, and the 3d detection border of the obstacle is extracted. Instance segmentation is performed on the image data with a 2d instance segmentation algorithm to obtain 2d detection results, each comprising a 2d detection frame, a segmentation polygon and a target class. The point cloud data is mapped onto the image data according to the extrinsic parameters, and the 3d detection borders are matched with the corresponding 2d detection results. The detection results of two consecutive frames are matched, and a Kalman filter fuses the predicted and observed values to determine the obstacle tracking result. By collecting different types of data with multiple sensors and performing fused 2d and 3d detection, the accuracy of obstacle detection is improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not limit the implementation of the embodiments of the present invention.
Fig. 2 shows a schematic block diagram of an obstacle detection tracking device in one-to-one correspondence with the obstacle detection tracking method of the above embodiment. As shown in fig. 2, the obstacle detection tracking device includes a data acquisition module 31, a 3d detection module 32, a 2d detection module 33, a mapping matching module 34, and a dynamic tracking module 35. The functional modules are described in detail as follows:
The data acquisition module 31 is used for calibrating the camera internal parameters and the external parameters between the laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar;
The 3d detection module 32 is configured to perform target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes, remove the point cloud inside the 3d detection frames, and extract the 3d detection border of the obstacle;
The 2d detection module 33 is configured to perform instance segmentation on the image data by using a 2d instance segmentation algorithm to obtain a 2d detection result, where the 2d detection result includes a 2d detection frame, a segmentation polygon, and a target class;
The mapping matching module 34 is configured to map the point cloud data to image data according to the external parameters, and correspondingly match the 3d detection frame and the 2d detection result;
The dynamic tracking module 35 is configured to match the detection results of two consecutive frames and determine the obstacle tracking result by fusing the predicted and observed values with a Kalman filter.
Optionally, the 3d detection module 32 includes:
The sample acquisition unit is used for acquiring sample data, wherein the sample data is a data set acquired by the laser radar and marked;
the sample clustering unit is used for performing cluster analysis on the labelled bounding boxes in the sample data to obtain prior boxes;
The iterative training unit is used for carrying out data enhancement on the prior frame, retraining by using the pre-training weight, and storing the best weight after multiple iterations to obtain a trained 3d target detection model;
And the target detection unit is used for carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
Optionally, the 3d detection module 32 further includes:
the preprocessing unit is used for preprocessing the data of the point cloud data to obtain preprocessed data;
The plane fitting unit is used for performing plane fitting on the preprocessed data to obtain fitting data;
The data screening unit is used for removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
And the data clustering unit is used for performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border.
Optionally, the plane fitting unit includes:
A segmentation subunit for dividing the point cloud data, around a center point, into a plurality of sectors at regular radial and azimuthal intervals;
A fitting subunit, configured to perform region-level ground plane fitting for each sector and then merge the partial ground points;
And the judging subunit is used for judging whether the divided ground belongs to the actual ground or not through ground likelihood estimation to obtain fitting data containing ground point cloud data and non-ground point cloud data.
Optionally, the data clustering unit includes:
the clustering subunit is used for setting different radius thresholds for clustering to obtain point cloud clusters of the target profile, and combining the different point cloud clusters;
the center calculating subunit is used for obtaining the outline of the obstacle after the clustering is completed, adopting convex polygon fitting, and calculating an inclined rectangle according to the vertex of the convex polygon to obtain the center of the obstacle frame;
And the frame generation subunit is used for adjusting the length, width, height, center and orientation of the obstacle frame to obtain the 3d detection border.
Optionally, the map matching module 34 includes:
the projection unit is used for removing, from the ground-removed point cloud data, the points in the direction opposite to the camera's viewing direction, and projecting the points of the cropped point cloud data onto the image plane according to the extrinsic parameters between the lidar and the camera;
the cropping unit is used for cropping the point cloud data with the segmentation polygon to obtain the cropped data;
The matching unit is used for taking the 3d detection frame in which the cropped data has the largest distribution value as the successfully matched target detection frame.
Optionally, the dynamic tracking module 35 includes:
the construction unit is used for determining the initial position and velocity of the obstacle according to the detection results of two consecutive frames, and constructing a state vector and a covariance matrix;
A prediction unit for predicting a next state of the obstacle based on the state vector and the covariance matrix as a prediction result;
A measuring unit for measuring a position of the obstacle with the sensor as a measurement result;
the updating unit is used for combining the measurement result and the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
and the tracking unit is used for returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix and executing it repeatedly to obtain the continuous trajectory tracking result of the obstacle.
For specific limitations of the obstacle detection tracking device, reference may be made to the above limitations of the obstacle detection tracking method, and no further description is given here. The respective modules in the obstacle detection tracking device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 3, fig. 3 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 communicatively connected to each other via a system bus. It is noted that the figure only shows a computer device 4 with the components memory 41, processor 42 and network interface 43, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. As will be appreciated by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card provided on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit and an external storage device of the computer device 4. In this embodiment, the memory 41 is generally used to store the operating system and the various application software installed on the computer device 4, such as the program code of the obstacle detection and tracking method. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to run the program code stored in the memory 41 or to process data, for example to run the program code of the obstacle detection and tracking method.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
The present application also provides another embodiment, namely a computer-readable storage medium storing a computer program executable by at least one processor, so as to cause the at least one processor to perform the steps of the obstacle detection and tracking method as described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
It is apparent that the above-described embodiments are only some, not all, of the embodiments of the present application; the preferred embodiments are shown in the drawings, which do not limit the scope of the patent claims. The application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. Any equivalent structure made using the contents of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the application.

Claims (9)

1. An obstacle detection and tracking method, applied to a multi-sensor intelligent robot comprising a lidar and a camera, characterized in that the method comprises:
calibrating the camera intrinsic parameters and the extrinsic parameters between the lidar and the camera, and collecting image data and point cloud data with the camera and the lidar respectively;
performing target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes, removing the point cloud inside the 3d detection frames, and extracting the 3d detection border of the obstacle;
performing instance segmentation on the image data with a 2d instance segmentation algorithm to obtain 2d detection results, each comprising a 2d detection frame, a segmentation polygon and a target class;
mapping the point cloud data onto the image data according to the extrinsic parameters, and matching the 3d detection borders with the corresponding 2d detection results;
matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result;
wherein mapping the point cloud data onto the image data according to the extrinsic parameters and matching the 3d detection borders with the 2d detection results comprises:
removing, from the ground-removed point cloud data, the points in the direction opposite to the camera's viewing direction, and projecting the points of the cropped point cloud data onto the image plane according to the extrinsic parameters between the lidar and the camera;
cropping the point cloud data with the segmentation polygon to obtain the cropped data;
and taking the 3d detection frame in which the cropped data has the largest distribution value as the successfully matched target detection frame.

2. The obstacle detection and tracking method of claim 1, characterized in that performing target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes comprises:
acquiring sample data, the sample data being a dataset collected by the lidar and labelled;
performing cluster analysis on the bounding boxes in the sample data to obtain prior boxes;
performing data enhancement on the prior boxes and retraining with pre-trained weights, obtaining a trained 3d target detection model after multiple iterations;
and performing target detection on the point cloud data with the trained 3d target detection model to obtain 3d detection frames and target classes.

3. The obstacle detection and tracking method of claim 1, characterized in that removing the point cloud inside the 3d detection frames and extracting the 3d detection border of the obstacle comprises:
performing data preprocessing on the point cloud data to obtain preprocessed data;
performing plane fitting on the preprocessed data to obtain fitting data;
removing the ground point cloud and the point cloud inside the 3d detection frames from the fitting data to obtain target data;
and performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border.

4. The obstacle detection and tracking method of claim 3, characterized in that performing plane fitting on the preprocessed data to obtain fitting data comprises:
dividing the point cloud data, around the center point, into a plurality of sectors at regular radial and azimuthal intervals;
performing region-level ground plane fitting for each sector, and then merging the partial ground points;
and judging, by ground likelihood estimation, whether the segmented ground belongs to the actual ground, obtaining fitting data containing ground point cloud data and non-ground point cloud data.

5. The obstacle detection and tracking method of claim 3, characterized in that performing Euclidean clustering on the target data, removing discrete points, and extracting the 3d detection border comprises:
setting different radius thresholds for clustering to obtain point cloud clusters of the target contour, and merging the different point cloud clusters;
after the clustering is completed, obtaining the contour of the obstacle, fitting a convex polygon, and computing an inclined rectangle from the vertices of the convex polygon to obtain the center of the obstacle frame;
and adjusting the length, width, height, center and orientation of the obstacle frame to obtain the 3d detection border.

6. The obstacle detection and tracking method of claim 1, characterized in that matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result comprises:
determining the initial position and velocity of the obstacle according to the detection results of the two consecutive frames, and constructing a state vector and a covariance matrix;
predicting the next state of the obstacle based on the state vector and the covariance matrix as the prediction result;
measuring the position of the obstacle with a sensor as the measurement result;
combining the measurement result with the prediction result using the Kalman filter equations to obtain an updated state estimate and covariance matrix;
and returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix and executing it repeatedly to obtain the continuous trajectory tracking result of the obstacle.

7. An obstacle detection and tracking device, characterized by comprising:
a data acquisition module for calibrating the camera intrinsic parameters and the extrinsic parameters between the lidar and the camera, and collecting image data and point cloud data with the camera and the lidar respectively;
a 3d detection module for performing target detection on the point cloud data with a 3d target detection algorithm to obtain 3d detection frames and target classes, removing the point cloud inside the 3d detection frames, and extracting the 3d detection border of the obstacle;
a 2d detection module for performing instance segmentation on the image data with a 2d instance segmentation algorithm to obtain 2d detection results, each comprising a 2d detection frame, a segmentation polygon and a target class;
a mapping matching module for mapping the point cloud data onto the image data according to the extrinsic parameters and matching the 3d detection borders with the corresponding 2d detection results;
and a dynamic tracking module for matching the detection results of two consecutive frames, fusing the predicted and observed values with a Kalman filter, and determining the obstacle tracking result;
wherein the mapping matching module is configured to:
remove, from the ground-removed point cloud data, the points in the direction opposite to the camera's viewing direction, and project the points of the cropped point cloud data onto the image plane according to the extrinsic parameters between the lidar and the camera;
crop the point cloud data with the segmentation polygon to obtain the cropped data;
and take the 3d detection frame in which the cropped data has the largest distribution value as the successfully matched target detection frame.

8. A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the obstacle detection and tracking method of any one of claims 1 to 6.

9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the obstacle detection and tracking method of any one of claims 1 to 6.
Priority Applications (1)

Application number: CN202410648162.XA
Priority date / filing date: 2024-05-23
Title: Obstacle detection and tracking method, device, computer equipment and storage medium
Status: Active (granted as CN118570765B)

Publications (2)

CN118570765A (en) — published 2024-08-30
CN118570765B (en) — granted and published 2025-04-04

Family ID: 92475728
Family application: CN202410648162.XA — filed 2024-05-23, active
Country: CN




Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
