Disclosure of Invention
Embodiments of the invention provide an obstacle detection tracking method, an obstacle detection tracking device, a computer device, and a storage medium, so as to improve the accuracy of obstacle detection.
In order to solve the above technical problems, an embodiment of the present application provides a method for detecting and tracking an obstacle, including:
calibrating internal parameters of a camera and external parameters between a laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar;
performing target detection on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class, removing the point cloud inside the 3d detection frame, and extracting a 3d detection frame of the obstacle from the remaining points;
Performing instance segmentation on the image data by adopting a 2d instance segmentation algorithm to obtain a 2d detection result, wherein the 2d detection result comprises a 2d detection frame, a segmentation polygon and a target class;
Mapping the point cloud data to the image data according to external parameters, and correspondingly matching the 3d detection frame and the 2d detection result;
and performing matching according to the detection results of two consecutive frames, and determining an obstacle tracking result by fusing the predicted value and the observed value through Kalman filtering.
Optionally, the performing target detection on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class includes:
acquiring sample data, wherein the sample data is a data set acquired by a laser radar and marked;
performing cluster analysis on the frames in the sample data to obtain prior frames;
Performing data enhancement on the prior frame, retraining by using pre-training weights, and storing the best weights after multiple iterations to obtain a trained 3d target detection model;
and carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
Optionally, the removing the point cloud inside the 3d detection frame and extracting the 3d detection frame of the obstacle includes:
performing data preprocessing on the point cloud data to obtain preprocessed data;
performing plane fitting on the preprocessed data to obtain fitting data;
removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
and performing Euclidean clustering on the target data, removing discrete points, and extracting the 3D detection frame.
Optionally, performing plane fitting on the preprocessed data to obtain fitting data includes:
dividing the point cloud data into a plurality of sectors at regular radial and azimuthal intervals around a center point;
performing region-level ground plane fitting in each sector, and then merging the partial ground points;
and judging, through ground likelihood estimation, whether the fitted ground belongs to the actual ground, to obtain fitting data containing ground point cloud data and non-ground point cloud data.
Optionally, the performing Euclidean clustering on the target data, removing discrete points, and extracting the 3D detection frame includes:
setting different radius thresholds for clustering to obtain point cloud clusters of the target contour, and merging the different point cloud clusters;
after the clustering is completed, obtaining the contour of the obstacle, fitting a convex polygon to it, and calculating an inclined rectangle from the vertices of the convex polygon to obtain the center of the obstacle frame;
and adjusting the length, width, height, center, and orientation of the obstacle frame to obtain the 3D detection frame.
Optionally, the mapping the point cloud data to the image data according to the external parameters and correspondingly matching the 3d detection frame and the 2d detection result includes:
removing the ground point cloud data and the point cloud data in the direction opposite to the camera's viewing angle, and projecting the points of the cropped point cloud data onto the image plane according to the external parameters between the laser radar and the camera;
intercepting point cloud data according to the segmentation polygon to obtain intercepted data;
and acquiring the 3d detection frame containing the largest share of the intercepted data as the successfully matched target detection frame.
Optionally, the performing matching according to the detection results of two consecutive frames and determining the obstacle tracking result by fusing the predicted value and the observed value through Kalman filtering includes:
determining the initial position and speed of the obstacle according to the detection results of two consecutive frames, and constructing a state vector and a covariance matrix;
predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result;
measuring the position of the obstacle by using a sensor as a measurement result;
Combining the measurement result with the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
and returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result, and executing the steps continuously to obtain a continuous trajectory tracking result of the obstacle.
In order to solve the above technical problem, an embodiment of the present application further provides an obstacle detection tracking device, including:
The data acquisition module is used for calibrating the camera internal parameters and the external parameters between the laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar;
The 3d detection module is used for performing target detection on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class, removing the point cloud inside the 3d detection frame, and extracting a 3d detection frame of the obstacle from the remaining points;
the 2d detection module is used for carrying out instance segmentation on the image data by adopting a 2d instance segmentation algorithm to obtain a 2d detection result, wherein the 2d detection result comprises a 2d detection frame, a segmentation polygon and a target class;
The mapping matching module is used for mapping the point cloud data to the image data according to external parameters and correspondingly matching the 3d detection frame and the 2d detection result;
and the dynamic tracking module is used for performing matching according to the detection results of two consecutive frames, and determining an obstacle tracking result by fusing the predicted value and the observed value through Kalman filtering.
Optionally, the 3d detection module includes:
the sample acquisition unit is used for acquiring sample data, wherein the sample data is a data set acquired by the laser radar and marked;
the sample clustering unit is used for carrying out cluster analysis on the frames in the sample data to obtain prior frames;
The iterative training unit is used for carrying out data enhancement on the prior frame, retraining by using the pre-training weight, and storing the best weight after multiple iterations to obtain a trained 3d target detection model;
and the target detection unit is used for carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
Optionally, the 3d detection module further includes:
The preprocessing unit is used for carrying out data preprocessing on the point cloud data to obtain preprocessed data;
The plane fitting unit is used for carrying out plane fitting on the preprocessed data to obtain fitting data;
The data screening unit is used for removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
And the data clustering unit is used for performing Euclidean clustering on the target data, removing discrete points, and extracting the 3D detection frame.
Optionally, the plane fitting unit includes:
A segmentation subunit, configured to divide the point cloud data into a plurality of sectors at regular radial and azimuthal intervals around a center point;
A fitting subunit, configured to perform region-level ground plane fitting in each sector and then merge the partial ground points;
And the judging subunit is used for judging, through ground likelihood estimation, whether the fitted ground belongs to the actual ground, to obtain fitting data containing ground point cloud data and non-ground point cloud data.
Optionally, the data clustering unit includes:
the clustering subunit is used for setting different radius thresholds for clustering to obtain point cloud clusters of the target contour, and merging the different point cloud clusters;
the center calculating subunit is used for obtaining the contour of the obstacle after the clustering is completed, fitting a convex polygon to it, and calculating an inclined rectangle from the vertices of the convex polygon to obtain the center of the obstacle frame;
and the frame generation subunit is used for adjusting the length, width, height, center, and orientation of the obstacle frame to obtain the 3D detection frame.
Optionally, the mapping matching module includes:
The projection unit is used for removing the ground point cloud data and the point cloud data in the direction opposite to the camera's viewing angle, and projecting the points of the cropped point cloud data onto the image plane according to the external parameters between the laser radar and the camera;
The intercepting unit is used for intercepting point cloud data according to the segmentation polygon to obtain intercepted data;
The matching unit is used for acquiring the 3d detection frame containing the largest share of the intercepted data as the successfully matched target detection frame.
Optionally, the dynamic tracking module includes:
the construction unit is used for determining the initial position and speed of the obstacle according to the detection results of two consecutive frames, and constructing a state vector and a covariance matrix;
a prediction unit configured to predict a next state of an obstacle as a prediction result based on the state vector and the covariance matrix;
A measuring unit for measuring a position of the obstacle with the sensor as a measurement result;
The updating unit is used for combining the measurement result and the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
and the tracking unit is used for returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result, and executing the steps continuously to obtain a continuous trajectory tracking result of the obstacle.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the steps of the above obstacle detection tracking method when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a computer readable storage medium storing a computer program, which when executed by a processor, implements the steps of the obstacle detection tracking method described above.
According to the obstacle detection tracking method, device, computer equipment, and storage medium provided by the embodiments of the invention, the camera internal parameters and the external parameters between the laser radar and the camera are calibrated, and image data and point cloud data are acquired through the camera and the laser radar respectively. Target detection is performed on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class; the point cloud inside the 3d detection frame is removed, and a 3d detection frame of the obstacle is extracted from the remaining points. Instance segmentation is performed on the image data by using a 2d instance segmentation algorithm to obtain a 2d detection result, where the 2d detection result includes a 2d detection frame, a segmentation polygon, and a target class. The point cloud data is mapped to the image data according to the external parameters, and the 3d detection frame is correspondingly matched with the 2d detection result. Matching is then performed according to the detection results of two consecutive frames, and the obstacle tracking result is determined by fusing the predicted value and the observed value through Kalman filtering. In this way, different types of data are collected by multiple sensors and 2d and 3d detections are fused, improving the accuracy of obstacle detection.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms used in the description herein are for the purpose of describing particular embodiments only and are not intended to limit the application. The terms "comprising" and "having" and any variations thereof in the description, the claims, and the above description of the drawings are intended to cover non-exclusive inclusions. The terms "first", "second", and the like in the description, the claims, or the above drawings are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings; it is evident that the described embodiments are some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, fig. 1 shows an obstacle detection tracking method according to an embodiment of the present invention. The method is illustrated as applied to the server in fig. 1 and is described in detail as follows:
S201, calibrating camera internal parameters and external parameters between the laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar.
The camera internal parameters and the external parameters between the camera and the laser radar are calibrated as follows. A checkerboard is self-made and calibrated using the calibration tool of the ROS system to obtain the internal parameter matrix K and the distortion coefficients D of the camera. A calibration plate is self-made and used as a target; the camera and the radar each observe the position of the calibration plate, and the error between the targets in the respective coordinate systems is minimized to complete the estimation of the relative pose between the two coordinate systems.
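By way of illustration only, the following is a minimal Python sketch of the intrinsic calibration step using OpenCV's checkerboard routines; the 9x6 pattern, 0.05 m square size, and "calib/*.png" image folder are assumptions, not values fixed by the embodiment.

```python
# Minimal sketch of checkerboard intrinsic calibration with OpenCV.
# Pattern size, square length, and image folder are assumed values.
import glob

import cv2
import numpy as np

pattern = (9, 6)   # inner corners per row and column (assumed)
square = 0.05      # checkerboard square edge length in metres (assumed)

# 3D object points of one board view: a planar grid scaled by the square size.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]                # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

assert img_pts, "no checkerboard views found"
# K is the 3x3 internal parameter matrix, D the distortion coefficients.
_, K, D, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("K =\n", K, "\nD =", D.ravel())
```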
The ROS system (Robot Operating System) is a software architecture designed specifically for robot software development. It is an open-source meta operating system that provides operating-system-like services, including hardware abstraction, low-level device driver management, implementation of commonly used functionality, message passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple machines.
Point cloud data refers to a set of vectors in a three-dimensional coordinate system. The scan data is recorded in the form of points; each point includes three-dimensional coordinates, and some points may additionally include color information (RGB) or reflection intensity information (Intensity). The color information is typically obtained by capturing a color image with a camera and assigning the color (RGB) of the pixel at the corresponding position to the corresponding point in the point cloud. The intensity information is obtained from the echo intensity collected by the receiving device of the laser scanner and is related to the surface material, roughness, and incident angle of the target, as well as the emission energy and laser wavelength of the instrument.
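As a purely illustrative sketch of the data layout described above (the field set and types are assumptions, not a fixed format), a point with coordinates, intensity, and optional colour can be represented as follows:

```python
# Illustrative point layout: 3D coordinates plus intensity and optional RGB.
import numpy as np

point_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),  # 3D coordinates
    ("intensity", np.float32),                                # echo intensity
    ("rgb", np.uint8, (3,)),                                  # colour from the camera
])

cloud = np.zeros(4, dtype=point_dtype)   # a tiny 4-point cloud
cloud["x"] = [1.0, 1.2, 5.3, 5.4]
cloud["intensity"] = [0.8, 0.7, 0.2, 0.3]
```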
And S202, performing target detection on the point cloud data by adopting a 3d target detection algorithm to obtain a 3d detection frame and a target class, removing point clouds in the 3d detection frame, and extracting to obtain a 3d detection frame of the obstacle.
In a specific optional embodiment, in step S202, the performing target detection on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class includes:
acquiring sample data, wherein the sample data is a data set acquired by a laser radar and marked;
performing cluster analysis on frames in sample data to obtain prior frames;
Performing data enhancement on the prior frame, retraining by using the pre-training weight, and storing the best weight after multiple iterations to obtain a trained 3d target detection model;
and carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
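By way of illustration, the prior-frame step can be sketched as a k-means clustering over the length, width, and height of the annotated boxes in the sample set; the number of prior frames (k = 3) and the iteration count below are assumed values:

```python
# Hedged sketch: k-means over annotated box dimensions yields prior frames
# (anchor sizes) for the 3d detector. k and iters are assumed values.
import numpy as np

def prior_boxes(box_lwh: np.ndarray, k: int = 3, iters: int = 100) -> np.ndarray:
    """box_lwh: (N, 3) array of annotated box length/width/height."""
    rng = np.random.default_rng(0)
    centers = box_lwh[rng.choice(len(box_lwh), k, replace=False)]
    for _ in range(iters):
        # Assign each labelled box to its nearest anchor size.
        d = np.linalg.norm(box_lwh[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = box_lwh[assign == j].mean(axis=0)
    return centers   # k prior (l, w, h) triples
```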
In a specific optional embodiment, in step S202, the removing the point cloud inside the 3d detection frame and extracting the 3d detection frame of the obstacle includes:
carrying out data preprocessing on the point cloud data to obtain preprocessed data;
performing plane fitting on the preprocessed data to obtain fitting data;
Removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
and performing Euclidean clustering on the target data, removing discrete points, and extracting the 3D detection frame.
In a specific optional embodiment, the performing plane fitting on the preprocessed data to obtain fitting data includes:
dividing the point cloud data into a plurality of sectors at regular radial and azimuthal intervals around a center point;
performing region-level ground plane fitting in each sector, and then merging the partial ground points;
and judging, through ground likelihood estimation, whether the fitted ground belongs to the actual ground, to obtain fitting data containing ground point cloud data and non-ground point cloud data.
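A simplified, illustrative sketch of this sector-wise ground fitting follows; the sector count, seed fraction, and thresholds are assumed values, and the crude tilt check on the fitted plane stands in for the ground likelihood estimation:

```python
# Simplified sector-wise ground fitting: bin points by azimuth, fit a plane
# per sector from its lowest points, and accept near-level planes as ground.
import numpy as np

def ground_mask(xyz: np.ndarray, n_sectors: int = 16,
                max_tilt: float = 0.2, dist_thr: float = 0.15) -> np.ndarray:
    azim = np.arctan2(xyz[:, 1], xyz[:, 0])
    sector = ((azim + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    ground = np.zeros(len(xyz), dtype=bool)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if len(idx) < 10:
            continue
        pts = xyz[idx]
        # Seed the fit with the lowest points of the sector.
        seeds = pts[np.argsort(pts[:, 2])[: max(10, len(pts) // 10)]]
        # Least-squares plane z = a*x + b*y + c over the seed points.
        A = np.c_[seeds[:, :2], np.ones(len(seeds))]
        (a, b, c), *_ = np.linalg.lstsq(A, seeds[:, 2], rcond=None)
        if np.hypot(a, b) > max_tilt:     # crude likelihood test: too steep
            continue
        resid = np.abs(xyz[idx, 2] - (a * xyz[idx, 0] + b * xyz[idx, 1] + c))
        ground[idx[resid < dist_thr]] = True
    return ground   # True = ground point, False = non-ground point
```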
In a specific optional embodiment, the performing Euclidean clustering on the target data, removing discrete points, and extracting the 3D detection frame includes:
setting different radius thresholds for clustering to obtain point cloud clusters of the target contour, and merging the different point cloud clusters;
after the clustering is completed, obtaining the contour of the obstacle, fitting a convex polygon to it, and calculating an inclined rectangle from the vertices of the convex polygon to obtain the center of the obstacle frame;
and adjusting the length, width, height, center, and orientation of the obstacle frame to obtain the 3D detection frame.
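The clustering-to-box step can be sketched as follows, using a KD-tree for Euclidean clustering and cv2.minAreaRect for the inclined rectangle; the radius threshold and minimum cluster size are assumed values:

```python
# Sketch: Euclidean clustering via KD-tree connectivity, convex polygon fit
# of each cluster's x-y footprint, and an inclined rectangle for the box.
import cv2
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def cluster_boxes(xyz, radius=0.5, min_pts=10):
    # Points closer than `radius` are connected; connected components form
    # clusters, and tiny components are discarded as discrete points.
    pairs = np.array(sorted(cKDTree(xyz).query_pairs(radius)))
    if len(pairs) == 0:
        return []
    n = len(xyz)
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    boxes = []
    for lbl in np.unique(labels):
        pts = xyz[labels == lbl]
        if len(pts) < min_pts:
            continue                       # remove discrete points
        # Convex polygon of the x-y footprint, then the inclined rectangle.
        hull = cv2.convexHull(pts[:, :2].astype(np.float32))
        (cx, cy), (length, width), yaw = cv2.minAreaRect(hull)
        z_min, z_max = pts[:, 2].min(), pts[:, 2].max()
        boxes.append((cx, cy, (z_min + z_max) / 2,    # box centre
                      length, width, z_max - z_min,   # box size
                      yaw))                           # orientation (degrees)
    return boxes
```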
And S203, carrying out instance segmentation on the image data by adopting a 2d instance segmentation algorithm to obtain a 2d detection result, wherein the 2d detection result comprises a 2d detection frame, a segmentation polygon and a target class.
Specifically, a dataset is made from the data acquired by the camera.
Data enhancement is performed, the model is retrained using official pre-training weights, and the best weights are saved after multiple iterations to obtain a trained segmentation model; the trained segmentation model is then used to perform instance segmentation on the image data.
The segmentation polygon is a polygon formed by line segments and is used to segment the instance from the image; the 2d detection frame is the overall bounding box of the object in the 2d image plane; and the target class is the class of the obstacle.
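A small illustrative sketch of the relation between the segmentation polygon and the 2d detection frame (the polygon and image size below are made-up values, not model output): the frame is simply the bounding rectangle of the polygon.

```python
# Sketch: the 2d detection frame is the bounding rectangle of the
# segmentation polygon; the polygon and image size are illustrative only.
import cv2
import numpy as np

polygon = np.array([[120, 80], [200, 70], [230, 150], [140, 160]], np.int32)
x, y, w, h = cv2.boundingRect(polygon)     # 2d detection frame (x, y, w, h)
mask = np.zeros((240, 320), np.uint8)      # assumed image size
cv2.fillPoly(mask, [polygon], 255)         # pixel mask of the instance
```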
S204, mapping the point cloud data to image data according to the external parameters, and correspondingly matching the 3d detection frame and the 2d detection result.
In a specific optional embodiment, in step S204, the mapping the point cloud data to the image data according to the external parameters and correspondingly matching the 3d detection frame and the 2d detection result includes:
removing the ground point cloud data and the point cloud data in the direction opposite to the camera's viewing angle, and projecting the points of the cropped point cloud data onto the image plane according to the external parameters between the laser radar and the camera;
intercepting the point cloud data according to the segmentation polygon to obtain intercepted data;
and acquiring the 3d detection frame containing the largest share of the intercepted data as the successfully matched target detection frame.
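The projection-and-matching step can be sketched as follows; here K, R, and t stand for the calibrated internal and external parameters, and boxes_points (the per-frame point memberships) is a placeholder for the 3d detection output:

```python
# Sketch: transform LiDAR points into the camera frame with the extrinsics,
# project with the intrinsics, test against the segmentation polygon, and
# pick the 3d frame with the most in-polygon points.
import cv2
import numpy as np

def match_box(xyz, K, R, t, polygon, boxes_points):
    """K, R, t come from calibration; polygon is the segmentation polygon
    (float32, N x 2); boxes_points lists, per 3d detection frame, the
    indices of the points that frame contains (a placeholder here)."""
    cam = xyz @ R.T + t                    # LiDAR frame -> camera frame
    front = np.where(cam[:, 2] > 0)[0]     # keep points in front of the camera
    uv = cam[front] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective division to pixels
    inside = np.zeros(len(xyz), dtype=bool)
    for i, (u, v) in zip(front, uv):
        inside[i] = cv2.pointPolygonTest(polygon, (float(u), float(v)), False) >= 0
    # The 3d frame whose points fall most often inside the segmentation
    # polygon is taken as the successfully matched target detection frame.
    counts = [inside[idx].sum() for idx in boxes_points]
    return int(np.argmax(counts))
```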
And S205, performing matching according to the detection results of two consecutive frames, and determining an obstacle tracking result by fusing the predicted value and the observed value through Kalman filtering.
In a specific optional embodiment, in step S205, the performing matching according to the detection results of two consecutive frames and determining the obstacle tracking result by fusing the predicted value and the observed value through Kalman filtering includes:
determining the initial position and speed of the obstacle according to the detection results of two consecutive frames, and constructing a state vector and a covariance matrix;
Predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result;
measuring the position of the obstacle by using a sensor as a measurement result;
Combining the measurement result and the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
and returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result, and executing the steps continuously to obtain a continuous trajectory tracking result of the obstacle.
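A minimal constant-velocity Kalman filter matching these steps is sketched below; the state vector is (x, y, vx, vy), and the frame interval and noise magnitudes are assumed values:

```python
# Minimal constant-velocity Kalman filter: predict() gives the predicted
# value, update() fuses the measured position with it. dt, Q, Rm assumed.
import numpy as np

dt = 0.1                                   # frame interval (assumed)
F = np.array([[1, 0, dt, 0],               # state transition (const. velocity)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.]])
H = np.array([[1, 0, 0, 0],                # only position is measured
              [0, 1, 0, 0.]])
Q = np.eye(4) * 0.01                       # process noise (assumed)
Rm = np.eye(2) * 0.1                       # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One tracking step: predict the next state, then fuse the measured position.
x = np.array([0., 0., 1., 0.])             # initial position and velocity
P = np.eye(4)                              # initial covariance
x, P = predict(x, P)
x, P = update(x, P, z=np.array([0.12, 0.01]))
```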
In this embodiment, the camera internal parameters and the external parameters between the laser radar and the camera are calibrated, and image data and point cloud data are acquired through the camera and the laser radar respectively. Target detection is performed on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class; the point cloud inside the 3d detection frame is removed, and a 3d detection frame of the obstacle is extracted from the remaining points. Instance segmentation is performed on the image data by using a 2d instance segmentation algorithm to obtain a 2d detection result, where the 2d detection result includes the 2d detection frame, a segmentation polygon, and the target class. The point cloud data is mapped to the image data according to the external parameters and the 3d detection frame is correspondingly matched with the 2d detection result. Matching is then performed according to the detection results of two consecutive frames, and the obstacle tracking result is determined by fusing the predicted value and the observed value through Kalman filtering. Different types of data are thus collected by multiple sensors and 2d and 3d detections are fused, improving the accuracy of obstacle detection.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 2 shows a schematic block diagram of an obstacle detection tracking device in one-to-one correspondence with the obstacle detection tracking method of the above embodiment. As shown in fig. 2, the obstacle detection tracking device includes a data acquisition module 31, a 3d detection module 32, a 2d detection module 33, a mapping matching module 34, and a dynamic tracking module 35. The functional modules are described in detail as follows:
The data acquisition module 31 is used for calibrating the camera internal parameters and the external parameters between the laser radar and the camera, and respectively acquiring image data and point cloud data through the camera and the laser radar;
The 3d detection module 32 is configured to perform target detection on the point cloud data by using a 3d target detection algorithm to obtain a 3d detection frame and a target class, remove the point cloud inside the 3d detection frame, and extract a 3d detection frame of the obstacle from the remaining points;
The 2d detection module 33 is configured to perform instance segmentation on the image data by using a 2d instance segmentation algorithm to obtain a 2d detection result, where the 2d detection result includes a 2d detection frame, a segmentation polygon, and a target class;
The mapping matching module 34 is configured to map the point cloud data to image data according to the external parameters, and correspondingly match the 3d detection frame and the 2d detection result;
The dynamic tracking module 35 is configured to perform matching according to the detection results of two consecutive frames, and determine the obstacle tracking result by fusing the predicted value and the observed value through Kalman filtering.
Optionally, the 3d detection module 32 includes:
The sample acquisition unit is used for acquiring sample data, wherein the sample data is a data set acquired by the laser radar and marked;
the sample clustering unit is used for carrying out cluster analysis on frames in sample data to obtain prior frames;
The iterative training unit is used for carrying out data enhancement on the prior frame, retraining by using the pre-training weight, and storing the best weight after multiple iterations to obtain a trained 3d target detection model;
And the target detection unit is used for carrying out target detection on the point cloud data by adopting the trained 3d target detection model to obtain a 3d detection frame and a target class.
Optionally, the 3d detection module 32 further includes:
the preprocessing unit is used for performing data preprocessing on the point cloud data to obtain preprocessed data;
The plane fitting unit is used for performing plane fitting on the preprocessed data to obtain fitting data;
The data screening unit is used for removing the ground point cloud and the point cloud in the 3d detection frame from the fitting data to obtain target data;
And the data clustering unit is used for performing Euclidean clustering on the target data, removing discrete points, and extracting the 3D detection frame.
Optionally, the plane fitting unit includes:
A segmentation subunit, configured to divide the point cloud data into a plurality of sectors at regular radial and azimuthal intervals around a center point;
A fitting subunit, configured to perform region-level ground plane fitting in each sector and then merge the partial ground points;
And the judging subunit is used for judging, through ground likelihood estimation, whether the fitted ground belongs to the actual ground, to obtain fitting data containing ground point cloud data and non-ground point cloud data.
Optionally, the data clustering unit includes:
the clustering subunit is used for setting different radius thresholds for clustering to obtain point cloud clusters of the target contour, and merging the different point cloud clusters;
the center calculating subunit is used for obtaining the contour of the obstacle after the clustering is completed, fitting a convex polygon to it, and calculating an inclined rectangle from the vertices of the convex polygon to obtain the center of the obstacle frame;
And the frame generation subunit is used for adjusting the length, width, height, center, and orientation of the obstacle frame to obtain the 3D detection frame.
Optionally, the map matching module 34 includes:
the projection unit is used for removing the ground point cloud data and the point cloud data in the direction opposite to the camera's viewing angle, and projecting the points of the cropped point cloud data onto the image plane according to the external parameters between the laser radar and the camera;
the intercepting unit is used for intercepting point cloud data according to the segmentation polygon to obtain intercepted data;
The matching unit is used for acquiring the 3d detection frame containing the largest share of the intercepted data as the successfully matched target detection frame.
Optionally, the dynamic tracking module 35 includes:
the construction unit is used for determining the initial position and speed of the obstacle according to the detection results of two consecutive frames, and constructing a state vector and a covariance matrix;
A prediction unit for predicting a next state of the obstacle based on the state vector and the covariance matrix as a prediction result;
A measuring unit for measuring a position of the obstacle with the sensor as a measurement result;
the updating unit is used for combining the measurement result and the prediction result by using a Kalman filtering formula to obtain updated state estimation and covariance matrix;
and the tracking unit is used for returning to the step of predicting the next state of the obstacle based on the state vector and the covariance matrix as a prediction result, and executing the steps continuously to obtain a continuous trajectory tracking result of the obstacle.
For specific limitations of the obstacle detection tracking device, reference may be made to the above limitations of the obstacle detection tracking method, which are not repeated here. Each module in the obstacle detection tracking device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 3, fig. 3 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 includes a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. It is noted that only a computer device 4 having the components memory 41, processor 42, and network interface 43 is shown in the figure, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 4. Of course, the memory 41 may also include both an internal storage unit of the computer device 4 and an external storage device. In this embodiment, the memory 41 is generally used to store the operating system and various application software installed on the computer device 4, such as the program code of the obstacle detection tracking method. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to run the program code stored in the memory 41 or to process data, for example, to run the program code of the obstacle detection tracking method.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
The present application also provides another embodiment, namely, a computer-readable storage medium storing a computer program executable by at least one processor to cause the at least one processor to perform the steps of the obstacle detection tracking method described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method according to the embodiments of the present application.
It is apparent that the above-described embodiments are only some, but not all, embodiments of the present application; the preferred embodiments of the application are shown in the drawings, which do not limit the scope of the claims. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of the technical features. Any equivalent structure made using the contents of the specification and the drawings of the application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the application.