
Image obstacle target detection method, device, electronic device and storage medium

Info

Publication number
CN114185061A
Authority
CN
China
Prior art keywords
obstacle target
detection
determining
point cloud
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111280575.XA
Other languages
Chinese (zh)
Other versions
CN114185061B (en)
Inventor
安建平
王向韬
郝雨萌
程新景
杨睿刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Network Technology Shanghai Co Ltd
Original Assignee
International Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Network Technology Shanghai Co Ltd
Priority to CN202111280575.XA
Publication of CN114185061A
Application granted
Publication of CN114185061B
Legal status: Active
Anticipated expiration


Abstract

The invention relates to the technical field of image recognition, and provides a method and a device for detecting an obstacle target of an image, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring 2D image data and radar point cloud data in the current vehicle driving direction; determining a detection frame of an obstacle target in the image according to the 2D image data; determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target; and constructing a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target. By associating the detection frame of the obstacle target in the 2D image with the radar point cloud data, the radar point cloud data belonging to the obstacle target is divided, and the 3D frame of the obstacle target is constructed, so that the required point cloud data can be quickly determined from the radar point cloud data, and the detection and the positioning of the obstacle target are accelerated.

Description

Image obstacle target detection method and device, electronic device and storage medium
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a method and an apparatus for detecting an obstacle target in an image, an electronic device, and a storage medium.
Background
Obstacles mainly refer to objects encountered on the road in an autonomous driving scene, i.e., abnormal protrusions above the ground such as vehicles, pedestrians, stones, fallen trees, and discarded tires. In an autonomous driving scene, existing methods for detecting general obstacles mainly include: detecting general obstacles by combining ultrasonic, millimeter-wave and laser radar sensors, and extracting general obstacles from images through drivable-region segmentation and traditional CV methods.
For radar-based detection, laser radar mainly produces strong reflections from metal objects, so the point cloud data of obstacles is difficult to determine accurately; that is, the point cloud data is divided inaccurately.
Disclosure of Invention
In view of the problems in the prior art, the present invention provides a method, an apparatus, an electronic device and a storage medium for detecting an obstacle target in an image.
In a first aspect, the present invention provides a method for detecting an obstacle target in an image, including:
acquiring 2D image data and radar point cloud data in the current vehicle driving direction;
determining a detection frame of an obstacle target in an image according to the 2D image data;
determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target;
and constructing a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
In one embodiment, the determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target includes:
establishing a 2D coordinate system according to the 2D image data, and determining the position of the detection frame of the obstacle target on the 2D coordinate system;
acquiring a projection point of the radar point cloud data on the 2D image data, determining the position of the projection point, and establishing a corresponding relation between the radar point cloud data and the projection point;
determining a projection point positioned in the detection frame of the obstacle target according to the position of the detection frame of the obstacle target on the 2D coordinate system and the position of the projection point;
determining a central area of a detection frame of the obstacle target based on a preset proportion, taking a projection point of the central area as a seed point, and determining an expansion area by using a region growing algorithm;
and determining radar point cloud data belonging to the obstacle target based on projection points in the central area and the expansion area and the corresponding relation between the radar point cloud data and the projection points.
In one embodiment, the constructing a 3D box of the obstacle target from the radar point cloud data of the obstacle target includes:
determining a central position point according to the radar point cloud data of the obstacle target, and establishing a 3D coordinate system of the central position point;
acquiring, in each octant of the 3D coordinate system, the radar point farthest from the central position point;
and establishing a first three-dimensional frame according to the acquired radar points, performing mirror image complementation based on the first three-dimensional frame, and determining the 3D frame of the obstacle target.
In one embodiment, the determining a detection frame of an obstacle target in an image according to the 2D image data includes:
determining feature points in the 2D image data; the characteristic points are pixel points which are intersected with the ground area in each frame of image;
determining a first detection contour in the 2D image data according to the feature points; the first detection contour is a contour characterizing an obstacle target;
determining a second detection contour according to the first detection contour, and displaying the second detection contour on the 2D image data; the second detection contour is a contour obtained by screening and trimming the first detection contour.
In one embodiment, the determining a first detected contour in the 2D image data from the feature points comprises:
smoothly connecting all the determined feature points to determine a first detection contour in the 2D image data, wherein adjacent feature points whose distance from each other is greater than a preset distance are not smoothly connected.
In one embodiment, said determining a second detection profile from said first detection profile comprises:
comparing the similarity of the first detection contour with two endpoints with each standard contour in a pre-stored standard contour set, and determining the standard contour with the maximum similarity;
determining a scaling ratio between the first detection contour having two end points and the standard contour with the maximum similarity, and determining the incomplete contour corresponding to the first detection contour having two end points according to the scaling ratio;
and integrating the incomplete contour and a first detection contour with two end points into a second detection contour.
In a second aspect, the present invention provides an obstacle target detection apparatus for an image, including:
the acquisition module is used for acquiring 2D image data and radar point cloud data in the current vehicle driving direction;
the identification module is used for determining a detection frame of an obstacle target in an image according to the 2D image data;
the dividing module is used for determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target;
and the building module is used for building a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
In a third aspect, the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method for detecting an obstacle target in an image according to the first aspect when executing the program.
In a fourth aspect, the present invention provides a processor-readable storage medium storing a computer program for causing a processor to execute the steps of the method for detecting an obstacle target of an image according to the first aspect.
According to the method, the apparatus, the electronic device and the storage medium for detecting the obstacle target of the image, the radar point cloud data belonging to the obstacle target are divided and the 3D frame of the obstacle target is constructed by associating the detection frame of the obstacle target in the 2D image with the radar point cloud data, so that the required point cloud data can be quickly determined from the radar point cloud data, and the detection and the positioning of the obstacle target are accelerated.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for detecting an obstacle target in an image according to the present invention;
Fig. 2 is a schematic structural diagram of the image obstacle target detection apparatus provided by the present invention;
FIG. 3 is a schematic structural diagram of an electronic device provided by the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The image obstacle target detection method, the apparatus, the electronic device, and the storage medium of the present invention are described below with reference to fig. 1 to 3.
Fig. 1 is a schematic flow chart of an image obstacle target detection method according to the present invention, and referring to fig. 1, the method includes:
11. acquiring 2D image data and radar point cloud data in the current vehicle driving direction;
12. determining a detection frame of an obstacle target in the image according to the 2D image data;
13. determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target;
14. and constructing a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
In steps 11 to 13, it should be noted that, in the present invention, the current vehicle is an intelligent vehicle with an automatic driving function, and the vehicle is provided with a forward-facing image pickup device for acquiring 2D image data in the traveling direction of the vehicle. The 2D image data belongs to a dynamic video, and the video is composed of a plurality of frames of images. For this reason, in the process of detecting an obstacle in the 2D image data, the processing of steps 11 to 13 is actually performed for each frame of image.
The current vehicle is also provided with a radar transmitter which can face the front and is used for acquiring radar point cloud data of the vehicle in the driving direction. The radar point cloud data and the camera device are kept synchronous, so that the image data and the radar point cloud data are processed synchronously.
In the invention, the obstacle in the image can be identified according to image characteristics such as the shape, the color and the size of the obstacle (such as a vehicle ahead, a pedestrian, a fallen tree, a fallen boulder and the like on the road), and the obstacle in the image is taken as a detected obstacle target. Then, the obstacle target is marked on the image with a frame body, namely the detection frame of the obstacle target.
In the invention, the acquired radar point cloud data is obtained from reflections off all objects in front of the current vehicle, so part of the radar point cloud data has no substantial influence on the detection and positioning of the obstacle in front of the vehicle. The acquired radar point cloud data therefore needs to be divided to obtain the point cloud data suitable for detecting and positioning the obstacle in front of the vehicle. Since the detection frame acquired above is the detection frame of an obstacle target, the detection frame of the obstacle target and the radar point cloud data are associated, and the radar point cloud data belonging to the obstacle target is determined from the combination. The detection frame of the obstacle target can be associated with the radar point cloud data by projecting the point cloud data onto the detection frame, or by moving the detection frame perpendicular to the image plane so that it encloses the point cloud data.
After the radar point cloud data belonging to the obstacle target is acquired, a 3D frame corresponding to the obstacle target can be constructed according to the radar point cloud data of the obstacle target based on the radar point cloud data being a data point on a three-dimensional layer. The 3D frame is used for simplifying, labeling and positioning the obstacle target in the image.
According to the obstacle target detection method of the image, the radar point cloud data of the obstacle target are divided and the 3D frame of the obstacle target is constructed by associating the detection frame of the obstacle target in the 2D image with the radar point cloud data, so that the required point cloud data can be quickly determined from the radar point cloud data, and the detection and the positioning of the obstacle target are accelerated.
In the further explanation of the above method, the processing procedure of determining the radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target is mainly explained as follows:
establishing a 2D coordinate system according to the 2D image data, and determining the position of a detection frame of the obstacle target on the 2D coordinate system;
acquiring a projection point of the radar point cloud data in the 2D image data, determining the position of the projection point, and establishing a corresponding relation between the radar point cloud data and the projection point;
determining a projection point in the detection frame of the obstacle target according to the position of the detection frame of the obstacle target on the 2D coordinate system and the position of the projection point;
determining a central area of a detection frame of the obstacle target based on a preset proportion, taking a projection point of the central area as a seed point, and determining an expansion area by using a region growing algorithm;
and determining radar point cloud data belonging to the obstacle target based on the projection points in the central area and the expansion area and the corresponding relation between the radar point cloud data and the projection points.
In this regard, in the present invention, a 2D coordinate system is established on the 2D image, and the positions of the detection frames of the respective obstacle targets on the 2D coordinate system are determined, that is, coordinate regions corresponding to the respective detection frames are divided on the 2D coordinate system.
The radar point cloud data is projected onto the 2D image to obtain corresponding projection points, and for each projection point a corresponding position is determined in the 2D coordinate system. Because several radar points may be projected onto the same point, the correspondence between the radar point cloud data and the projection points needs to be established, so that once the projection points falling into the detection frame are determined, the corresponding radar points can be found based on this correspondence.
And acquiring the coordinate area which is divided on the 2D coordinate system and corresponds to each detection frame and the position of each projection point, and determining the projection point which is positioned in the detection frame of the obstacle target.
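For illustration only, the following Python sketch shows one way to carry out the projection and in-frame test described above; returning point indices preserves the correspondence between radar points and projection points. The 3 x 4 projection matrix P (camera intrinsics combined with lidar-to-camera extrinsics) and the detection frame format (x_min, y_min, x_max, y_max) are assumptions made for the example and are not prescribed by this description.

```python
import numpy as np

def project_points(points_xyz, P):
    """Project Nx3 lidar points into pixel coordinates with a 3x4 matrix P
    (camera intrinsics and lidar-to-camera extrinsics assumed pre-combined)."""
    homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # N x 4
    uvw = homog @ P.T                                                   # N x 3
    depth = uvw[:, 2]
    uv = uvw[:, :2] / np.clip(depth[:, None], 1e-6, None)
    return uv, depth

def points_in_frame(points_xyz, P, frame):
    """Return the indices of lidar points whose projections fall inside the
    2D detection frame, given as (x_min, y_min, x_max, y_max)."""
    uv, depth = project_points(points_xyz, P)
    x_min, y_min, x_max, y_max = frame
    inside = ((depth > 0)                                   # in front of the camera
              & (uv[:, 0] >= x_min) & (uv[:, 0] <= x_max)
              & (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    return np.nonzero(inside)[0], uv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform([-10.0, -2.0, 2.0], [10.0, 2.0, 40.0], size=(500, 3))
    P = np.array([[700.0, 0.0, 640.0, 0.0],
                  [0.0, 700.0, 360.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    idx, _ = points_in_frame(cloud, P, frame=(500, 250, 800, 500))
    print(len(idx), "points project into the detection frame")
```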
Due to the shape uncertainty of the obstacle target, the coverage area of the detection frame on the image is larger than the actual outline of the obstacle target. As a result, the detection frame contains, in addition to the image area of the obstacle target, a background area of the image. For example, when the image contains a vehicle ahead, the detection frame contains, in addition to the image area of that vehicle, a background area such as the ground or a roadside guardrail.
Therefore, the point cloud data corresponding to the projection points located in the detection frame of the obstacle target includes not only the point cloud data of the obstacle target but also the point cloud data of background objects in the image. The point cloud data of the background objects therefore needs to be screened out.
In the present invention, the central area of the detection frame of the obstacle target is determined based on a preset ratio (for example, a 5 × 5 range centered on one determined projection point).
The projection points of the central area are taken as seed points, the pixel features at the seed point positions on the image are acquired, and an expansion area is determined using a region growing algorithm based on these pixel features. The expansion area is an area located outside the central area.
And then, obtaining projection points in the central area and the expansion area, and determining radar point cloud data belonging to the obstacle target based on the projection points in the central area and the expansion area and the corresponding relation between the radar point cloud data and the projection points.
In the further method of the invention, a 2D coordinate system is established, the detection frame of the obstacle target in the 2D image is associated with the projection points of the radar point cloud based on their positions in the coordinate system, and the radar point cloud data belonging to the obstacle target is divided based on a region growing algorithm, thereby quickly determining the required point cloud data from the radar point cloud data.
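As a rough sketch of the seed-and-grow step, the code below takes the projection points inside the central area of a detection frame as seeds and grows a region over 4-connected neighbouring pixels whose colour stays close to the mean seed colour. The centre ratio, the colour tolerance and the use of plain colour similarity as the "pixel feature" are assumptions of the example; the description does not fix a particular region growing criterion.

```python
import numpy as np
from collections import deque

def center_area(frame, ratio=0.2):
    """Central sub-rectangle of a detection frame (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = frame
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w, half_h = ratio * (x_max - x_min) / 2.0, ratio * (y_max - y_min) / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

def grow_region(image, seeds, color_tol=20.0):
    """Grow a boolean mask from seed pixels (row, col), accepting 4-connected
    neighbours whose colour differs from the mean seed colour by less than
    color_tol on average.  image is an HxWx3 array."""
    h, w = image.shape[:2]
    img = image.astype(np.float32)
    seed_color = np.mean([img[r, c] for r, c in seeds], axis=0)
    mask = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and np.mean(np.abs(img[nr, nc] - seed_color)) < color_tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

if __name__ == "__main__":
    image = np.full((720, 1280, 3), 90, dtype=np.uint8)
    image[300:500, 550:750] = 200            # bright block standing in for an obstacle
    seeds = [(400, 650), (410, 660)]         # projection points inside the central area
    mask = grow_region(image, seeds)
    print(int(mask.sum()), "pixels in the grown obstacle region")
```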
In the further explanation of the above method, the process of constructing the 3D frame of the obstacle target according to the radar point cloud data of the obstacle target is mainly explained as follows:
determining a central position point according to radar point cloud data of the obstacle target, and establishing a 3D coordinate system with the central position point;
acquiring, in each octant of the 3D coordinate system, the radar point farthest from the central position point;
and establishing a first three-dimensional frame according to the acquired radar points, performing mirror image complementation based on the first three-dimensional frame, and determining a 3D frame of the obstacle target.
In this regard, it should be noted that, in the present invention, since the obstacle target is located in front of the current vehicle, the radar can only emit laser onto the surface of the obstacle facing the vehicle in the driving direction and receive the reflected signal to obtain point cloud data. It can be seen that the obtained radar point cloud data of the obstacle target does not cover the entire surface of the obstacle. A 3D frame corresponding to the obstacle target therefore cannot be constructed directly from the obtained radar point cloud data alone.
For this reason, the following processing is required to construct a 3D frame corresponding to the obstacle target:
and determining a central position point according to radar point cloud data of the obstacle target, and establishing a 3D coordinate system of the central position point. In this regard, it should be noted that a central position point is determined by using a plurality of radar points, and all radar points can be covered by using the central position point as a center and the shortest distance as a radius.
After the central position point is determined, a 3D coordinate system is established at the central position point. A 3D coordinate system has 8 octants. In the invention, the radar point cloud data of the obstacle target falls into several of these octants, and the radar point farthest from the central position point is then found in each such octant.
Then, a three-dimensional frame is established from the acquired radar points as the first three-dimensional frame; mirror complementation is carried out on the basis of the first three-dimensional frame to determine the other half of the frame, and the two halves are combined to determine the 3D frame of the obstacle target.
According to the further method, a three-dimensional frame is determined based on the acquired radar point cloud data, and then the 3D frame of the obstacle target is determined based on the three-dimensional frame in a mirror image complementary mode, so that the complete 3D frame of the obstacle target is rapidly determined under the condition that the radar point cloud data are incomplete.
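A compact sketch of this construction is given below. For simplicity it uses the centroid of the obstacle points as the central position point (the description instead uses the centre of a smallest covering sphere) and returns an axis-aligned 3D frame; both simplifications, and the synthetic data in the usage example, are assumptions of the sketch rather than part of the disclosure.

```python
import numpy as np

def build_3d_frame(points):
    """points: Nx3 lidar points belonging to one obstacle.
    Returns (min_corner, max_corner) of an axis-aligned 3D frame."""
    center = points.mean(axis=0)          # simplification: centroid as the central position point
    rel = points - center
    # octant index 0..7 from the signs of (x, y, z) relative to the centre
    octant = ((rel[:, 0] > 0).astype(int)
              + 2 * (rel[:, 1] > 0).astype(int)
              + 4 * (rel[:, 2] > 0).astype(int))
    extremes = []
    for o in range(8):
        in_octant = np.nonzero(octant == o)[0]
        if in_octant.size == 0:
            continue                      # octant not hit by any lidar return
        dist = np.linalg.norm(rel[in_octant], axis=1)
        extremes.append(points[in_octant[np.argmax(dist)]])
    extremes = np.asarray(extremes)
    # first (partial) frame from the extreme points, then mirror them through the
    # centre to complete the side the lidar cannot see
    mirrored = 2.0 * center - extremes
    corners = np.vstack([extremes, mirrored])
    return corners.min(axis=0), corners.max(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # a thin shell of points, as if only the obstacle face towards the vehicle were visible
    pts = rng.uniform([-1.0, -0.8, 0.0], [1.0, 0.8, 0.2], size=(200, 3)) + np.array([0.0, 0.0, 10.0])
    print(build_3d_frame(pts))
```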
In the further explanation of the above method, the explanation is mainly made on the processing procedure of determining the detection frame of the obstacle target in the image according to the 2D image data, and includes:
determining feature points in the 2D image data; the characteristic points are pixel points which are intersected with the ground area in each frame of image;
determining a first detection contour in the 2D image data according to the feature points; the first detection contour is a contour representing an obstacle target;
determining a second detection contour according to the first detection contour, and displaying the second detection contour on the 2D image data; the second detection contour is a contour obtained by screening and trimming the first detection contour.
In this regard, in the present invention, the autonomous intelligent vehicle mainly travels on a predetermined main road, and obstacles such as vehicles, stones, trees, pedestrians, and bicycles appearing on the road have an edge intersecting the ground in the image. Therefore, the pixel points where the obstacles intersect the ground area need to be identified in the image, and these pixel points are used as the feature points required in the detection process of the invention. Because each obstacle intersects the ground in a different scene, and the pixel change along the intersection edge between an obstacle and the ground is regular for each such scene, the intersection edge between the obstacle and the ground can be identified from the pixel change, and pixel points on the intersection edge are selected as feature points.
In the invention, after the characteristic points are determined in the image, the characteristic points are connected in series to form a contour line which is used as a preliminary detection contour possibly representing an obstacle target obtained by detecting the image. For convenience of the subsequent description of the schemes, these detection profiles are referred to herein as first detection profiles.
For the detection contours which may represent obstacle targets, it should be noted that not every part of an obstacle target intersects the ground area in the image.
For example, the lower body of a pedestrian on the road intersects the ground area in the image, while the upper body borders a building or the sky.
For example, when a large tree blown down by the wind lies across the road, part of its leafy crown intersects the ground area in the image, while another part borders a building or the sky.
For example, the tires and lower body of the vehicle ahead have an intersection edge with the ground area in the image, while the upper body has an edge bordering the sky.
For this reason, the first detection contours obtained from the feature points include both complete closed-loop contours and incomplete open-loop contours.
In the present invention, the first detection profile is then screened and trimmed to obtain more complete detection profiles, which are referred to as the second detection profile for the convenience of distinguishing from the above detection profiles.
For example, an incomplete open-loop contour is repaired to obtain a complete closed-loop contour, and a complete closed-loop contour is given corner processing to obtain a more regular closed-loop contour. These processed contours are classified as the second detection contours mentioned above.
According to the further method, the pixel points intersecting the ground area in each frame of the image data are determined, and the contour of the obstacle target in the image is completed and displayed based on these pixel points, so that the obstacle target in front of the traveling vehicle can be accurately located.
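The description does not fix how the pixel-change regularity along the obstacle/ground edge is evaluated. Purely as an illustration, the sketch below assumes a boolean ground mask from an upstream drivable-region segmentation (mentioned in the background) and, for each image column, takes the pixel just above the topmost ground pixel as a candidate feature point on the intersection edge; the mask input and the per-column rule are assumptions of the example.

```python
import numpy as np

def ground_intersection_points(ground_mask):
    """ground_mask: HxW boolean mask of the ground / drivable area (assumed input).
    Returns (row, col) pixels lying just above the topmost ground pixel of each
    column, i.e. candidates for the edge where an obstacle meets the ground."""
    h, w = ground_mask.shape
    feature_points = []
    for col in range(w):
        rows = np.nonzero(ground_mask[:, col])[0]
        if rows.size == 0:
            continue
        top = rows.min()                # highest ground pixel in this column
        if top > 0:                     # the pixel above it is non-ground
            feature_points.append((top - 1, col))
    return feature_points

if __name__ == "__main__":
    mask = np.zeros((8, 6), dtype=bool)
    mask[5:, :] = True                  # ground occupies the bottom rows
    mask[5, 2:4] = False                # an obstacle covers part of the ground boundary
    print(ground_intersection_points(mask))
```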
In the further description of the above method, the process of determining the first detected contour in the 2D image data according to the feature points is mainly explained as follows:
smoothly connecting all the determined feature points to determine a first detection contour in the 2D image data, wherein adjacent feature points whose distance from each other is greater than a preset distance are not smoothly connected.
In this regard, a plurality of obstacle targets appearing in one frame may be separated by a short distance. For example, there may be gaps between different vehicles on the road. In this case, each obstacle target has its own independent intersection edge with the ground area in the image.
There is also the case where a plurality of obstacle targets appearing in one image overlap. For example, when a vehicle cuts in by changing lanes, several vehicles are close to each other, and their intersection edges with the ground area are connected in the image.
In the invention, contour lines are used to mark obstacle targets, so all the determined feature points are smoothly connected to obtain individual detection contours. However, many of the obstacles mentioned above have independent intersection edges with the ground area in the image. Therefore, a preset distance is determined: when the distance between adjacent feature points is greater than the preset distance, the two feature points are considered to belong to two obstacles separated by a gap, and they are not smoothly connected. Possible obstacles in the image can thus be distinguished by this distance constraint.
The further method of the invention can divide a plurality of detection contours in the image by smoothly connecting all the feature points and limiting the distance between the feature points, thereby realizing simple division of the regions which may be obstacles in the image.
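As a small sketch of the distance constraint described above, the following code orders the feature points along the image and starts a new contour whenever the gap between consecutive points exceeds a preset distance. The ordering by column and the max_gap value are assumptions of the example, and the actual smooth connection (for instance spline fitting of each group) is not shown.

```python
import numpy as np

def split_into_contours(feature_points, max_gap=15.0):
    """feature_points: list of (row, col) pixels on the obstacle/ground edge.
    Points are ordered by column; consecutive points farther apart than max_gap
    are not connected, which splits them into separate first detection contours."""
    pts = sorted(feature_points, key=lambda p: p[1])
    contours, current = [], []
    for p in pts:
        if current and np.hypot(p[0] - current[-1][0], p[1] - current[-1][1]) > max_gap:
            contours.append(current)     # gap too large: start a new contour
            current = []
        current.append(p)
    if current:
        contours.append(current)
    return contours

if __name__ == "__main__":
    points = [(100, c) for c in range(40, 60)] + [(102, c) for c in range(200, 230)]
    print(len(split_into_contours(points)), "separate detection contours")  # prints 2
```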
In the further description of the above method, the process of determining the second detection profile according to the first detection profile is mainly explained as follows:
comparing the similarity of the first detection contour with two endpoints with each standard contour in a pre-stored standard contour set, and determining the standard contour with the maximum similarity;
determining a scaling ratio between the first detection contour having two end points and the standard contour with the maximum similarity, and determining the incomplete contour corresponding to the first detection contour having two end points according to the scaling ratio;
and integrating the incomplete contour and the first detection contour with two endpoints into a second detection contour.
In this regard, it should be noted that, in the present invention, the first detection contour having two end points is an incomplete open-loop contour, that is, a contour with an opening, because part of the corresponding obstacle target has no intersection edge with the ground area (for example, it borders the sky instead).
In the present invention, a standard contour set is provided, in which a large number of standard contours are stored. The standard contours are overall contours in an intersecting state, which are acquired under different angles and different sizes aiming at different obstacles and the ground.
For example, the distance between the vehicle in front of the current vehicle and the current vehicle varies, which causes the intersection profile of the vehicle in front and the ground area photographed by the current vehicle to be different. For this reason, a reasonable amount of contour data suitable for the intersection of the vehicle in front with the ground is stored in the standard contour set.
For example, the pedestrian on the road varies with the distance from the vehicle and the walking posture, which may also cause the intersection profile of the pedestrian and the ground area to be different as photographed by the current vehicle. For this reason, a reasonable amount of contour data suitable for the intersection of the pedestrian and the ground is stored in the standard contour set.
For other types of obstacles, a reasonable amount of contour data is likewise stored in the standard contour set, following the principle of the above examples.
In the present invention, an incomplete open-loop contour needs to be repaired, i.e., completed into a closed and complete contour. Therefore, the detection contour having two end points is compared for similarity with each standard contour in the pre-stored standard contour set, and the standard contour with the maximum similarity is determined as the contour belonging to the same kind of obstacle as the detection contour having two end points.
In the invention, the standard contour may differ in size from the detection contour having two end points. For this reason, based on the size of the available part of the detection contour having two end points and the size of the corresponding part of the standard contour with the maximum similarity, a scaling ratio between the two is obtained, and the incomplete contour corresponding to the first detection contour having two end points is then determined according to the scaling ratio; that is, the missing part of the detection contour having two end points is restored from the standard contour according to the scaling ratio.
Finally, the restored incomplete contour and the first detection contour having two end points are integrated into a complete contour. For convenience, these integrated detection contours are referred to as second detection contours, to distinguish them from the detection contours before integration.
In addition, a first detection contour without two end points is already a complete closed-loop contour, and such contours are directly determined as second detection contours.
The further method of the invention completes the trimming of incomplete detection contours by comparing them with standard contours, and obtains a more complete contour annotation of the obstacle target.
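The following sketch illustrates the matching-and-scaling idea in a heavily simplified form: contours are resampled to a fixed number of points and normalised before comparison, similarity is a plain mean point distance, and the completed contour is obtained by rescaling the best-matching standard contour to the size of the partial contour. A real implementation would align the partial contour with the corresponding sub-segment of the standard contour and stitch in only the missing part; everything here, including the function names and the toy standard set, is an assumption of the example.

```python
import numpy as np

def resample(contour, n=64):
    """Resample an Mx2 contour polyline to n points evenly spaced along its length."""
    seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(s, t, contour[:, 0]),
                            np.interp(s, t, contour[:, 1])])

def normalize(contour):
    """Centre a contour and scale it to unit size so that shapes can be compared."""
    c = contour - contour.mean(axis=0)
    return c / max(np.linalg.norm(c, axis=1).max(), 1e-9)

def complete_contour(partial, standard_set):
    """Pick the standard contour most similar in shape to the open partial contour,
    scale it to the partial contour's size, and return it, anchored at the partial
    contour's centroid, as the completed (second) detection contour."""
    p = resample(partial)
    p_norm = normalize(p)
    best, best_err = None, np.inf
    for std in standard_set:
        err = np.mean(np.linalg.norm(p_norm - normalize(resample(std)), axis=1))
        if err < best_err:
            best, best_err = std, err
    b = resample(best)
    # scaling ratio between the partial contour and the best-matching standard contour
    ratio = np.ptp(p, axis=0).max() / max(np.ptp(b, axis=0).max(), 1e-9)
    return (b - b.mean(axis=0)) * ratio + p.mean(axis=0)

if __name__ == "__main__":
    theta = np.linspace(0.0, 2.0 * np.pi, 100)
    circle = np.column_stack([np.cos(theta), np.sin(theta)])   # one pre-stored standard contour
    partial = 3.0 * circle[:60] + np.array([50.0, 80.0])       # open contour covering 60% of a larger circle
    print(complete_contour(partial, [circle]).shape)            # (64, 2)
```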
The following describes the image obstacle target detection device provided by the present invention, and the image obstacle target detection device described below and the image obstacle target detection method described above may be referred to in correspondence with each other.
Fig. 2 shows a schematic structural diagram of an image obstacle target detection apparatus provided by the present invention; referring to fig. 2, the apparatus includes an acquisition module 21, an identification module 22, a dividing module 23, and a building module 24, wherein:
the acquisition module 21 is used for acquiring 2D image data and radar point cloud data in the current vehicle driving direction;
the identification module 22 is used for determining a detection frame of an obstacle target in an image according to the 2D image data;
the dividing module 23 is configured to determine radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target;
and the building module 24 is used for building a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
In further description of the above apparatus, the dividing module is specifically configured to:
establishing a 2D coordinate system according to the 2D image data, and determining the position of the detection frame of the obstacle target on the 2D coordinate system;
acquiring a projection point of the radar point cloud data in the 2D image data, determining the position of the projection point, and establishing a corresponding relation between the radar point cloud data and the projection point;
determining a projection point in the detection frame of the obstacle target according to the position of the detection frame of the obstacle target on the 2D coordinate system and the position of the projection point;
determining a central area of a detection frame of the obstacle target based on a preset proportion, taking a projection point of the central area as a seed point, and determining an expansion area by using a region growing algorithm;
and determining radar point cloud data belonging to the obstacle target based on the projection points in the central area and the expansion area and the corresponding relation between the radar point cloud data and the projection points.
In a further description of the above apparatus, the building module is specifically configured to:
determining a central position point according to the radar point cloud data of the obstacle target, and establishing a 3D coordinate system of the central position point;
acquiring, in each octant of the 3D coordinate system, the radar point farthest from the central position point;
and establishing a first three-dimensional frame according to the acquired radar points, performing mirror image complementation based on the first three-dimensional frame, and determining the 3D frame of the obstacle target.
In a further description of the above apparatus, the identification module is specifically configured to:
determining feature points in the 2D image data; the characteristic points are pixel points which are intersected with the ground area in each frame of image;
determining a first detection contour in the 2D image data according to the feature points; the first detection contour is a contour representing an obstacle target;
determining a second detection contour according to the first detection contour, and displaying the second detection contour on the 2D image data; the second detection contour is a contour obtained by screening and trimming the first detection contour.
In a further description of the above apparatus, the identification module, during the process of determining the first detected contour in the 2D image data according to the feature points, is specifically configured to:
smoothly connecting all the determined feature points to determine a first detection contour in the 2D image data, wherein adjacent feature points whose distance from each other is greater than a preset distance are not smoothly connected.
In a further description of the above apparatus, the identification module, during the process of determining the second detected contour according to the first detected contour, is specifically configured to:
comparing the similarity of the first detection contour with two endpoints with each standard contour in a pre-stored standard contour set, and determining the standard contour with the maximum similarity;
determining a scaling ratio between the first detection contour having two end points and the standard contour with the maximum similarity, and determining the incomplete contour corresponding to the first detection contour having two end points according to the scaling ratio;
and integrating the incomplete contour and a first detection contour with two end points into a second detection contour.
Since the principle of the apparatus according to the embodiment of the present invention is the same as that of the method according to the above embodiment, further details are not described herein.
It should be noted that, in the embodiment of the present invention, the relevant functional module may be implemented by a hardware processor (hardware processor).
According to the image obstacle target detection method, the radar point cloud data belonging to the obstacle target are divided and the 3D frame of the obstacle target is constructed by associating the detection frame of the obstacle target in the 2D image with the radar point cloud data, so that the required point cloud data can be quickly determined from the radar point cloud data, and the detection and the positioning of the obstacle target are accelerated.
Fig. 3 illustrates a physical structure diagram of an electronic device, which, as shown in fig. 3, may include: a processor (processor) 31, a communication interface (Communication Interface) 32, a memory (memory) 33 and a communication bus 34, wherein the processor 31, the communication interface 32 and the memory 33 communicate with each other via the communication bus 34. The processor 31 may call the computer program in the memory 33 to perform the steps of the method for detecting an obstacle target in an image, for example comprising: acquiring 2D image data and radar point cloud data in the current vehicle driving direction; determining a detection frame of an obstacle target in the image according to the 2D image data; determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target; and constructing a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
In addition, the logic instructions in the memory 33 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method for detecting an obstacle object in an image provided by the above methods, the method comprising: acquiring 2D image data and radar point cloud data in the current vehicle driving direction; determining a detection frame of an obstacle target in the image according to the 2D image data; determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target; and constructing a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
On the other hand, an embodiment of the present application further provides a processor-readable storage medium, where the processor-readable storage medium stores a computer program, where the computer program is configured to cause the processor to execute the method for detecting an obstacle target in an image provided in each of the above embodiments, and the method includes: acquiring 2D image data and radar point cloud data in the current vehicle driving direction; determining a detection frame of an obstacle target in the image according to the 2D image data; determining radar point cloud data belonging to the obstacle target according to the radar point cloud data and the detection frame of the obstacle target; and constructing a 3D frame of the obstacle target according to the radar point cloud data of the obstacle target.
The processor-readable storage medium can be any available medium or data storage device that can be accessed by a processor, including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

CN202111280575.XA | 2021-10-29 | 2021-10-29 | Image obstacle target detection method, device, electronic equipment and storage medium | Active | CN114185061B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111280575.XA, CN114185061B (en) | 2021-10-29 | 2021-10-29 | Image obstacle target detection method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111280575.XA, CN114185061B (en) | 2021-10-29 | 2021-10-29 | Image obstacle target detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN114185061A (en) | 2022-03-15
CN114185061B (en) | 2025-08-15

Family

ID=80540553

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202111280575.XA | Active | CN114185061B (en) | 2021-10-29 | 2021-10-29 | Image obstacle target detection method, device, electronic equipment and storage medium

Country Status (1)

Country | Link
CN (1) | CN114185061B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPH03154972A (en) * | 1989-11-13 | 1991-07-02 | Asia Kosoku Kk | Three-dimensional model constructing method
CN109840448A (en) * | 2017-11-24 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Information output method and device for automatic driving vehicle
CN110119751A (en) * | 2018-02-06 | 2019-08-13 | 北京四维图新科技股份有限公司 | Laser radar point cloud Target Segmentation method, target matching method, device and vehicle
CN110068814A (en) * | 2019-03-27 | 2019-07-30 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of method and device measuring obstacle distance
WO2020258120A1 (en) * | 2019-06-27 | 2020-12-30 | 深圳市汇顶科技股份有限公司 | Face recognition method and device, and electronic apparatus
CN111079545A (en) * | 2019-11-21 | 2020-04-28 | 上海工程技术大学 | Three-dimensional target detection method and system based on image restoration
CN111591288A (en) * | 2020-03-31 | 2020-08-28 | 北京智行者科技有限公司 | Collision detection method and device based on distance transformation graph
CN111539278A (en) * | 2020-04-14 | 2020-08-14 | 浙江吉利汽车研究院有限公司 | Detection method and system for target vehicle
CN111915730A (en) * | 2020-07-20 | 2020-11-10 | 北京建筑大学 | A method and system for automatically generating indoor 3D model from point cloud considering semantics
CN112802092A (en) * | 2021-01-29 | 2021-05-14 | 深圳一清创新科技有限公司 | Obstacle sensing method and device and electronic equipment
CN113160223A (en) * | 2021-05-17 | 2021-07-23 | 深圳中科飞测科技股份有限公司 | Contour determination method, determination device, detection device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李明磊; 刘少创; 杨欢; 亓晨: "双层优化的激光雷达点云场景分割方法" [A two-layer optimization method for lidar point cloud scene segmentation], 测绘学报 (Acta Geodaetica et Cartographica Sinica), no. 02, 15 February 2018, pages 139-144 *

Also Published As

Publication number | Publication date
CN114185061B (en) | 2025-08-15

Similar Documents

Publication | Title
US11282210B2 (en) | Method and apparatus for segmenting point cloud data, storage medium, and electronic device
CN113156421A (en) | Obstacle detection method based on information fusion of millimeter wave radar and camera
CN115049700A (en) | Target detection method and device
CN112270272B (en) | Method and system for extracting road intersections in high-precision map making
WO2020154990A1 (en) | Target object motion state detection method and device, and storage medium
CN112507862A (en) | Vehicle orientation detection method and system based on multitask convolutional neural network
CN114937255B (en) | A detection method and device for laser radar and camera fusion
JP2009199284A (en) | Road object recognition method
Sehestedt et al. | Robust lane detection in urban environments
CN111461221A (en) | A multi-source sensor fusion target detection method and system for autonomous driving
Goga et al. | Fusing semantic labeled camera images and 3D LiDAR data for the detection of urban curbs
CN114550142A (en) | Parking space detection method based on fusion of 4D millimeter wave radar and image recognition
CN117392423A (en) | Lidar-based target true value data prediction method, device and equipment
CN114758096A (en) | Road edge detection method, device, terminal equipment and storage medium
CN114120266A (en) | Vehicle lane change detection method, device, electronic device and storage medium
CN119445301A (en) | A multi-level and multi-attention target detection method based on radar and camera
CN116721162A (en) | External parameter calibration method for radar and camera, electronic equipment and storage medium
CN113673569B (en) | Target detection method, device, electronic device, and storage medium
CN114185061A (en) | Image obstacle target detection method, device, electronic device and storage medium
CN116630931A (en) | Obstacle detection method, system, agricultural machine, electronic device and storage medium
CN116824152A (en) | Target detection method and device based on point cloud, readable storage medium and terminal
CN114701522A (en) | Scene deployment method, device, equipment and medium for unmanned configuration parameters
CN116580300A (en) | Obstacle recognition method, self-mobile device and storage medium
CN114782496A (en) | Object tracking method and device, storage medium and electronic device
CN114882458A (en) | Target tracking method, system, medium and device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
