
Target detection method and device, electronic equipment and storage medium

Info

Publication number
CN111507973B
Authority
CN
China
Prior art keywords
information
obstacle
grid
point cloud
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010314166.6A
Other languages
Chinese (zh)
Other versions
CN111507973A (en)
Inventor
周辉
洪方舟
王哲
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202010314166.6A
Publication of CN111507973A
Priority to JP2021577017A
Priority to PCT/CN2021/087424
Priority to KR1020217043313A
Application granted
Publication of CN111507973B
Legal status: Active (current)
Anticipated expiration


Abstract

The disclosure relates to a target detection method and device, an electronic device, and a storage medium. The method includes: acquiring point cloud information, where the point cloud information at least includes point cloud information corresponding to a target object and an object to be detected; obtaining grid information according to the point cloud information, where the grid information at least includes the object to be detected; and identifying an obstacle in the object to be detected according to the grid information.

Description

Target detection method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of automatic driving, and in particular relates to a target detection method and device, electronic equipment and a storage medium.
Background
Detecting obstacles is an important link in automatic driving for ensuring safe driving. For target detection, deep learning based on neural networks can be used to predict the likely size and position of an obstacle; however, the accuracy of detection realized with deep learning depends on the specific type of training data and on the quality of the training algorithm, which leads to low accuracy when detecting obstacles. The related art offers no effective solution to this problem.
Disclosure of Invention
The disclosure provides a technical scheme for target detection.
According to an aspect of the present disclosure, there is provided a target detection method, the method including:
acquiring point cloud information, wherein the point cloud information at least comprises point cloud information corresponding to a target object and an object to be detected;
according to the point cloud information, grid information is obtained, wherein the grid information at least comprises an object to be detected;
and identifying the obstacle in the object to be detected according to the grid information.
In a possible implementation manner, the obtaining the point cloud information includes:
acquiring a plurality of point cloud information to be processed, which are respectively scanned by at least two sensors;
and performing splicing processing on the plurality of point cloud information to be processed to obtain the point cloud information.
In a possible implementation manner, the point cloud information further includes a sensor identifier;
the step of obtaining grid information according to the point cloud information comprises the following steps:
performing gridding processing on the point cloud information to obtain a grid map, wherein the grid map comprises a plurality of grid areas;
determining whether an obstacle exists in a target grid area according to the category of sensor identifications included in the target grid area in the grid areas;
And obtaining the grid information under the condition that the target grid area has an obstacle.
In a possible implementation manner, the determining whether an obstacle exists in the target grid area according to the category of the sensor identifier included in the target grid area in the multiple grid areas includes:
and determining that an obstacle exists in the target grid area under the condition that the sensor identifications corresponding to at least two pixel points in the target grid area are different.
In a possible implementation manner, the point cloud information further includes height information;
and the obtaining the grid information under the condition that the target grid area has an obstacle further includes:
determining the category of the obstacle existing in the target grid area according to the height information;
and updating the grid information according to the category of the obstacle.
In a possible implementation manner, the determining, according to the height information, a category of the obstacle existing in the target grid area includes:
acquiring sensor identifications and height information respectively corresponding to at least two pixel points in the target grid region;
dividing the at least two pixel points according to the sensor identifications, and taking the pixel points corresponding to the same sensor identification as one group of data to obtain a plurality of groups of pixel point data;
According to the height information, respectively determining the quantity corresponding to the minimum height value in each group of pixel point data in the plurality of groups of pixel point data;
and determining the category of the obstacle according to the quantity corresponding to the minimum height value.
In a possible implementation manner, the identifying, according to the grid information, an obstacle in the object to be detected includes:
carrying out connected region analysis according to the grid information to obtain a connected region;
and identifying the obstacle in the object to be detected according to the connected region.
In a possible implementation manner, after the identifying, according to the connected region, an obstacle in the object to be detected, the method further includes:
acquiring a plurality of points to be processed on a first line segment of the connected region;
selecting at least two reference points from the plurality of points to be processed;
and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area.
In a possible implementation manner, after the identifying, according to the connected region, an obstacle in the object to be detected, the method further includes:
extracting point cloud information corresponding to a target object from the point cloud information, and obtaining a target position of the target object in the grid information according to coordinates of pixel points in the point cloud information corresponding to the target object;
acquiring at least two obstacles located in the grid information;
taking the center point of the target position as a reference, and obtaining a sector area according to guide lines emitted at a preset angle;
and deleting the second obstacle from the grid information in the case that the sector area covers the first obstacle and the second obstacle, and the second obstacle is blocked by the first obstacle.
In a possible implementation manner, the method further includes:
and sending a message of the existence of the obstacle on the navigation path to the target object, so that the target object responds to the message of the existence of the obstacle, and carrying out obstacle avoidance processing and/or re-planning the navigation path according to the obstacle.
According to an aspect of the present disclosure, there is also provided an object detection apparatus including:
the device comprises an acquisition unit, a detection unit and a detection unit, wherein the acquisition unit is used for acquiring point cloud information, and the point cloud information at least comprises point cloud information corresponding to a target object and an object to be detected;
the information processing unit is used for obtaining grid information according to the point cloud information, wherein the grid information at least comprises an object to be detected;
and the detection unit is used for identifying the obstacle in the object to be detected according to the grid information.
In a possible implementation manner, the acquiring unit is configured to:
acquiring a plurality of point cloud information to be processed, which are respectively scanned by at least two sensors;
and performing splicing processing on the plurality of point cloud information to be processed to obtain the point cloud information.
In a possible implementation manner, the point cloud information further includes a sensor identifier;
the information processing unit is used for:
performing gridding processing on the point cloud information to obtain a grid map, wherein the grid map comprises a plurality of grid areas;
determining whether an obstacle exists in a target grid area according to the category of sensor identifications included in the target grid area in the grid areas;
and obtaining the grid information under the condition that the target grid area has an obstacle.
In a possible implementation manner, the information processing unit is configured to:
and determining that an obstacle exists in the target grid area under the condition that the sensor identifications corresponding to at least two pixel points in the target grid area are different.
In a possible implementation manner, the point cloud information further includes height information;
the device further comprises a category determining unit for:
determining the category of the obstacle existing in the target grid area according to the height information;
And updating the grid information according to the category of the obstacle.
In a possible implementation manner, the category determining unit is configured to:
acquiring sensor identifications and height information respectively corresponding to at least two pixel points in the target grid region;
dividing the at least two pixel points according to the sensor identifications, and taking the pixel points corresponding to the same sensor identification as one group of data to obtain a plurality of groups of pixel point data;
according to the height information, respectively determining the quantity corresponding to the minimum height value in each group of pixel point data in the plurality of groups of pixel point data;
and determining the category of the obstacle according to the quantity corresponding to the minimum height value.
In a possible implementation manner, the detecting unit is configured to:
carrying out connected region analysis according to the grid information to obtain a connected region;
and identifying the obstacle in the object to be detected according to the connected region.
In a possible implementation manner, the device further includes a connected region adjusting unit, configured to:
acquiring a plurality of points to be processed on a first line segment of the connected region;
selecting at least two reference points from the plurality of points to be processed;
and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area.
In a possible implementation manner, the apparatus further includes: an occlusion processing unit configured to:
extracting point cloud information corresponding to a target object from the point cloud information, and obtaining a target position of the target object in the grid information according to coordinates of pixel points in the point cloud information corresponding to the target object;
acquiring at least two obstacles located in the grid information;
taking the center point of the target position as a reference, and obtaining a sector area according to guide lines emitted at a preset angle;
and deleting the second obstacle from the grid information in the case that the sector area covers the first obstacle and the second obstacle, and the second obstacle is blocked by the first obstacle.
In a possible implementation manner, the apparatus further includes a sending unit, configured to:
and sending a message of the existence of the obstacle on the navigation path to the target object, so that the target object responds to the message of the existence of the obstacle, and carrying out obstacle avoidance processing and/or re-planning the navigation path according to the obstacle.
According to an aspect of the present disclosure, there is also provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described target detection method.
According to an aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described target detection method.
According to the method and the device, grid information is obtained from point cloud information that at least includes point cloud information corresponding to a target object and an object to be detected; the grid information at least includes the object to be detected, and the obstacle in the object to be detected is identified according to the grid information. Compared with approaches that depend on a specific type of training data and a training algorithm, the content of the point cloud information is richer and is not limited to specific target objects such as vehicles or pedestrians, so the application range is more universal and more target detection scenes can be handled. Identifying the obstacle in the object to be detected from the grid information improves the target detection accuracy for obstacles.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of a target detection method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of grid information according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of pixel points in a grid area originating from different ring IDs according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of pixel points in a grid area originating from the same ring ID according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of obstacle point information in each grid area according to an embodiment of the present disclosure.
Fig. 6a-6b illustrate schematic diagrams of the adjacency manners used for connected regions according to embodiments of the present disclosure.
Fig. 7 shows a schematic view of obstacles in a grid map according to an embodiment of the disclosure.
Fig. 8 shows a schematic diagram of deleting an occluded obstacle in a grid map according to an embodiment of the present disclosure.
Fig. 9 shows a block diagram of an object detection apparatus according to an embodiment of the present disclosure.
Fig. 10 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Fig. 11 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
The detection of the target object, such as the detection of the target object for vehicles or pedestrians in an automatic driving or unmanned scene, can be realized by adopting a deep learning technology based on a neural network. The target detection realized based on the deep learning technology is described as follows:
on the one hand, the accuracy of target detection based on deep learning depends on a specific type of training data, which limits the application scenes to which it applies: detection is feasible for the specific scenes covered by the selected training data, but does not generalize to other, non-specific scenes. For example, for target detection in a common, specific scene such as vehicles or pedestrians, a large amount of related data has been accumulated; using that data as training data of the specific type, objects conforming to those characteristics can be searched for in the input data, so the detection precision in the specific scene is ensured. However, an unusual object, such as a tree trunk or a randomly shaped stone, is hard to detect with deep learning because it was never seen during training. It is therefore difficult to apply deep learning to other, unspecified scenes; a neural network trained on a specific scene performs poorly in different types of scenes, that is, its generalization ability is poor. Moreover, deep learning essentially fits a complex function to the given data (the intended target) so that data following the same distribution yields the correct result after entering the function; obtaining this fitted hypothesis often makes the training process overly complex and prone to overfitting. And if the input data does not follow the distribution of the training data, the output is not necessarily accurate. Since training data can hardly cover all possible road situations, such detection is only reliable for the specific training data and the specific scenes related to it.
On the other hand, the accuracy of target detection based on deep learning depends on the quality of the training algorithm. The behavior of a deep model is not completely controllable, and for given input data the prediction result is unpredictable, so the ideal value of 100% recall is difficult to achieve. Recall is the number of objects identified by target detection divided by the number of actual objects; in an automatic driving or unmanned driving scene, the higher the recall, the higher the driving safety.
In summary, target detection implemented with deep learning in an automatic driving or unmanned driving scene is better suited to detecting target objects such as vehicles or pedestrians; it does not reach the accuracy required for detecting obstacles in the road, which serves collision prevention. Accurate obstacle detection is an important link in ensuring safe automatic driving, and if the required accuracy is not reached, the safety of automatic driving or unmanned driving cannot be ensured.
Fig. 1 shows a flowchart of an object detection method according to an embodiment of the present disclosure. The method is applied to an object detection apparatus, which may be deployed in a terminal device, a server, or another processing device for execution, and may perform processing such as object detection or object classification in automatic driving. The terminal device may be a User Equipment (UE), a mobile device, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in fig. 1, the process includes:
Step S101, obtaining point cloud information, wherein the point cloud information at least comprises point cloud information corresponding to a target object and an object to be detected.
In an example, a plurality of point cloud information to be processed, which is obtained by respectively scanning at least two sensors, may be obtained, and the plurality of point cloud information to be processed may be spliced to obtain the point cloud information, so that gridding processing may be performed subsequently according to the point cloud information to obtain grid information.
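By way of illustration only (not part of the disclosure), the splicing of per-sensor point clouds can be as simple as a concatenation; the (N, 5) array layout and the function name splice_point_clouds are assumptions:

```python
import numpy as np

def splice_point_clouds(clouds):
    """Concatenate per-sensor point clouds into a single array.

    Each cloud is assumed to be an (N_i, 5) float array with columns
    [x, y, z, intensity, ring_id], so the later gridding step sees
    every scanned point in one common coordinate frame.
    """
    return np.concatenate(clouds, axis=0)

# usage: two hypothetical sensor scans merged into one cloud
cloud_a = np.array([[1.0, 2.0, 0.1, 0.8, 0.0], [1.1, 2.0, 0.1, 0.7, 0.0]])
cloud_b = np.array([[1.0, 2.1, 0.4, 0.9, 1.0]])
merged = splice_point_clouds([cloud_a, cloud_b])  # shape (3, 5)
```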
In an example, the at least two sensors may be a plurality of sensors with laser transmitting and receiving functions in the lidar.
In an example, the target object may refer to a target device that is scanned by at least two sensors during target detection, such as a vehicle in an autopilot or unmanned scenario. The target object in the present disclosure is not limited to this target device, but may include a blind-guiding pedestrian or the like.
In an example, the object to be detected may refer to an object to be detected related to a target object in the target detection process, for example, if the target object is a vehicle in an autopilot or unmanned scene, then for safe driving, the object to be detected related to the target object may be a stone, a leaf, a roadblock, etc. on a driving route of the vehicle. The object to be detected may also refer to an object to be detected in the same observation frame as the target object in the target detection process, for example, the target object is still exemplified by a vehicle, and the object to be detected in the same observation frame as the target object may be a roadside billboard, a tree crown, and the like through which the vehicle passes.
Step S102, grid information is obtained according to the point cloud information, wherein the grid information at least comprises an object to be detected.
In one example, the point cloud information may include point cloud information of the target object, such as the point cloud information corresponding to a vehicle in an automatic driving or unmanned driving scene, and may also include point cloud information corresponding to objects to be detected, such as small stones, leaves, roadblocks, roadside billboards, trees and their crowns. It should be noted that in an automatic driving or unmanned driving scene, the small stones, leaves and roadblocks among the objects to be detected are the obstacles to be identified later, while roadside billboards, trees and their crowns lie outside the driving path of the vehicle and therefore need not be considered obstacles; excluding them both reduces the amount of computation and improves the detection precision for obstacles.
In an example, the point cloud information may be subjected to gridding processing to obtain a grid map formed by a plurality of grid areas. Fig. 2 shows a schematic diagram of grid information according to an embodiment of the disclosure; one implementation of the grid information of the disclosure may be a grid map, or another chart form, which is not limited. In fig. 2, the grid map includes a plurality of grid areas 11, and each grid area includes one or more pixel points (in fig. 2, each grid area includes a plurality of pixel points as an example). It is necessary to identify whether an obstacle point exists in a grid area containing pixel points and to mark it with obstacle point information; the identifying process may use the sensor identification (ring ID) in the point cloud information. For example, the obstacle point information may be marked in the grid areas of the grid map according to the ring IDs. Fig. 5 shows a schematic diagram of the obstacle point information in each grid area according to an embodiment of the present disclosure, taking the numbers "0" and "1" as examples of obstacle information: a grid area marked "0" indicates that no obstacle point exists in it, and a grid area marked "1" indicates that an obstacle point exists in it. A grid map containing obstacle point information is thus obtained, from which the obstacle in the object to be detected is identified.
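By way of illustration only and not as part of the original disclosure, the following minimal Python sketch rasterizes a point cloud into such a grid map and marks a cell "1" when its points carry at least two distinct ring IDs; the array layout, the function name build_grid_map, and the 0.1 m default cell size (taken from the application example later in this document) are assumptions:

```python
import numpy as np
from collections import defaultdict

def build_grid_map(points, n_rows, n_cols, cell_size=0.1):
    """Bucket points into an n_rows x n_cols grid centred on the vehicle
    and mark a cell 1 when its points come from >= 2 distinct ring IDs.

    `points` is an (N, 5) array [x, y, z, intensity, ring_id].
    """
    cells = defaultdict(list)                 # (row, col) -> list of points
    for p in points:
        row = int(np.floor(p[0] / cell_size)) + n_rows // 2
        col = int(np.floor(p[1] / cell_size)) + n_cols // 2
        if 0 <= row < n_rows and 0 <= col < n_cols:
            cells[(row, col)].append(p)

    grid = np.zeros((n_rows, n_cols), dtype=np.uint8)
    for (row, col), pts in cells.items():
        if len({int(p[4]) for p in pts}) >= 2:  # rule of fig. 3 / fig. 4
            grid[row, col] = 1
    return grid, cells
```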
Step S103, identifying the obstacle in the object to be detected according to the grid information.
In an example, the grid information may be a grid map including obstacle point information, and according to the grid map including the obstacle point information, an obstacle in the object to be detected may be identified, for example, a grid area is marked with "1", which indicates that there is an obstacle point in the grid area, and a plurality of obstacle points are connected together, so as to obtain a connected area corresponding to the obstacle.
In the disclosure, point cloud information is combined with a non-deep-learning technology. Compared with implementations that depend on a specific type of training data and a training algorithm in deep learning, the target object can be scanned by at least two sensors to obtain point cloud information corresponding to the target object and the object to be detected, and grid information at least comprising the object to be detected is obtained from that point cloud information. Since the grid information contains the obstacle information, the obstacle in the object to be detected can be identified according to it, which improves the target detection accuracy for obstacles.
In an example, in the process that at least two sensors scan the target object respectively, the point cloud information can be obtained according to the scan detection signal sent by the sensors and the received return signal. For example, the sensor transmits a scanning detection signal to the vehicle and the obstacle thereof, and then the sensor receives a return signal reflected from the vehicle and the obstacle thereof, and compares the return signal with the transmitted scanning detection signal, so that parameters such as position information, altitude information, distance information, speed information, attitude information, shape information and the like can be obtained, and the vehicle and the obstacle thereof can be tracked and identified according to the parameters.
It should be noted that the point cloud information of the present disclosure is a set of massive points expressing the spatial distribution and surface characteristics of objects in a target area under the same spatial reference system. It records, for each pixel point, the three-dimensional coordinates (where the X/Y two-dimensional coordinates calibrate the position information among the above parameters and the third dimension Z calibrates the height information), color information (RGB), laser reflection intensity (Intensity) information, and so on.
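As a reading aid only, one pixel point of such a cloud could be modelled as follows; the class and field names are assumptions, not the disclosure's:

```python
from dataclasses import dataclass

@dataclass
class CloudPoint:
    """One point of the cloud described above (field names assumed)."""
    x: float          # X/Y calibrate the planar position information
    y: float
    z: float          # Z calibrates the height information
    rgb: tuple        # color information (R, G, B)
    intensity: float  # laser reflection intensity
    ring_id: int      # sensor identification (ring ID)
```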
In an example, the ring ID may be obtained from the point cloud information (each pixel point includes three-dimensional coordinates, RGB, intensity information, and its corresponding ring ID information). Whether an obstacle exists in a target grid area is determined according to the categories of ring IDs included in that target grid area among the plurality of grid areas, and the grid information is obtained in the case that an obstacle exists in the target grid area. This includes the following:
1. In the case that the ring IDs corresponding to at least two pixel points in the target grid area are different identifications, it is determined that an obstacle exists in the target grid area. Fig. 3 shows a schematic diagram of pixel points in a grid area originating from different ring IDs according to an embodiment of the present disclosure. As shown in fig. 3, there are the sensor 21, the sensor 22 and the sensor 23, the obstacle 24, and a plurality of pixel points (identified by (1)-(6), respectively). The triangular shape of the obstacle is only schematic and is not intended to limit its actual shape. The laser beams emitted by the sensors 21 and 22 would not originally fall into the target grid area where the obstacle 24 is located; it is the presence of the obstacle 24 in that area that causes their reflection. When the sensor 21 scans to obtain point cloud information composed of a plurality of pixel points, the laser beam 211 emitted by the sensor 21 meets the obstacle 24 and is reflected, so the pixel point (1) falls into the target grid area; when the sensor 22 scans, the laser beams 221 and 222 emitted by the sensor 22 meet the obstacle 24 and are reflected, so the pixel points (2) and (3) fall into the target grid area; when the sensor 23 scans, the laser beams 231, 232 and 233 emitted by the sensor 23 do not meet the obstacle 24, and the pixel points (4), (5) and (6) fall into the target grid area. It can be seen that the pixel points (1)-(6) are obtained by different sensors, so their corresponding ring IDs are different identifications; in this case an obstacle exists in the target grid area.
It should be noted that in the embodiment of fig. 3 the plurality of sensors (the sensor 21, the sensor 22 and the sensor 23) need not be placed apart from each other as drawn; they may be placed next to each other, or even mounted together while presenting different projection angles. They are drawn apart here only to make the explanation of obstacle recognition more intuitive. The embodiment is not limited to the sensor placement shown, and other placements fall within the protection scope of the present disclosure.
2. In the case that the ring IDs corresponding to at least two pixel points in a target grid area are the same identification, it is determined that no obstacle exists in the target grid area. Fig. 4 shows a schematic diagram of pixel points in a grid area originating from the same ring ID according to an embodiment of the present disclosure. As shown in fig. 4, there is a sensor 31 and a plurality of pixel points (identified by (7)-(10), respectively). When the sensor 31 scans to obtain point cloud information composed of a plurality of pixel points, the laser beams 311, 312, 313 and 314 emitted by the sensor 31 do not meet any obstacle, and the pixel points (7), (8), (9) and (10) fall into the target grid area. It can be seen that the pixel points (7)-(10) are obtained by the same sensor, so their corresponding ring IDs are the same identification; in this case no obstacle exists in the target grid area.
The objects to be detected included in the point cloud information may comprise obstacles such as small stones and leaves, as well as other objects considered non-obstacles in an automatic driving or unmanned driving scene, such as tree crowns and signboards. Therefore, in addition to the obstacle determination by ring ID, height information may be used to verify the obstacle determined by the ring ID and avoid possible misjudgment, for example identifying a tree crown or a signboard as an obstacle. Laser points can fall on such aerial objects, so they may be misjudged as obstacles even though they do not belong to the obstacles of concern in an automatic driving or unmanned driving scene. Since tree crowns, signboards and similar aerial objects are much higher than common obstacles such as stones and leaves, the height information of the pixel points in the point cloud information can be used to exclude objects misidentified as obstacles from the grid area.
In an example, when the point cloud information further includes height information, the obtaining of the grid information in the case that the target grid area has an obstacle further includes: determining the category of the obstacle existing in the target grid area according to the height information; and updating the grid information according to the category of the obstacle. For example, in the case where the grid information is a grid map, after the grid information is updated, a more accurate grid map containing the obstacle point information may be obtained for subsequent target detection processing.
In an example, determining the category of the obstacle existing in the target grid area according to the height information includes: acquiring the ring IDs and height information respectively corresponding to at least two pixel points in the target grid area; dividing the at least two pixel points according to the ring ID, taking the pixel points corresponding to the same ring ID as one group of data, to obtain multiple groups of pixel point data; respectively determining, according to the height information, the quantity corresponding to the minimum height value in each group of pixel point data; and determining the category of the obstacle according to that quantity. In one example, the quantity corresponding to the minimum height value may be compared with a threshold range to determine the category of the obstacle. The classifying statistics may be implemented by this comparison, where the threshold range may correspond to a classification result obtained from the division based on ring ID: if the quantity corresponding to the minimum height value is greater than or equal to a number threshold (ring_count_th) and the minimum height value is less than a height threshold (height_th), an obstacle is considered to exist in the grid area of the grid map. For example, ring_count_th = 3 may be set, and height_th may take the height of the vehicle, for example 2 m.
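A minimal sketch of this per-cell decision, under one reading of the machine-translated criterion (a cell holds an obstacle when at least ring_count_th ring groups have their minimum height below height_th); the function name and array layout are assumptions:

```python
def classify_cell(pts, ring_count_th=3, height_th=2.0):
    """Decide whether a cell holds a (low) obstacle or an aerial object.

    `pts` is a list of points [x, y, z, intensity, ring_id]; height_th
    of 2.0 m follows the vehicle-height example in the text.
    """
    min_heights = {}
    for p in pts:
        ring, z = int(p[4]), p[2]
        min_heights[ring] = min(z, min_heights.get(ring, z))
    # count ring groups whose minimum height stays below height_th
    low = [h for h in min_heights.values() if h < height_th]
    return "obstacle" if len(low) >= ring_count_th else "non-obstacle"
```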
After a plurality of division results are obtained through the division based on ring ID, connected region analysis can be performed on the grid map to obtain connected regions, and the obstacle in the object to be detected is identified according to the connected regions. The obstacle may be represented as a polygon such as a concave polygon, a convex polygon, a rectangle or a triangle, as long as it can be distinguished from other objects. An example of the present disclosure adopts a convex polygon: on the one hand, a convex polygon has more sides than a rectangle or a triangle, so the shape of an obstacle is represented more easily and accurately; on the other hand, compared with a concave polygon, a convex polygon does not introduce excessive computation, so the computational cost is moderate.
In one example, the above connected region analysis may search for connected grid areas containing obstacle point information, based on the obstacle point information marked "0" or "1" in each grid area as shown in fig. 5, thereby forming a "connected region".
Fig. 6a-6b illustrate schematic diagrams of the adjacency manners used for connected regions according to embodiments of the present disclosure. The connected region operation may be implemented by the breadth-first search (BFS, Breadth First Search) algorithm, with two adjacency manners: 4-adjacency or 8-adjacency. The smallest unit in an image is a pixel, and each pixel has 8 neighboring pixels, giving two adjacency relations: 4-adjacency (as shown in fig. 6a) considers 4 points in total, namely the pixels above, below, to the left and to the right; 8-adjacency (as shown in fig. 6b) additionally includes the points at the diagonal positions, for 8 pixel points in total. If a pixel point A is adjacent to a pixel point B, the mutually connected points form one region, and points that are not connected form other, different regions. The set of all mutually connected pixel points is thus called a "connected region". An obstacle may be obtained by the connected region operation. Fig. 7 shows a schematic view of obstacles in a grid map according to an embodiment of the present disclosure; as shown in fig. 7, the grid map includes a plurality of obstacles represented by convex polygons.
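By way of illustration, BFS-based connected region labeling over a 0/1 grid map could look like the following sketch; everything beyond the 4/8-adjacency offsets is an assumption:

```python
from collections import deque

# Neighbor offsets for the two adjacency schemes of figs. 6a-6b.
NEIGHBORS_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
NEIGHBORS_8 = NEIGHBORS_4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def connected_regions(grid, neighbors=NEIGHBORS_8):
    """Return the connected regions of obstacle cells (value 1) via BFS."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    regions, next_label = [], 1
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                queue, region = deque([(r, c)]), []
                labels[r][c] = next_label
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in neighbors:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                regions.append(region)
                next_label += 1
    return regions
```

Passing NEIGHBORS_4 instead gives the 4-adjacency of fig. 6a.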
In an example, after identifying the obstacle in the object to be detected according to the connected region, the method further includes: acquiring a plurality of points to be processed on a first line segment of the connected region, selecting at least two reference points from them, connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first region. For example, the first region may be smaller than the connected region. If the obstacle is a convex polygon, this adjustment of the connected region may be called convex hull processing: for example, a certain line segment forming the connected region (the first line segment) has 10 points to be processed; 6 reference points are selected from them and connected to obtain one line segment (the second line segment); adjusting the connected region according to the second line segment yields the first region, which is smaller than the connected region. That is, after convex hull processing, the number of convex edges representing the obstacle is reduced (fewer points mean correspondingly fewer edges), so convex hull processing reduces the amount of computation.
In an example, after identifying the obstacle in the object to be detected according to the connected region, the method further includes: extracting the point cloud information corresponding to the target object from the point cloud information, and obtaining the target position of the target object in the grid information according to the coordinates of the pixel points in that point cloud information; acquiring at least two obstacles located in the grid information; obtaining a sector area from guide lines emitted at a preset angle with the center point of the target position as reference; and deleting the second obstacle from the grid information in the case that the sector area covers the first obstacle and the second obstacle, and the second obstacle is blocked by the first obstacle. Fig. 8 illustrates a schematic diagram of deleting an occluded obstacle in a grid map according to an embodiment of the present disclosure. As shown in fig. 8, the grid map includes a target object and at least two obstacles; the target object may be a vehicle 41, the first obstacle may be a warning object 42, and the second obstacle may be one or more small stones 43. A sector area is obtained from guide lines emitted at a preset angle α with the center point of the current position of the vehicle 41 as reference, and the warning object 42 and the one or more stones 43 are covered by the sector area. Since the one or more stones 43 are occluded by the warning object 42, they cannot be seen from the center position of the vehicle 41, and only the warning object 42 is seen; the vehicle will not pass beyond the warning object 42, so the occluded stones 43 can be regarded as negligible and deleted from the grid map. It should be noted that the second obstacle is not limited to occluded small stones; it may also be grass on the roadside, and the like.
In one example, the method includes: and sending a message of the existence of the obstacle on the navigation path to a target object (such as a vehicle) so that the target object can respond to the message of the existence of the obstacle, and carrying out obstacle avoidance processing and/or re-planning the navigation path according to the obstacle.
Application example:
An application example according to the above embodiment includes the following:
1. Scan the target object with a plurality of sensors in the laser radar to obtain point cloud information containing the target object and the object to be detected, and input the point cloud information to obtain grid information containing the object to be detected, where the grid information may be a grid map marked with obstacle point information.
The plurality of sensors may be arranged in one or more laser radars. Together they construct the point cloud information of the whole scene, and the whole scanning area (the area covered by one group of scanned point clouds of each laser radar at the same moment or within the same time period) corresponds to the grid map. Because each laser emitter in a laser radar points in a direction with a different included angle to the horizontal plane, each sensor scans one ring of point cloud information at a certain angle every time the laser radar completes a scan.
If no obstacle protrudes from a certain grid area in the grid map, the grid area is a plane whose height almost matches the ground. The laser emitted by the neighbors of the sensor aimed at that grid area is not blocked by it, and all laser reaching the grid area originates from the same sensor; therefore the pixel points falling into the grid area come from the same sensor and their corresponding ring IDs are the same, that is, the pixel points falling within the grid area are scanned by the same sensor. If an obstacle protrudes from a certain grid area, the laser emitted by the neighbors of the sensor aimed at that grid area is blocked by the protruding obstacle, so the laser reaching the grid area originates from different sensors; therefore multiple sensors correspond to the pixel points falling into the grid area and their ring IDs are different, that is, the pixel points falling within the grid area are scanned by different sensors.
The number of distinct ring IDs of the pixels falling into a grid area is thus used to determine whether the grid area contains an obstacle. A further optimization is needed: aerial objects such as tree crowns and signboards do not belong to the obstacles the vehicle is concerned with and must be excluded from the obstacles to be avoided, yet laser beams from a plurality of sensors can still be reflected into the same grid area by them, so the pixel points falling into that grid area would also correspond to multiple sensors with different ring IDs. Therefore the height information of the pixel points must be taken into consideration to check the obstacle obtained by the ring ID, filtering out "obstacles" higher than a certain height and thereby further improving the accuracy of obstacle detection.
It should be noted that if the input point cloud information is a fusion of multiple laser radar scanning results, an N×M grid map may be constructed for the point cloud information scanned by each laser radar; the side length of each grid may be preset to represent 0.1 m in reality, and the coordinates (N/2, M/2) are set as the vehicle center. If the input point cloud information is the scanning result of one laser radar, an N×M grid map is constructed directly. Whether the point cloud information is a fusion of several laser radar scanning results or one laser radar scanning result, the following obstacle identification method is used to judge obstacles and obtain a grid map with obstacle point information.
In the process of judging whether an obstacle exists in a certain grid area according to the ring ID and the height information, the pixel points in the point cloud information scanned by a single laser radar are distributed into the grid according to their position information. For each grid area, the ring IDs of the points allocated to it are counted (the same ring ID is not counted repeatedly); the pixel points corresponding to the same ring ID are taken as one group of data to obtain multiple groups of pixel point data; according to the height information, the quantity corresponding to the minimum height value in each group is determined; and the category of the obstacle is determined according to that quantity. In one example, the quantity corresponding to the minimum height value may be compared with a threshold range to determine the category of the obstacle. The classifying statistics (also called clustering) can be implemented by this comparison: if, in a certain classification result, the quantity corresponding to the minimum height value is greater than or equal to ring_count_th and the minimum height value is less than height_th, the grid area is considered to contain an obstacle. The benefit of classifying statistics is that a section of obstacle continuous in height is found, rather than a single point. Finally a grid map is obtained for each laser radar, and the grid maps are fused by applying an OR operation to each element, giving the output grid map with obstacle point information. An example of the OR operation: in a grid map, "1" indicates an obstacle and "0" indicates no obstacle; given two 1×3 grid maps [1, 1, 0] and [0, 1, 0], the OR operation marks a position "1" if either of the corresponding grid areas is marked "1", so the result of OR-ing the two grid maps is [1, 1, 0].
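The element-wise OR fusion admits a short sketch (the use of numpy here is an assumption):

```python
import numpy as np

# Fusing per-lidar grid maps element-wise, matching the
# [1,1,0] OR [0,1,0] -> [1,1,0] example above.
grid_lidar_1 = np.array([1, 1, 0], dtype=bool)
grid_lidar_2 = np.array([0, 1, 0], dtype=bool)
fused = np.logical_or(grid_lidar_1, grid_lidar_2).astype(np.uint8)
print(fused)  # [1 1 0]
```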
In the process of checking, with the added height information, whether an obstacle exists in a certain grid area on the basis of the ring ID judgment, a compensation mode can be adopted to improve the detection distance and thus ensure detection quality. In an example, when counting the ring IDs of the pixel points in each grid area, the point cloud information becomes sparse on distant objects, so the farther the distance between the grid area and the vehicle center, the larger the surrounding area that must be counted in compensation. The compensation mode may be: also count the range of N×N size centered on the grid area, where N = round(1 + a × distance), round denotes rounding, and a is a small preset constant. For the classifying statistics, all values in the array of per-group minimum height values are first sorted; if the height difference between two consecutive items in the sorted array is greater than a certain threshold (gap_th), they are divided into two classes.
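A sketch of the gap_th grouping and the N×N compensation window, assuming non-empty input and an illustrative constant a = 0.02 (both assumptions, not values from the disclosure):

```python
def cluster_min_heights(min_heights, gap_th):
    """Split the sorted per-group minimum heights wherever two
    consecutive values differ by more than gap_th."""
    ordered = sorted(min_heights)
    clusters, current = [], [ordered[0]]
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > gap_th:
            clusters.append(current)
            current = []
        current.append(cur)
    clusters.append(current)
    return clusters

def neighborhood_size(distance, a=0.02):
    """N = round(1 + a * distance): the farther the grid area is from
    the vehicle center, the larger the N x N counting window."""
    return round(1 + a * distance)

# usage: heights 0.10 and 0.15 stay together; 0.90 splits off
print(cluster_min_heights([0.15, 0.90, 0.10], gap_th=0.1))
# [[0.1, 0.15], [0.9]]
```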
The pixel points reflected from a distant object are distributed more sparsely than those from a near object, so the value of gap_th is corrected to a certain degree according to the distance between the grid area and the vehicle center; different compensation schemes may also be adopted according to the installation position and angle of the sensor, the sparsity of the point cloud, and other circumstances. In one example, gap_th = a × distance + b, in meters, where a and b are small constants; the calculated gap_th is a small value, for example 0.1 m.
As for the value of ring_count_th, it may be compensated according to the sparseness of the point cloud information; in one example a fixed value, for example 3, is taken. As for the value of height_th, since the sensor (which may be installed on the laser radar) has a certain elevation angle, height_th cannot be a fixed value and may be corrected, according to the distance between the grid area and the vehicle center, by a certain angle. The values of the above parameters may be set according to actual conditions; the specific setting method is not limited.
2. Perform connected region analysis on the obstacle points in the grid map to obtain connected regions, and obtain the obstacles represented by convex polygons from the connected regions.
After the above grid map is obtained, the value of each grid area indicates whether it contains an obstacle. Because of the sparsity of the point cloud information, some large objects are split into several parts; the grid map can be processed with an image dilation algorithm so that the several parts of the same object are connected. Next, connected region analysis is performed (each connected region represents an object, such as an obstacle). For each connected region, a convex hull is calculated; then a simplification such as the Ramer-Douglas-Peucker algorithm is applied to each convex hull, which reduces the number of edges of the hull and thus the amount of computation. Finally, FOV analysis is performed to remove occlusions, that is, to remove small obstacles that cannot be observed from the viewpoint of the vehicle's center point.
In one example of the convex hull operation: 1. for a polyline to be simplified, connect a straight line AB between its first point A and last point B; 2. traverse the polyline to find the point C farthest from the line AB and calculate the distance from C to AB; 3. compare the distance with a preset threshold; if it is smaller than the threshold, take the line AB as the approximation of this polyline, and processing of this polyline is finished; 4. if the distance is greater than the threshold, use point C to divide the polyline into two sections AC and CB, and apply steps 1-4 to each section; 5. when all sections have been processed, connect the division points in order; the resulting polyline serves as the approximation of the initial polyline, giving the updated convex hull.
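A compact recursive sketch of that procedure (a standard Ramer-Douglas-Peucker formulation; the function name and the epsilon parameter are assumptions):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a polyline, matching
    steps 1-5 above; `points` is a list of (x, y) tuples."""
    if len(points) < 3:
        return points
    (ax, ay), (bx, by) = points[0], points[-1]
    norm = math.hypot(bx - ax, by - ay) or 1e-12
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):          # step 2: farthest point C
        px, py = points[i]
        d = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:                          # step 3: AB approximates it
        return [points[0], points[-1]]
    left = rdp(points[:index + 1], epsilon)      # step 4: recurse on AC
    right = rdp(points[index:], epsilon)         # step 4: recurse on CB
    return left[:-1] + right                     # step 5: join at C
```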
In one example of the FOV analysis: for every two convex hulls C1 and C2, it is necessary to detect whether C1 can be observed from the position of the vehicle given the occlusion by C2. For each point P on C1, connect the center point A and the point P, and detect whether the straight line AP passes through C2: judge by cross product operation whether all points on C2 lie on the same side of the line AP, and if so, the line AP is considered not to pass through C2. After traversing the points on C1, the number n of points on convex hull C1 that cannot be observed from the vehicle is obtained; if n is greater than or equal to a certain threshold fov_th, C1 is considered invisible and C1 is deleted.
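The same-side test admits a short sketch via the 2D cross product; collinear vertices (cross product zero) are conservatively treated as crossing, which is an assumption beyond the text:

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def line_misses_hull(a, p, hull):
    """True if every vertex of `hull` lies strictly on one side of the
    line AP, i.e. the sight line from vehicle center A to point P does
    not pass through the occluding convex hull."""
    signs = [cross(a, p, v) for v in hull]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def hidden_point_count(a, hull_c1, hull_c2):
    """Number n of points on C1 whose sight line from A crosses C2."""
    return sum(not line_misses_hull(a, p, hull_c2) for p in hull_c1)
```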
Regarding the value of fov_th, correction is required according to the distance from the obstacle to the vehicle. One correction example is: fov_th = min(1, ceil(con_point_num × (1 - distance/a))), where con_point_num is the number of points on the corresponding convex hull, distance is the distance from the convex hull to the host vehicle, a is a certain larger constant whose value may be the maximum perceivable distance, and ceil is the upward rounding function.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
The above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; for brevity, the details are not repeated herein.
In addition, the present disclosure further provides a target detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the target detection methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method parts, which are not repeated here.
Fig. 9 shows a block diagram of a target detection apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the apparatus includes: an obtaining unit 51, configured to obtain point cloud information, where the point cloud information includes at least point cloud information corresponding to a target object and an object to be detected; an information processing unit 52, configured to obtain grid information according to the point cloud information, where the grid information includes at least the object to be detected; and a detection unit 53, configured to identify an obstacle in the object to be detected according to the grid information.
In a possible implementation manner, the acquiring unit is configured to: acquiring a plurality of point cloud information to be processed, which are respectively scanned by at least two sensors; and performing splicing processing on the plurality of point cloud information to be processed to obtain the point cloud information.
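One common way such splicing is realized is sketched below, under the assumption that each sensor's 4x4 extrinsic (sensor-to-vehicle) matrix is known from calibration; the function name and signature are illustrative, not from the patent:

import numpy as np

def splice_point_clouds(clouds, extrinsics):
    # Transform each sensor's Nx3 cloud into a common vehicle frame using its
    # 4x4 extrinsic matrix, then concatenate the results into one spliced cloud.
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx4 homogeneous coordinates
        merged.append((homo @ T.T)[:, :3])               # apply the rigid transform
    return np.vstack(merged)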
In a possible implementation manner, the point cloud information further includes a ring ID, and the information processing unit is configured to: performing gridding processing on the point cloud information to obtain a grid map, where the grid map includes a plurality of grid areas; determining whether an obstacle exists in a target grid area among the plurality of grid areas according to the categories of the ring IDs included in the target grid area; and obtaining the grid information in the case that an obstacle exists in the target grid area.
In a possible implementation manner, the information processing unit is configured to: and determining that an obstacle exists in the target grid area under the condition that ring IDs corresponding to at least two pixel points in the target grid area are different.
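A minimal sketch of this ring-ID criterion, assuming points are (x, y, z) tuples already shifted into non-negative grid coordinates; the intuition (our reading, not stated here) is that a vertical obstacle is swept by several laser rings while a flat patch usually is not. Names, the cell indexing, and the signature are illustrative:

import numpy as np

def build_obstacle_flags(points, ring_ids, cell_size, grid_shape):
    # Collect the ring IDs observed in each grid cell; flag the cell as an
    # obstacle when at least two different ring IDs fall into it.
    rings_per_cell = {}
    for (x, y, _z), ring in zip(points, ring_ids):
        cell = (int(x // cell_size), int(y // cell_size))
        rings_per_cell.setdefault(cell, set()).add(ring)
    flags = np.zeros(grid_shape, dtype=bool)
    for (i, j), rings in rings_per_cell.items():
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1] and len(rings) >= 2:
            flags[i, j] = True
    return flags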
In a possible implementation manner, the point cloud information further includes altitude information, and the apparatus further includes a category determining unit, configured to: determining the category of the obstacle existing in the target grid area according to the height information; and updating the grid information according to the category of the obstacle.
In a possible implementation manner, the category determining unit is configured to: acquiring ring IDs and height information respectively corresponding to at least two pixel points in the target grid region; dividing the at least two pixel points according to the ring ID, and taking the pixel point corresponding to the same ring ID as one group of data to obtain a plurality of groups of pixel point data; according to the height information, respectively determining the quantity corresponding to the minimum height value in each group of pixel point data in the plurality of groups of pixel point data; and determining the category of the obstacle according to the quantity corresponding to the minimum height value.
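One plausible reading of this grouping rule is sketched below: per ring group, count the points lying at that group's minimum height; the category decision then keys off these counts. The concrete mapping from counts to categories is not specified in this excerpt, so it is left to the caller; the names and the tolerance are illustrative assumptions:

from collections import defaultdict

def min_height_counts(cell_points, tol=1e-3):
    # Group the target cell's points by ring ID, then count, per group, how
    # many points lie at that group's minimum height. The obstacle category is
    # subsequently decided from these counts.
    groups = defaultdict(list)
    for ring_id, height in cell_points:  # (ring_id, z) pairs in the cell
        groups[ring_id].append(height)
    counts = {}
    for ring_id, heights in groups.items():
        lowest = min(heights)
        counts[ring_id] = sum(1 for h in heights if abs(h - lowest) < tol)
    return counts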
In a possible implementation manner, the detecting unit is configured to: carrying out connected-region analysis according to the grid information to obtain a connected region; and identifying the obstacle in the object to be detected according to the connected region.
In a possible implementation manner, the device further includes a connected region adjusting unit, configured to: acquiring a plurality of points to be processed on a first line segment of the connected region; selecting at least two reference points from the plurality of points to be processed; and connecting the at least two reference points to obtain a second line segment, and adjusting the connected region according to the second line segment to obtain a first area. In an example, the first area may be smaller than the connected region.
In a possible implementation manner, the apparatus further includes an occlusion processing unit, configured to: extracting point cloud information corresponding to the target object from the point cloud information, and obtaining a target position of the target object in the grid information according to coordinates of pixel points in the point cloud information corresponding to the target object; acquiring at least two obstacles located in the grid information; obtaining a sector area according to guide lines emitted at a preset angle, with the center point of the target position as the reference; and deleting a second obstacle from the grid information in the case that the sector area covers a first obstacle and the second obstacle, and the second obstacle is blocked by the first obstacle.
In a possible implementation manner, the apparatus further includes a sending unit, configured to: and sending a message of the existence of the obstacle on the navigation path to the target object, so that the target object responds to the message of the existence of the obstacle, and carrying out obstacle avoidance processing and/or re-planning the navigation path according to the obstacle.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which includes: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method described above.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 10 is a block diagram of an electronic device 800, according to an example embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 10, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 11 is a block diagram of an electronic device 900, according to an example embodiment. For example, the electronic device 900 may be provided as a server. Referring to FIG. 11, electronic device 900 includes a processing component 922 that further includes one or more processors and memory resources represented by memory 932 for storing instructions, such as applications, executable by processing component 922. The application programs stored in memory 932 may include one or more modules that each correspond to a set of instructions. Further, processing component 922 is configured to execute instructions to perform the above-described methods.
The electronic device 900 may also include a power supply component 926 configured to perform power management for the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958. The electronic device 900 may operate based on an operating system stored in memory 932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 932, including computer program instructions executable by processing component 922 of electronic device 900 to perform the above-described method.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: portable computer disks, hard disks, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), Static Random Access Memory (SRAM), portable Compact Disk Read-Only Memory (CD-ROM), Digital Versatile Disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The various embodiments may be combined with each other without violating logic, and the description of the various embodiments is focused on, and for the part of the focused description, reference may be made to the description of other embodiments.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

CN202010314166.6A2020-04-202020-04-20Target detection method and device, electronic equipment and storage mediumActiveCN111507973B (en)

Priority Applications (4)

Application Number | Priority Date | Filing Date | Title
CN202010314166.6A | 2020-04-20 | 2020-04-20 | CN111507973B (en) Target detection method and device, electronic equipment and storage medium
JP2021577017A | 2020-04-20 | 2021-04-15 | JP2022539093A (en) Target detection method and device, electronic device, storage medium, and program
PCT/CN2021/087424 | 2020-04-20 | 2021-04-15 | WO2021213241A1 (en) Target detection method and apparatus, and electronic device, storage medium and program
KR1020217043313A | 2020-04-20 | 2021-04-15 | KR20220016221A (en) Target detection method and apparatus, electronic device, storage medium and program

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010314166.6A | 2020-04-20 | 2020-04-20 | CN111507973B (en) Target detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN111507973A (en) | 2020-08-07
CN111507973B (en) | 2024-04-12

Family

ID=71878738

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010314166.6A (Active) | CN111507973B (en) Target detection method and device, electronic equipment and storage medium | 2020-04-20 | 2020-04-20

Country Status (4)

Country | Link
JP (1) | JP2022539093A (en)
KR (1) | KR20220016221A (en)
CN (1) | CN111507973B (en)
WO (1) | WO2021213241A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111507973B (en)* | 2020-04-20 | 2024-04-12 | 上海商汤临港智能科技有限公司 | Target detection method and device, electronic equipment and storage medium
CN112697188B (en)* | 2020-12-08 | 2022-12-23 | 北京百度网讯科技有限公司 | Detection system test method and device, computer equipment, media and program products
CN112597937B (en)* | 2020-12-29 | 2025-04-04 | 广州极飞科技股份有限公司 | Obstacle clustering method, device, electronic device and storage medium
CN115236694A (en)* | 2021-04-15 | 2022-10-25 | 阿里巴巴新加坡控股有限公司 | Obstacle detection method, device, electronic device and storage medium
CN113901970B (en)* | 2021-12-08 | 2022-05-24 | 深圳市速腾聚创科技有限公司 | Obstacle detection method and apparatus, medium, and electronic device
CN114241145A (en)* | 2021-12-10 | 2022-03-25 | 中国民航科学技术研究院 | Monitoring method of building height in airport clearance area
CN114445802A (en)* | 2022-01-29 | 2022-05-06 | 北京百度网讯科技有限公司 | Point cloud processing method and device and vehicle
CN114519686B (en)* | 2022-02-17 | 2025-06-17 | 北京京东乾石科技有限公司 | Method, device, electronic device and medium for detecting curbs
CN114549781A (en)* | 2022-02-21 | 2022-05-27 | 脸萌有限公司 | Data processing method and device, electronic equipment and storage medium
CN116968014A (en)* | 2022-03-29 | 2023-10-31 | 北京小米机器人技术有限公司 | Robot, motion control method and device thereof, electronic equipment and storage medium
CN117091516B (en)* | 2022-05-12 | 2024-05-28 | 广州镭晨智能装备科技有限公司 | Method, system and storage medium for detecting thickness of circuit board protective layer
CN114926818B (en)* | 2022-05-27 | 2025-07-15 | 杭州飞步科技有限公司 | Obstacle detection model evaluation method, device and computer storage medium
CN115471629A (en)* | 2022-09-01 | 2022-12-13 | 深圳鹏行智能研究有限公司 | Object information extraction method, device, robot and computer-readable storage medium
CN115330969A (en)* | 2022-10-12 | 2022-11-11 | 之江实验室 | A vectorized description method of local static environment for ground unmanned vehicles
CN116503629A (en)* | 2023-04-03 | 2023-07-28 | 杭州飞步科技有限公司 | Obstacle clustering method, device, equipment, storage medium and program product
CN116541113A (en)* | 2023-04-21 | 2023-08-04 | 深圳绿米联创科技有限公司 | Object determination method, device, storage medium and computer equipment
CN119545290A (en)* | 2023-08-29 | 2025-02-28 | 华为技术有限公司 | Communication method and communication device
CN118570559B (en)* | 2024-07-31 | 2024-10-22 | 浙江大华技术股份有限公司 | Target calibration method, target identification method, electronic device, and readable storage medium


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN105957145A (en)*2016-04-292016-09-21百度在线网络技术(北京)有限公司Road barrier identification method and device
WO2018180285A1 (en)*2017-03-312018-10-04パイオニア株式会社Three-dimensional data generation device, three-dimensional data generation method, three-dimensional data generation program, and computer-readable recording medium having three-dimensional data generation program recorded thereon
JP6969738B2 (en)*2017-07-102021-11-24株式会社Zmp Object detection device and method
US10354444B2 (en)*2017-07-282019-07-16The Boeing CompanyResolution adaptive mesh that is generated using an intermediate implicit representation of a point cloud
JP7056842B2 (en)*2018-03-232022-04-19株式会社豊田中央研究所 State estimator and program
JP7128577B2 (en)*2018-03-302022-08-31セコム株式会社 monitoring device
CN111507973B (en)*2020-04-202024-04-12上海商汤临港智能科技有限公司Target detection method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102779280A (en)* | 2012-06-19 | 2012-11-14 | 武汉大学 | Traffic information extraction method based on laser sensor
CN106951847A (en)* | 2017-03-13 | 2017-07-14 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium
CN109145677A (en)* | 2017-06-15 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium
CN109840448A (en)* | 2017-11-24 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Information output method and device for automatic driving vehicle
JP2019207655A (en)* | 2018-05-30 | 2019-12-05 | 株式会社IHI | Detection device and detection system
JP2020038631A (en)* | 2018-08-30 | 2020-03-12 | キヤノン株式会社 | Information processing apparatus, information processing method, program and system
CN110147706A (en)* | 2018-10-24 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Recognition method and device of obstacle, storage medium, electronic device
CN109635685A (en)* | 2018-11-29 | 2019-04-16 | 北京市商汤科技开发有限公司 | Target object 3D detection method, device, medium and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lanxiang Zheng et al., "The Obstacle Detection Method of UAV Based on 2D Lidar," IEEE Access, 2019, vol. 7, pp. 163437-163448.*
Lou Xinyu et al., "Research on a Real-Time Road Obstacle Detection and Classification Algorithm Using 64-Line LiDAR," Automotive Engineering, 2019, vol. 41, no. 8, pp. 779-784.*

Also Published As

Publication number | Publication date
CN111507973A (en) | 2020-08-07
WO2021213241A1 (en) | 2021-10-28
KR20220016221A (en) | 2022-02-08
JP2022539093A (en) | 2022-09-07

Similar Documents

Publication | Title
CN111507973B (en) | Target detection method and device, electronic equipment and storage medium
US11468581B2 (en) | Distance measurement method, intelligent control method, electronic device, and storage medium
US11308809B2 (en) | Collision control method and apparatus, and storage medium
US20210312214A1 (en) | Image recognition method, apparatus and non-transitory computer readable storage medium
US20210365696A1 (en) | Vehicle intelligent driving control method and device and storage medium
CN110543850B (en) | Target detection method and device, neural network training method and device
US11301726B2 (en) | Anchor determination method and apparatus, electronic device, and storage medium
WO2020055767A1 (en) | Mapping objects detected in images to geographic positions
CN111881827B (en) | Target detection method and device, electronic equipment and storage medium
CN111104920B (en) | Video processing method and device, electronic equipment and storage medium
CN111523599B (en) | Target detection method and device, electronic equipment and storage medium
CN113313115B (en) | License plate attribute identification method and device, electronic equipment and storage medium
CN114821573B (en) | Target detection method, device, storage medium, electronic device and vehicle
CN109696173A (en) | Vehicle body navigation method and device
CN113065392A (en) | Robot tracking method and device
CN110390252B (en) | Obstacle detection method and device based on prior map information and storage medium
CN113344900B (en) | Airport runway intrusion detection method, airport runway intrusion detection device, storage medium and electronic device
CN111832338A (en) | Object detection method and device, electronic device and storage medium
EP3369602A1 (en) | Display controller, display control method, and carrier means
CN111860074B (en) | Target object detection method and device, and driving control method and device
CN113433965B (en) | Unmanned aerial vehicle obstacle avoidance method and device, storage medium and electronic equipment
CN113157848A (en) | Method and device for determining air route, electronic equipment and storage medium
CN109829393B (en) | Moving object detection method and device and storage medium
CN116142173A (en) | Vehicle control method, device, equipment and storage medium based on image depth
CN115588180A (en) | Map generation method, map generation device, electronic apparatus, medium, and program product

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
