
Three-dimensional point cloud segmentation method and device, and movable platform

Info

Publication number
CN114556442B
Authority
CN
China
Prior art keywords
dimensional
dimensional point
point cloud
points
movable platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202080070568.4A
Other languages
Chinese (zh)
Other versions
CN114556442A (en)
Inventor
李星河
郑杨杨
刘晓洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
Shenzhen Zhuoyu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhuoyu Technology Co ltd
Publication of CN114556442A
Application granted
Publication of CN114556442B
Legal status: Active (current)
Anticipated expiration


Abstract

A three-dimensional point cloud segmentation method, a device and a movable platform are provided for performing point cloud segmentation on a three-dimensional point cloud acquired by the movable platform. The method comprises: determining adjacent three-dimensional points that are adjacent, in physical space, to three-dimensional points in the three-dimensional point cloud, based on the scanning angle and the scanning moment at which the three-dimensional point cloud is acquired by a laser radar on the movable platform (301); acquiring normal vectors of the local areas where the three-dimensional points and their adjacent three-dimensional points are located (302); and performing point cloud segmentation on the three-dimensional points based on the normal vectors (303).

Description

Three-dimensional point cloud segmentation method and device and movable platform
Technical Field
The disclosure relates to the technical field of computer vision, in particular to a three-dimensional point cloud segmentation method and device and a movable platform.
Background
During the running process of the movable platform, the running state (for example, pose and speed) of the movable platform can be subjected to decision planning through a path planning (planning) module on the movable platform. In order for the planning module to complete decision planning, a point cloud acquisition device on the movable platform is required to acquire three-dimensional point clouds of the surrounding environment and to perform point cloud segmentation so as to distinguish the ground and the obstacle in the three-dimensional point clouds and further distinguish dynamic objects and static objects from the obstacle. Therefore, the point cloud segmentation is an important link for making a decision and planning on the running state of the movable platform.
Conventional point cloud segmentation generally performs segmentation based on either local features or a global model of the road surface on which the movable platform travels. Both approaches place certain requirements on point cloud density, so three-dimensional point clouds must be accumulated over a period of time; moreover, when point cloud segmentation is performed on a dynamic object, both approaches are prone to motion blur, which reduces the reliability of the point cloud segmentation.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a three-dimensional point cloud segmentation method and apparatus, and a movable platform, so as to reliably perform point cloud segmentation on a three-dimensional point cloud of a dynamic object.
According to a first aspect of the embodiment of the present disclosure, a three-dimensional point cloud segmentation method is provided, and is used for performing point cloud segmentation on a three-dimensional point cloud acquired by a movable platform, wherein the method includes determining adjacent three-dimensional points adjacent to three-dimensional points in the three-dimensional point cloud in a physical space based on a scanning angle and a scanning time when the three-dimensional point cloud is acquired by a laser radar on the movable platform, acquiring normal vectors of local areas where the three-dimensional points and the adjacent three-dimensional points of the three-dimensional points are located, and performing point cloud segmentation on the three-dimensional points based on the normal vectors.
According to a second aspect of the embodiment of the present disclosure, there is provided a three-dimensional point cloud segmentation apparatus, including a processor, the three-dimensional point cloud segmentation apparatus is configured to perform point cloud segmentation on a three-dimensional point cloud acquired by a movable platform, the processor is configured to determine an adjacent three-dimensional point adjacent to a three-dimensional point in a physical space based on a scanning angle and a scanning time when the three-dimensional point cloud is acquired by a laser radar on the movable platform, acquire a normal vector of a local area where the three-dimensional point and the adjacent three-dimensional point of the three-dimensional point are located, and perform point cloud segmentation on the three-dimensional point based on the normal vector.
According to a third aspect of embodiments of the present disclosure, there is provided a movable platform, including a housing, a point cloud collecting device disposed on the housing and configured to collect three-dimensional point clouds, and a three-dimensional point cloud segmentation device disposed in the housing and configured to perform the method according to any one of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the embodiments of the present disclosure.
By applying the scheme of the embodiment of the disclosure, the adjacent three-dimensional points of the three-dimensional points in the physical space are determined based on the scanning angle and the scanning time when the laser radar collects the three-dimensional point cloud, and the point cloud segmentation is performed based on the normal vector of the local area where the three-dimensional points and the adjacent three-dimensional points are positioned, so that the basis for performing the point cloud segmentation can be extracted from the natural geometric constraint generated by the laser radar scanning, the geometric characteristics among the point clouds are utilized in the point cloud segmentation process, and the reliability of the point cloud segmentation is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a schematic diagram of a point cloud segmentation process of some embodiments.
Fig. 2 is a schematic diagram of a decision planning process during travel of a mobile platform of some embodiments.
Fig. 3 is a flowchart of a point cloud segmentation method of an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a point cloud arrangement according to an embodiment of the present disclosure.
Fig. 5A is a schematic diagram of a normal vector of a ground point of an embodiment of the present disclosure.
Fig. 5B is a schematic diagram of normal vectors for non-ground points in an embodiment of the present disclosure.
Fig. 6A is a schematic diagram of a scanning manner of a radar according to an embodiment of the present disclosure.
Fig. 6B is a schematic diagram of an application scenario of an embodiment of the present disclosure.
Fig. 6C is a schematic diagram of a three-dimensional point cloud obtained by scanning in the application scenario shown in fig. 6B.
Fig. 7 is an overall flow chart of a point cloud segmentation process of an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of a point cloud segmentation apparatus according to an embodiment of the disclosure.
Fig. 9 is a schematic diagram of a movable platform of an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The term "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In the running process of the movable platform, the running state of the movable platform can be subjected to decision planning through a path planning (planning) module on the movable platform. The point cloud segmentation is an important link for carrying out decision planning on the running state of the movable platform. As shown in fig. 1, a schematic diagram of a point cloud segmentation process of some embodiments. In step 101, a three-dimensional point cloud may be acquired by a point cloud acquisition device on a movable platform, and then in step 102, for the movable platform (e.g., an unmanned vehicle) traveling on the ground, ground segmentation may be performed on the acquired three-dimensional point cloud, that is, three-dimensional points in the three-dimensional point cloud are segmented into ground points and non-ground points. For other types of mobile platforms (e.g., mobile robots), the acquired three-dimensional point cloud may be segmented to segment three-dimensional points in the three-dimensional point cloud into points on the mobile platform's travel surface and points not on the mobile platform's travel surface. For convenience of description, the following description will be made with the running road surface as the ground. In step 103, if one three-dimensional point is a ground point, step 104 is executed to add a ground point label to the three-dimensional point, otherwise step 105 is executed to perform dynamic and static segmentation on the three-dimensional point, namely, the three-dimensional point is segmented into a static point which is static and a dynamic point which moves. In step 106, if one three-dimensional point is a static point, step 107 is executed to add a static point tag to the three-dimensional point, otherwise step 108 is executed to add a dynamic point tag to the three-dimensional point, and in step 109, the tagged three-dimensional point cloud is output to the downstream module. Wherein all or part of the three-dimensional points in the three-dimensional point cloud can be labeled. The tag may include at least one of a first tag for characterizing whether the three-dimensional point is a ground point and a second tag for characterizing whether the three-dimensional point is a static point, and may further include a tag for characterizing other information of the three-dimensional point.
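To make the two-level tagging concrete, the following is a minimal Python sketch of the tag structure described above (a first tag for ground / non-ground and a second tag for static / dynamic); the class and field names are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of the two-level tag described above; names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PointTag:
    is_ground: Optional[bool] = None   # first tag: ground point or not (None = unknown attribute)
    is_static: Optional[bool] = None   # second tag: static or dynamic, only set for non-ground points

def tag_point(is_ground: bool, is_static: Optional[bool] = None) -> PointTag:
    # Steps 103-108: ground points only get the first tag; non-ground points are
    # further split into static points and dynamic points.
    return PointTag(is_ground=is_ground, is_static=None if is_ground else is_static)
```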
The downstream module may be a planning module on the movable platform, such as an electronic control unit (Electronic Control Unit, ECU) or a central processing unit (Central Processing Unit, CPU). After receiving the tagged three-dimensional point cloud, the planning module can perform decision planning on the running state of the movable platform based on the tags of the three-dimensional points. The running state may include at least one of the pose and the speed of the movable platform. Fig. 2 is a schematic diagram of a decision planning process of some embodiments. In step 201 and step 202, the planning module may receive the three-dimensional point cloud and read the tags carried in it. In step 203, it may be determined, based on the tag, whether a three-dimensional point in the three-dimensional point cloud is a point on the road surface (e.g., the ground) on which the movable platform travels. Taking ground points as an example, if so, step 204 is performed to identify three-dimensional points belonging to a lane line from the ground points and determine the posture of the movable platform according to the direction of the lane line, so that the movable platform can travel along the lane line. If the three-dimensional point is a non-ground point, step 205 is performed to determine whether the non-ground point is a static point. If so, step 206 is performed to determine the pose of the movable platform based on the position of the static point. For example, it is determined whether the static point is on a pre-planned travel path, and if so, the path is re-planned to avoid collision of the movable platform with the static point. If the non-ground point is a dynamic point, step 207 is performed to determine at least one of the pose and the speed of the movable platform based on the position and the speed of the dynamic point. For example, if the dynamic point is on a pre-planned travel path of the movable platform and the moving speed of the dynamic point is less than or equal to that of the movable platform, the movable platform is controlled to decelerate, or the posture of the movable platform is adjusted so that the movable platform bypasses the dynamic point. For another example, the movable platform may be controlled to travel at the same speed as the dynamic point.
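As a rough illustration only (the patent does not specify the planner's logic beyond the examples above), a downstream planning step branching on these tags might look like the following sketch; the function name, label strings and return values are assumptions.

```python
from typing import Optional

def plan_action(label: str, on_planned_path: bool,
                point_speed: Optional[float] = None,
                ego_speed: Optional[float] = None) -> str:
    """Branch on the tag of a three-dimensional point, mirroring steps 203-207."""
    if label == "ground":
        return "follow_lane_direction"                                # steps 203-204
    if label == "static":
        return "replan_path" if on_planned_path else "keep_course"    # steps 205-206
    if label == "dynamic":                                            # step 207
        if (on_planned_path and point_speed is not None
                and ego_speed is not None and point_speed <= ego_speed):
            return "decelerate_or_bypass"
        return "match_speed"
    return "keep_course"                                              # unknown attribute
```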
Therefore, point cloud segmentation is an important link in decision planning for the running state of the movable platform, and accurate point cloud segmentation helps make accurate decisions about the running state. Conventional three-dimensional point cloud segmentation generally takes one of two forms. The first is segmentation based on local features: a frame of three-dimensional point cloud is first projected into a two-dimensional grid map or a three-dimensional voxel map; then, taking one or several adjacent grids or voxels in the map as objects, features such as absolute height, relative height, coordinate variance, local normal vector and principal component vector are extracted, and the three-dimensional points in the grids are segmented based on the extracted features. The second is segmentation based on a global model: candidate points are selected using random sample consensus (RANSAC) to perform multiple global road-surface fits on the traveling road surface of the movable platform; after an optimal solution is found, the distances between all points and the ground model are analyzed, points near the ground model are segmented into ground points, and points far from the model are segmented into obstacle points.
However, when point cloud segmentation is performed on a dynamic object, both of the above segmentation methods are prone to motion blur, which reduces the reliability of the point cloud segmentation. In addition, the two methods have the following problems. The segmentation method based on local features needs to traverse the three-dimensional point cloud to build a map, which incurs large time and storage costs, and the local features may fail locally when the point cloud noise is large. The segmentation method based on a global model needs to use a model of low complexity to ensure that the model is not biased by outliers, so the model cannot adapt well to complex road-surface environments, and the segmentation accuracy for short obstacles is low due to fitting errors.
Based on this, the present disclosure provides a three-dimensional point cloud segmentation method, as shown in fig. 3, which may include:
Step 301, determining adjacent three-dimensional points adjacent to three-dimensional points in a physical space based on a scanning angle and a scanning moment when a laser radar on a movable platform collects the three-dimensional point cloud;
Step 302, acquiring normal vectors of the local areas where the three-dimensional points and the adjacent three-dimensional points of the three-dimensional points are located;
Step 303, carrying out point cloud segmentation on the three-dimensional points based on the normal vectors.
The present disclosure can segment the three-dimensional point cloud acquired by the laser radar into three-dimensional points on the traveling road surface of the movable platform and three-dimensional points off the traveling road surface. Compared with the prior art, in which a frame of three-dimensional point cloud acquired by the laser radar is treated as a single group of unordered points, the present disclosure determines the adjacent three-dimensional points of each three-dimensional point in physical space based on the scanning angle and the scanning time at which the laser radar acquires the three-dimensional point cloud, and performs point cloud segmentation based on the normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located. In this way, the basis for point cloud segmentation is extracted from the natural geometric constraints produced by laser radar scanning, the geometric features among the points are exploited during segmentation, and the reliability of the segmentation is improved; this is particularly effective for radars whose accumulated scans otherwise show obvious motion blur. In addition, the method can segment relatively complex road surfaces without a planar assumption, does not need to build a grid map or a voxel map, does not need random sampling, is very lightweight, can obtain a segmentation result with a single traversal of the point cloud, does not need to accumulate the point cloud for a long time, and can output segmentation results at a high frequency.
In step 301, adjacent three-dimensional points that are adjacent in physical space to the three-dimensional points in the three-dimensional point cloud may be determined. The adjacent three-dimensional points of a three-dimensional point may include one or more three-dimensional points whose distance to that three-dimensional point in physical space is less than a preset distance threshold. Since the scanning angle of the laser radar changes continuously while the three-dimensional point cloud is acquired, the adjacent three-dimensional points can be determined based on the scanning angle and the scanning time at which the laser radar acquires the three-dimensional point cloud. For example, if the difference between the scanning angles at which the laser radar scans three-dimensional point A and three-dimensional point B is smaller than a preset angle difference, three-dimensional point A is considered to be an adjacent three-dimensional point of three-dimensional point B. For another example, if the difference between the scanning times at which the laser radar scans three-dimensional point A and three-dimensional point B is smaller than a preset duration, three-dimensional point A is considered to be an adjacent three-dimensional point of three-dimensional point B. Conversely, if the difference between the scanning angles is greater than or equal to the preset angle difference, or the difference between the scanning times is greater than or equal to the preset duration, three-dimensional point A is not considered to be an adjacent three-dimensional point of three-dimensional point B.
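The adjacency test can be expressed compactly. The following Python sketch assumes each return carries its scan angle and scan time; the field names and threshold values are illustrative assumptions rather than values given in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Return3D:
    x: float
    y: float
    z: float
    scan_angle_deg: float   # scanning angle when this return was acquired
    scan_time_s: float      # scanning time when this return was acquired

def are_adjacent(a: Return3D, b: Return3D,
                 max_angle_diff_deg: float = 0.5,
                 max_time_diff_s: float = 1e-4) -> bool:
    # Two returns are treated as adjacent if either the scan-angle difference or the
    # scan-time difference is below its preset threshold (see the examples above).
    return (abs(a.scan_angle_deg - b.scan_angle_deg) < max_angle_diff_deg
            or abs(a.scan_time_s - b.scan_time_s) < max_time_diff_s)
```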
In some embodiments, the three-dimensional point cloud may be acquired by a variety of lidar acquisitions. For example, the three-dimensional point cloud may be acquired by a single line lidar. In this case, the adjacent three-dimensional points of the three-dimensional points include at least one three-dimensional point continuously collected while the single-line laser radar is moving in a first direction, and at least one three-dimensional point continuously collected while the single-line laser radar is moving in a second direction, the first direction being different from the second direction. Optionally, the first direction is perpendicular to the second direction. For example, the first direction may be a direction perpendicular to a traveling road surface of the movable platform, and the second direction may be a direction perpendicular to both the first direction and the traveling direction of the movable platform. When the movable platform is a vehicle traveling on a horizontal ground, the first direction may be a vertical direction and the second direction may be a direction perpendicular to the traveling direction of the vehicle on the horizontal ground. When the movable platform is a glass cleaning robot, the first direction may be a direction perpendicular to the glass plane (generally, a horizontal direction), and the second direction may be a direction perpendicular to the running direction of the robot on the glass plane.
For another example, the three-dimensional point cloud may be acquired by a multi-line lidar. In this case, the adjacent three-dimensional points of the three-dimensional points include three-dimensional points obtained by scanning at least two adjacent lines of lasers in the multi-line laser radar a plurality of times in succession. For another example, the three-dimensional point cloud is acquired by an array laser radar, and the three-dimensional points adjacent to the three-dimensional points comprise three-dimensional points obtained by scanning an m×n array block in the array laser radar once, wherein m and n are integers greater than 1. The three-dimensional point cloud is acquired through the multi-line laser radar or the array laser radar, so that the acquisition efficiency of the three-dimensional point cloud can be effectively improved.
The case in which a three-dimensional point cloud is acquired by the multi-line lidar will be described below with reference to fig. 4. In the figure, beam0 to Beam5 represent different lines of the multi-line laser radar, scan0 to Scan5 represent the 0 th to 5 th scans, each black dot in the figure represents a three-dimensional point obtained by scanning, and the point of the ith row and the jth column represents the three-dimensional point obtained by the jth scanning of the ith line laser. In practical applications, the number of lines of the laser radar may be other, and the present disclosure is not limited thereto, and each line may be scanned simultaneously or sequentially. Further, the number of scans is not limited to 6. The three-dimensional points obtained by scanning are arranged into a two-dimensional array based on different lines of the laser radar and the two dimensions of the scanning times. It should be noted that this arrangement is merely for ease of understanding, and does not represent that the operation of arranging the three-dimensional point cloud into the two-dimensional array is performed in the scheme of the present disclosure, nor does the position of each three-dimensional point in the two-dimensional array represent the true position of the corresponding three-dimensional point in the physical space, but the adjacent positional relationship of each three-dimensional point in the two-dimensional array is the same as the true adjacent positional relationship of the corresponding three-dimensional point in the physical space, and the adjacent positional relationship is used to represent whether or not two three-dimensional points are adjacent three-dimensional points. For example, a three-dimensional point of row 1 and column 1 (referred to as point a) in a two-dimensional array is a neighboring three-dimensional point of a three-dimensional point of row 1 and column 2 (referred to as point B), then in physical space, point a is also a neighboring three-dimensional point of point B. Because the difference between the scanning angles of the adjacent two-line lasers is small, and the scanning angles of the beams 0 to 5 are sequentially increased or sequentially decreased, the three-dimensional points obtained by scanning the adjacent two-line lasers are adjacent three-dimensional points. Meanwhile, the scanning angle of each line of laser is continuously changed, so that the three-dimensional point obtained by scanning the same line of laser twice is also an adjacent three-dimensional point.
Taking three-dimensional point a in a two-dimensional array as an example, three-dimensional points B1, B2, B3, and B4 may be determined as neighboring three-dimensional points of three-dimensional point a. Further, the three-dimensional points B5, B6, B7, and B8 may also be determined as neighboring three-dimensional points of the three-dimensional point a. The three-dimensional point a may also be considered to be its own adjacent three-dimensional point. In practice, not every scan has a return point, and white dots in the figure represent invalid three-dimensional points without a return point. A continuous plurality of invalid neighboring three-dimensional points in the three-dimensional point cloud may be directly marked as points of unknown attribute prior to acquiring the normal vector of the local region. For the multi-line laser radar, the invalid three-dimensional points scanned by adjacent multi-line lasers at the same time can be determined to be a plurality of continuous invalid adjacent three-dimensional points, and for the array laser radar, the invalid three-dimensional points scanned by a plurality of continuous laser sources in the same column or row at the same time can be determined to be a plurality of continuous invalid adjacent three-dimensional points. In addition, isolated points where no neighboring three-dimensional points exist may be marked as points of unknown attribute. The continuous plurality of invalid adjacent three-dimensional points and isolated points can be collectively called as unusable three-dimensional points, and the unusable three-dimensional points are marked as points with unknown attributes, so that the point cloud segmentation is not needed to be carried out on the unusable three-dimensional points in the subsequent process of the point cloud segmentation.
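For the (beam, scan) organization of FIG. 4, neighbor lookup reduces to indexing into a two-dimensional grid. The sketch below is an assumption about how such a grid might be indexed; the layout only encodes adjacency, not metric positions.

```python
def neighbor_indices(beam: int, scan: int, n_beams: int, n_scans: int,
                     eight_connected: bool = True):
    """Yield (beam, scan) indices of a cell's neighbors in the FIG. 4 layout:
    B1-B4 are the 4-connected cells, B5-B8 the diagonal cells."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if eight_connected:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    for db, ds in offsets:
        b, s = beam + db, scan + ds
        if 0 <= b < n_beams and 0 <= s < n_scans:
            yield b, s
```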
In step 302, a three-dimensional normal vector of the local area where the three-dimensional point and its adjacent three-dimensional points are located may be obtained first; if a three-dimensional normal vector cannot be obtained, a two-dimensional normal vector of that local area is obtained instead. Both the three-dimensional normal vector and the two-dimensional normal vector should be significant normal vectors. Principal component analysis may be performed on a three-dimensional point together with its adjacent three-dimensional points to obtain the normal vector of the three-dimensional point. If the number of adjacent three-dimensional points is sufficient and the three-dimensional point is not collinear with its adjacent three-dimensional points, a significant three-dimensional normal vector can be obtained. If the number of adjacent three-dimensional points is insufficient, or, when the laser radar performs reciprocating scanning, the scanned three-dimensional points degenerate to being collinear because the change of angular velocity slows down at the edge of the scan, a significant three-dimensional normal vector cannot be obtained. In that case, when a multi-line laser radar is used, the three-dimensional points obtained by multiple scans of the same line of laser can be taken, and the two-dimensional normal vector of the local area where these three-dimensional points are located can be determined.
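A possible realization of this normal-extraction step is sketched below in Python with numpy. The patent does not give a concrete criterion for when a normal is "significant", so the eigenvalue-ratio test and the construction of the two-dimensional normal in the degenerate (collinear) case are assumptions.

```python
import numpy as np

def local_normal(points, flatness_ratio=1e-2, up=(0.0, 0.0, 1.0)):
    """points: (k, 3) array holding a three-dimensional point and its neighbors.
    Returns ("3d", n), ("2d", n), or (None, None) if no significant normal exists."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return None, None
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # eigenvalues in ascending order
    if eigvals[1] > flatness_ratio * eigvals[2]:
        # Neighborhood spans a surface: the smallest-variance direction is the 3D normal.
        return "3d", eigvecs[:, 0]
    # Neighborhood degenerates to a line (e.g. at the edge of a reciprocating scan):
    # as an assumption, take the 2D normal as the direction perpendicular to the line
    # within the vertical plane containing the line.
    up_vec = np.asarray(up, dtype=float)
    d = eigvecs[:, 2]
    n2 = up_vec - (up_vec @ d) * d
    norm = np.linalg.norm(n2)
    return ("2d", n2 / norm) if norm > 1e-9 else (None, None)
```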
In step 303, a point cloud segmentation may be performed on the three-dimensional points based on the normal vector. If the three-dimensional normal vector of the local area is obtained, the three-dimensional points can be subjected to point cloud segmentation based on the three-dimensional normal vector. If the three-dimensional normal vector of the local area is not obtained, the three-dimensional point can be subjected to point cloud segmentation based on the two-dimensional normal vector of the local area.
If the included angle between the three-dimensional normal vector of the local area and the normal vector of the movable platform running plane is smaller than a first preset included angle, the three-dimensional point can be divided into points on the movable platform running plane. If the included angle between the three-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than a second preset included angle, the three-dimensional point can be divided into barriers. If the included angle between the three-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than or equal to the first preset included angle and smaller than or equal to the second preset included angle, the three-dimensional point can be marked as a point with unknown attribute. Wherein the first preset included angle is smaller than the second preset included angle.
If the included angle between the two-dimensional normal vector of the local area and the normal vector of the movable platform running plane is smaller than a third preset included angle, the three-dimensional point can be divided into points on the movable platform running plane. If the included angle between the two-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than a fourth preset included angle, the three-dimensional point can be divided into barriers. If the included angle between the two-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than or equal to the third preset included angle and smaller than or equal to the fourth preset included angle, the three-dimensional point can be marked as a point with unknown attribute. Wherein the third preset included angle is smaller than the fourth preset included angle.
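The thresholding described in the three preceding paragraphs can be summarized by a small helper; the specific threshold values below are taken from the example ranges given later in the text and are otherwise assumptions.

```python
import numpy as np

def classify_by_normal(normal, plane_normal, kind,
                       thresholds_3d=(30.0, 65.0), thresholds_2d=(15.0, 75.0)):
    """Return "travel_surface", "obstacle", or "unknown" from the angle between the
    local normal and the normal of the movable platform's travel plane.
    kind is "3d" or "2d", selecting the (lower, upper) preset included angles."""
    lo, hi = thresholds_3d if kind == "3d" else thresholds_2d
    normal = np.asarray(normal, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    cosang = abs(normal @ plane_normal) / (np.linalg.norm(normal) * np.linalg.norm(plane_normal))
    angle = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
    if angle < lo:
        return "travel_surface"   # e.g. a ground point
    if angle > hi:
        return "obstacle"
    return "unknown"
```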
The point cloud segmentation process is illustrated below in conjunction with fig. 5A and 5B. For convenience of description, the movable platform is taken to be a vehicle traveling on the ground, where the ground may be a plane or a curved surface. Since the angular variation of the ground is generally smooth, the three-dimensional points may be segmented based on the angle between the normal vector of the local area and the normal vector of the ground. If the angle is small, the local area and the ground can be approximately regarded as two parallel surfaces; that is, the three-dimensional points in the local area do not obstruct the travel of the vehicle, so these three-dimensional points can be segmented into ground points. As shown in FIG. 5A, the angle between the normal vector of the local area where three-dimensional point A is located and the normal vector of the ground, and the angle between the normal vector of the local area where three-dimensional point B is located and the normal vector of the ground, are both relatively small, so both three-dimensional point A and three-dimensional point B can be segmented into ground points.
If the angle is large, the local area and the ground can be approximately regarded as two mutually perpendicular surfaces; that is, the local area stands approximately upright relative to the ground, and the three-dimensional points in the local area may obstruct the travel of the vehicle, so these three-dimensional points can be segmented into non-ground points, or more specifically into obstacles. As shown in FIG. 5B, the angle between the normal vector of the local area where three-dimensional point A is located and the normal vector of the ground, the corresponding angle for three-dimensional point B, and the corresponding angle for three-dimensional point C are all relatively large, and thus three-dimensional points A, B and C can all be segmented into non-ground points.
In some embodiments, the first preset included angle may be less than or equal to 35° and the second preset included angle may be greater than or equal to 65°. For example, the first preset included angle is 30° or 20°. For another example, the second preset included angle is 60° or 75°. In some embodiments, the third preset included angle may be less than or equal to 20°, and the fourth preset included angle may be greater than or equal to 70°. For example, the third preset included angle is 15° or 10°. For another example, the fourth preset included angle is 80° or 75°. The above angle thresholds can be set flexibly according to actual needs and are not further described here.
In some embodiments, the three-dimensional point cloud after being segmented in the above manner may include a plurality of three-dimensional points with failed segmentation, including three-dimensional points with insignificant normal vectors, unavailable three-dimensional points, and the like. In order to further divide the three-dimensional points with the failure in division into point clouds, multiple frames of three-dimensional point clouds can be obtained, a traveling road surface of the movable platform is fitted based on the multiple frames of three-dimensional point clouds, a fitting model of the traveling road surface is obtained, and secondary point cloud division is carried out on the three-dimensional points with the failure in division in the multiple frames of three-dimensional point clouds based on the fitting model. The road surface fitting may be achieved by polynomial fitting or other fitting means, which is not limited by the present disclosure.
The method comprises the steps of acquiring pose information of each frame of three-dimensional point cloud in multiple frames of three-dimensional point clouds acquired by a movable platform, transforming each frame of three-dimensional point cloud to a preset coordinate system based on the pose information corresponding to each frame of three-dimensional point cloud acquired by the movable platform, and fitting a running pavement of the movable platform based on points, located on a running plane of the movable platform, in the transformed multiple frames of three-dimensional point clouds. Alternatively, the preset coordinate system may be a current coordinate system of the movable platform.
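A minimal sketch of this accumulation step, assuming each frame's pose is available as a 4x4 homogeneous transform from that frame's sensor coordinate system into the preset (e.g. current) coordinate system; the function name is an assumption.

```python
import numpy as np

def merge_frames(frames, poses):
    """frames: list of (Ni, 3) point arrays; poses: list of (4, 4) transforms.
    Returns one (sum Ni, 3) array expressed in the preset coordinate system."""
    merged = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :3])   # apply p' = T p to every point
    return np.vstack(merged)
```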
After the road-surface model is fitted, the three-dimensional points whose segmentation failed can be subjected to secondary point cloud segmentation based on their distances to the fitted model. For example, if the distance between a three-dimensional point whose segmentation failed and the fitted model is less than or equal to a preset distance threshold, that three-dimensional point is segmented into a point on the traveling plane of the movable platform. For another example, if the distance between a three-dimensional point whose segmentation failed and the fitted model is greater than the preset distance threshold, that three-dimensional point is marked as a point of unknown attribute. In this way, the accuracy of the point cloud segmentation can be improved.
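As an illustration of the road-surface fitting and the distance-based secondary segmentation, the sketch below fits a low-order polynomial surface z = f(x, y) to the points already labeled as travel-surface points and then relabels the points whose first-pass segmentation failed; the polynomial order and the distance threshold are assumptions, and other fitting models are equally possible.

```python
import numpy as np

def _design_matrix(xy, order=2):
    x, y = xy.T
    return np.stack([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=1)

def fit_road_surface(ground_xyz, order=2):
    """Least-squares fit of z = f(x, y) over points labeled as travel-surface points."""
    ground_xyz = np.asarray(ground_xyz, dtype=float)
    A = _design_matrix(ground_xyz[:, :2], order)
    coeffs, *_ = np.linalg.lstsq(A, ground_xyz[:, 2], rcond=None)
    return coeffs

def resegment_failed(failed_xyz, coeffs, dist_thresh=0.2, order=2):
    """Secondary segmentation: points close to the fitted surface become travel-surface
    points, the rest keep the unknown attribute."""
    failed_xyz = np.asarray(failed_xyz, dtype=float)
    z_fit = _design_matrix(failed_xyz[:, :2], order) @ coeffs
    return np.where(np.abs(failed_xyz[:, 2] - z_fit) <= dist_thresh,
                    "travel_surface", "unknown")
```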
In some embodiments, the three-dimensional point cloud may be acquired by a lidar, and different lidars may use different scanning patterns. Fig. 6A shows several common radar scanning modes, in which the gray areas are the scanning ranges of the laser radar and the arrows indicate the scanning directions of the laser. In scanning mode 1, the laser radar performs a circumferential scan with a scan angle equal to 360°. In scanning mode 2, the lidar performs a reciprocating scan over a large angle, which may be greater than or equal to 135°. In scanning mode 3, the lidar scans over a small fixed angle, which may be less than or equal to 45°. The above embodiments distinguish the scanning modes of the lidar based on scanning angle; those skilled in the art will understand that the specific angles given are only exemplary, and other angle thresholds may be used to distinguish scanning modes in practical applications.
Fig. 6B shows one possible application scenario. A lidar may be mounted on vehicle B, for example at the front of vehicle B, for acquiring a three-dimensional point cloud in front of vehicle B. Of course, the lidar may also be mounted on the top, side or other parts of vehicle B, which is not described further here. For ease of analysis, it is assumed that vehicle B is stationary, vehicle A is traveling in front of vehicle B, and vehicle A gradually moves away from vehicle B at times t1, t2, and t3.
In practical applications, because the number of three-dimensional points in each frame of three-dimensional point cloud is small and insufficient for processing, multiple frames of three-dimensional point clouds usually need to be accumulated into one accumulated frame, and the accumulated three-dimensional point cloud is then processed. Accumulating under different scanning modes yields three-dimensional point clouds of different forms. For ease of understanding, assume the reciprocating scan angle in scanning mode 2 is 180°, the angular range over which vehicle A can be scanned during the laser radar scan is 36°, and the fixed scan angle in scanning mode 3 is equal to that 36° range. If the laser radar scans in scanning mode 1 or scanning mode 2 described above, vehicle A can be scanned during only 1/10 of the entire scanning period of the laser radar; therefore, only 1/10 of the three-dimensional point cloud frames include three-dimensional points on vehicle A, and the interval between the three-dimensional points on vehicle A in different frames is relatively large, as shown by three-dimensional point cloud 1 in fig. 6C. If the laser radar scans in scanning mode 3, vehicle A can be scanned throughout the entire scanning period, so every frame includes three-dimensional points on vehicle A; and because the time interval between two adjacent frames is small, the distance moved by vehicle A between two adjacent frames is also small, so the spacing between the three-dimensional points on vehicle A in adjacent frames is small. Therefore, the three-dimensional point cloud obtained by scanning mode 3 visually exhibits a relatively serious "tailing" problem, as shown by three-dimensional point cloud 2 in fig. 6C. Such a three-dimensional point cloud with "tailing" is prone to errors during processing.
Therefore, when processing a three-dimensional point cloud acquired by a laser radar scanning in scanning mode 3, only one frame of three-dimensional point cloud may be used for point cloud segmentation. Because the method of the present disclosure does not need to build a grid map or a voxel map, does not need random sampling, is very lightweight, and can obtain a segmentation result with a single traversal of the point cloud, an accurate segmentation result can be obtained even if the number of three-dimensional points in one frame is small. Meanwhile, in scenarios with high real-time requirements, a segmentation result can be output as soon as one frame of three-dimensional point cloud is input, which improves the real-time performance of point cloud segmentation. For laser radars using scanning mode 1 or scanning mode 2, multiple frames of point clouds can further be accumulated, and secondary point cloud segmentation can be performed in combination with the distances between the three-dimensional points and the fitted road surface, so as to improve the accuracy of the point cloud segmentation result.
Further, each three-dimensional point in the three-dimensional point cloud can be tagged based on the point cloud segmentation result. The tag may include at least one of numbers, letters and symbols. Taking a numeric tag as an example, a three-dimensional point on the traveling road surface of the movable platform may be represented by 1, a three-dimensional point off the traveling road surface by 0, and a three-dimensional point of unknown attribute by 01.
After the tagged three-dimensional point cloud is obtained, the tagged three-dimensional point cloud may be output. In an embodiment in which only one frame of three-dimensional point cloud is used for point cloud segmentation, the output frame rate of the tagged three-dimensional point cloud may reach a frequency equal to the input frame rate of the three-dimensional point cloud. In an embodiment where n frames of three-dimensional point clouds are accumulated and fitted to the road surface model, the frame rate of the output tagged three-dimensional point clouds is equal to 1/n of the frame rate of the input three-dimensional point clouds.
In practical applications, the point cloud segmentation result may be used by a planning unit on the movable platform to plan the running state of the movable platform. For example, the planning unit may determine the position of the obstacle on the travel path based on the tag obtained from the point cloud segmentation result, thereby deciding whether the speed and posture of the movable platform need to be controlled to avoid the obstacle. The point cloud segmentation result can also be output to a multimedia system on the movable platform, such as a display screen, a voice playing system and the like, for outputting multimedia prompt information to a user.
As shown in fig. 7, there is an overall flow chart of a point cloud segmentation method of an embodiment of the present disclosure.
In step 701, a three-dimensional point cloud of lidar output may be continuously acquired, and the frame rate of the three-dimensional point cloud may be 100Hz, or may be another value.
In step 702, the three-dimensional point cloud may be reorganized according to the scanning time of the laser radar, for example into the form shown in fig. 4. Of course, this step may be omitted; in the subsequent steps, the adjacent three-dimensional points may be obtained directly from the three-dimensional point cloud according to the scanning times of the laser radar and processed. Besides reorganizing the three-dimensional point cloud according to the scanning time of the laser radar, the three-dimensional point cloud can also be reorganized according to the scanning angle of the laser radar.
In step 703, a continuity analysis is performed on the three-dimensional points in the three-dimensional point cloud and their neighboring three-dimensional points to determine unusable three-dimensional points.
In step 704, a three-dimensional (3D) normal vector or a two-dimensional (2D) normal vector for each three-dimensional point is extracted.
In step 705, the point cloud is divided based on the three-dimensional normal vector or the two-dimensional normal vector extracted in step 704, and the category labels of the three-dimensional points in the three-dimensional point cloud are marked.
In step 706, a tagged three-dimensional point cloud is output, and the output frame rate and the input frame rate of the three-dimensional point cloud may be equal, for example, both 100Hz.
In step 707, it may be determined whether n frames are currently accumulated, where n may be predetermined based on an input frame rate of a downstream processing unit that processes the three-dimensional point cloud. If yes, go to step 708, otherwise return to step 701.
In step 708, pose data for the n frames of three-dimensional point clouds collected by the mobile platform may be acquired for the n frames of three-dimensional point clouds accumulated.
In step 709, the n-frame three-dimensional point cloud may be transformed into a current coordinate system of the movable platform based on the pose data corresponding to the n-frame three-dimensional point cloud.
In step 710, a ground model fit may be performed using ground points in the n-frame three-dimensional point cloud. The ground points used in this step may be the ground points marked in step 705.
In step 711, the existing class labels may be optimized using the distance of the ground model from the three-dimensional points.
In step 712, a tagged three-dimensional point cloud may be output at a frame rate equal to 1/n of the frame rate of the input three-dimensional point cloud, which may be (100/n) Hz, for example.
In step 713, it is determined whether the procedure should end. If not, the process returns to step 701 to continue point cloud segmentation.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
The embodiment of the disclosure also provides a point cloud segmentation device, which comprises a processor, wherein the processor is used for executing the following steps:
Determining adjacent three-dimensional points adjacent to the three-dimensional points in the three-dimensional point cloud in a physical space based on a scanning angle and scanning time when the three-dimensional point cloud is acquired by a laser radar on a movable platform;
Acquiring normal vectors of the three-dimensional points and local areas where adjacent three-dimensional points of the three-dimensional points are located;
And carrying out point cloud segmentation on the three-dimensional points based on the normal vector.
In some embodiments, the angular difference between the scan angle of the lidar to the three-dimensional point and the scan angle of an adjacent three-dimensional point to the three-dimensional point is less than a preset angular difference, and/or the time difference between the scan time of the lidar to the three-dimensional point and the scan time of the adjacent three-dimensional point to the three-dimensional point is less than a preset time period.
In some embodiments, the three-dimensional point cloud is acquired by a single-line laser radar, the three-dimensional point adjacent to the three-dimensional point comprises at least one three-dimensional point continuously acquired when the single-line laser radar moves along a first direction, and at least one three-dimensional point continuously acquired when the single-line laser radar moves along a second direction, wherein the first direction is different from the second direction, or the three-dimensional point cloud is acquired by a multi-line laser radar, the three-dimensional point adjacent to the three-dimensional point comprises three-dimensional points obtained by continuously scanning at least two adjacent lines of laser light in the multi-line laser radar for a plurality of times, or the three-dimensional point cloud is acquired by an array laser radar, the three-dimensional point adjacent to the three-dimensional point comprises three-dimensional points obtained by scanning an array block m×n in the array laser radar at a time, and m and n are integers greater than 1.
In some embodiments, the processor is further configured to mark three-dimensional points in the three-dimensional point cloud that are not available as points of unknown attribute prior to acquiring the normal vector of the local region.
In some embodiments, the unavailable three-dimensional points include a continuous plurality of invalid neighboring three-dimensional points, and isolated points where no neighboring three-dimensional points exist.
In some embodiments, the continuous plurality of invalid adjacent three-dimensional points comprises any one of an invalid three-dimensional point scanned by adjacent multi-line lasers of the multi-line laser radar at the same time and an invalid three-dimensional point scanned by continuous plurality of laser sources of the array laser radar at the same column or row at the same time.
In some embodiments, the processor is configured to perform point cloud segmentation on the three-dimensional points based on the three-dimensional normal vector if the three-dimensional normal vector of the local area is acquired, and/or perform point cloud segmentation on the three-dimensional points based on the two-dimensional normal vector of the local area if the three-dimensional normal vector of the local area is not acquired.
In some embodiments, the processor is configured to divide the three-dimensional point into points on the movable platform running plane if an angle between the three-dimensional normal vector of the local area and the normal vector of the movable platform running plane is smaller than a first preset angle, and/or divide the three-dimensional point into obstacles if an angle between the three-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than a second preset angle, and/or mark the three-dimensional point as a point of unknown attribute if an angle between the three-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than or equal to the first preset angle and smaller than or equal to the second preset angle, wherein the first preset angle is smaller than the second preset angle.
In some embodiments, the first predetermined included angle is less than or equal to 35 ° and the second predetermined included angle is greater than or equal to 65 °.
In some embodiments, the processor is configured to divide the three-dimensional point into points on the movable platform running plane if the angle between the two-dimensional normal vector of the local area and the normal vector of the movable platform running plane is smaller than a third preset angle, and/or divide the three-dimensional point into obstacles if the angle between the two-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than a fourth preset angle, and/or mark the three-dimensional point as a point of unknown attribute if the angle between the two-dimensional normal vector of the local area and the normal vector of the movable platform running plane is larger than or equal to the third preset angle and smaller than or equal to the fourth preset angle, wherein the third preset angle is smaller than the fourth preset angle.
In some embodiments, the third preset included angle is less than or equal to 20 ° and the fourth preset included angle is greater than or equal to 70 °.
In some embodiments, after the point cloud segmentation is performed on the three-dimensional points based on the normal vector, the processor is further configured to obtain multiple frames of the three-dimensional point cloud, fit the traveling road surface of the movable platform based on the multiple frames of three-dimensional point clouds to obtain a fitting model of the traveling road surface, and perform secondary point cloud segmentation on the three-dimensional points whose segmentation failed in the multiple frames of three-dimensional point clouds based on the fitting model.
In some embodiments, the processor is configured to obtain pose information when the movable platform collects each frame of three-dimensional point clouds in multiple frames of three-dimensional point clouds, transform each frame of three-dimensional point clouds to a preset coordinate system based on the pose information corresponding to each frame of three-dimensional point clouds collected by the movable platform, and fit a driving road surface of the movable platform based on points, located on a driving plane of the movable platform, in the multiple transformed frames of three-dimensional point clouds.
In some embodiments, the processor is configured to perform a secondary point cloud segmentation on the three-dimensional points that failed in segmentation based on distances of the three-dimensional points that failed in segmentation from the fitted model.
In some embodiments, the processor is configured to segment a three-dimensional point whose segmentation failed into a point on the running plane of the movable platform if the distance between that three-dimensional point and the fitted model is less than or equal to a preset distance threshold, and/or mark a three-dimensional point whose segmentation failed as a point of unknown attribute if the distance between that three-dimensional point and the fitted model is greater than the preset distance threshold.
In some embodiments, after the three-dimensional points are point cloud segmented based on the normal vector of the three-dimensional points, the processor is further configured to tag each three-dimensional point in the three-dimensional point cloud based on a point cloud segmentation result.
In some embodiments, after tagging each three-dimensional point in the three-dimensional point cloud, the processor is further configured to output the tagged three-dimensional point cloud, wherein an output frame rate of the tagged three-dimensional point cloud is equal to an input frame rate of the three-dimensional point cloud.
In some embodiments, a point cloud segmentation result obtained by performing point cloud segmentation on the three-dimensional point cloud is used for planning a driving state of the movable platform by a planning unit on the movable platform.
In some embodiments, the laser radar has a scan angle less than or equal to 45 °.
For specific implementations of the method performed by the processor in the point cloud segmentation apparatus of the embodiments of the present disclosure, reference may be made to the foregoing method embodiments; details are not described herein again.
Fig. 8 shows a more specific hardware architecture of a data processing apparatus according to an embodiment of the present disclosure. The apparatus may include a processor 801, a memory 802, an input/output interface 803, a communication interface 804, and a bus 805. The processor 801, the memory 802, the input/output interface 803, and the communication interface 804 are communicatively connected to one another within the apparatus via the bus 805.
The processor 801 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 802 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 802 may store an operating system and other application programs; when the technical solutions provided in the embodiments of the present specification are implemented by software or firmware, the relevant program code is stored in the memory 802 and executed by the processor 801.
The input/output interface 803 is used to connect with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The communication interface 804 is used to connect with a communication module (not shown in the figure) to enable the present device to interact with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
The bus 805 includes a path to transfer information between components of the device (e.g., the processor 801, the memory 802, the input/output interface 803, and the communication interface 804).
It should be noted that although the above device only shows the processor 801, the memory 802, the input/output interface 803, the communication interface 804, and the bus 805, in the specific implementation, the device may further include other components necessary for realizing normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
As shown in Fig. 9, an embodiment of the present disclosure further provides a movable platform 900, which includes a housing 901, a point cloud collecting device 902 provided on the housing 901 and configured to collect three-dimensional point clouds, and a three-dimensional point cloud segmentation device 903 provided in the housing 901 and configured to perform the method according to any embodiment of the present disclosure. The movable platform 900 may be an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, a mobile robot, or the like, and the point cloud collecting device 902 may be a vision sensor (e.g., a binocular or trinocular vision sensor) or a laser radar.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of the foregoing embodiments.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present embodiments may be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present specification.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
The various technical features in the above embodiments may be combined arbitrarily as long as there is no conflict or contradiction between the combinations of the features; due to space limitations, these combinations are not described one by one, but any combination of the various technical features in the above embodiments also falls within the scope of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present disclosure.

Claims (40)

(Translated from Chinese)

1. A three-dimensional point cloud segmentation method, characterized in that it is used to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform, the method comprising: determining, based on a scanning angle and a scanning time at which a laser radar on the movable platform collects the three-dimensional point cloud, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the three-dimensional point cloud; obtaining a normal vector of a local area where the three-dimensional point and the adjacent three-dimensional points of the three-dimensional point are located; and performing point cloud segmentation on the three-dimensional point based on the normal vector.

2. The method according to claim 1, characterized in that an angle difference between the scanning angle of the laser radar for the three-dimensional point and the scanning angle for the adjacent three-dimensional points of the three-dimensional point is less than a preset angle difference; and/or a time difference between the scanning time of the laser radar for the three-dimensional point and the scanning time for the adjacent three-dimensional points of the three-dimensional point is less than a preset time length.

3. The method according to claim 2, characterized in that the three-dimensional point cloud is collected by a single-line laser radar, and the adjacent three-dimensional points of the three-dimensional point comprise at least one three-dimensional point continuously collected when the single-line laser radar moves along a first direction and at least one three-dimensional point continuously collected when the single-line laser radar moves along a second direction, the first direction being different from the second direction; or the three-dimensional point cloud is collected by a multi-line laser radar, and the adjacent three-dimensional points of the three-dimensional point comprise three-dimensional points obtained by multiple consecutive scans of at least two adjacent laser lines in the multi-line laser radar; or the three-dimensional point cloud is collected by an array laser radar, and the adjacent three-dimensional points of the three-dimensional point comprise three-dimensional points obtained by one scan of an m×n array block in the array laser radar, where m and n are both integers greater than 1.

4. The method according to claim 1, characterized in that the method further comprises: before obtaining the normal vector of the local area, marking unavailable three-dimensional points in the three-dimensional point cloud as points of unknown attribute.

5. The method according to claim 4, characterized in that the unavailable three-dimensional points comprise a plurality of consecutive invalid adjacent three-dimensional points, and isolated points having no adjacent three-dimensional points.

6. The method according to claim 5, characterized in that the plurality of consecutive invalid adjacent three-dimensional points comprise any one of the following: invalid three-dimensional points scanned in a same scan by adjacent lines of a multi-line laser radar; invalid three-dimensional points scanned in one scan by a plurality of consecutive laser sources in a same column or a same row of an array laser radar.

7. The method according to claim 1, characterized in that performing point cloud segmentation on the three-dimensional point based on the normal vector comprises: if a three-dimensional normal vector of the local area is obtained, performing point cloud segmentation on the three-dimensional point based on the three-dimensional normal vector; and/or if the three-dimensional normal vector of the local area is not obtained, performing point cloud segmentation on the three-dimensional point based on a two-dimensional normal vector of the local area.

8. The method according to claim 7, characterized in that performing point cloud segmentation on the three-dimensional point based on the three-dimensional normal vector of the local area comprises: if an angle between the three-dimensional normal vector of the local area and a normal vector of a driving plane of the movable platform is less than a first preset angle, segmenting the three-dimensional point as a point on the driving plane of the movable platform; and/or if the angle between the three-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than a second preset angle, segmenting the three-dimensional point as an obstacle; and/or if the angle between the three-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than or equal to the first preset angle and less than or equal to the second preset angle, marking the three-dimensional point as a point of unknown attribute; wherein the first preset angle is less than the second preset angle.

9. The method according to claim 8, characterized in that the first preset angle is less than or equal to 35°, and the second preset angle is greater than or equal to 65°.

10. The method according to claim 7, characterized in that performing point cloud segmentation on the three-dimensional point based on the two-dimensional normal vector of the local area comprises: if an angle between the two-dimensional normal vector of the local area and a normal vector of a driving plane of the movable platform is less than a third preset angle, segmenting the three-dimensional point as a point on the driving plane of the movable platform; and/or if the angle between the two-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than a fourth preset angle, segmenting the three-dimensional point as an obstacle; and/or if the angle between the two-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than or equal to the third preset angle and less than or equal to the fourth preset angle, marking the three-dimensional point as a point of unknown attribute; wherein the third preset angle is less than the fourth preset angle.

11. The method according to claim 10, characterized in that the third preset angle is less than or equal to 20°, and the fourth preset angle is greater than or equal to 70°.

12. The method according to claim 1, characterized in that after performing point cloud segmentation on the three-dimensional point based on the normal vector, the method further comprises: obtaining multiple frames of the three-dimensional point cloud; fitting a driving road surface of the movable platform based on the multiple frames of the three-dimensional point cloud to obtain a fitting model of the driving road surface; and performing secondary point cloud segmentation, based on the fitting model, on three-dimensional points whose segmentation failed in the multiple frames of the three-dimensional point cloud.

13. The method according to claim 12, characterized in that fitting the driving road surface of the movable platform based on the multiple frames of the three-dimensional point cloud comprises: obtaining pose information of the movable platform when each frame of the multiple frames of the three-dimensional point cloud is collected; transforming each frame of the three-dimensional point cloud into a preset coordinate system based on the pose information corresponding to the collection of that frame by the movable platform; and fitting the driving road surface of the movable platform based on points, located on the driving plane of the movable platform, in the transformed multiple frames of the three-dimensional point cloud.

14. The method according to claim 12, characterized in that performing secondary point cloud segmentation, based on the fitting model, on the three-dimensional points whose segmentation failed in the multiple frames of the three-dimensional point cloud comprises: performing secondary point cloud segmentation on the three-dimensional points whose segmentation failed based on distances between the three-dimensional points whose segmentation failed and the fitting model.

15. The method according to claim 14, characterized in that performing secondary point cloud segmentation on the three-dimensional points whose segmentation failed based on the distances between the three-dimensional points whose segmentation failed and the fitting model comprises: if the distance between a three-dimensional point whose segmentation failed and the fitting model is greater than a preset distance threshold, segmenting the three-dimensional point whose segmentation failed as a point on the driving plane of the movable platform; and/or if the distance between a three-dimensional point whose segmentation failed and the fitting model is greater than the preset distance threshold, marking the three-dimensional point whose segmentation failed as a point of unknown attribute.

16. The method according to claim 1, characterized in that after performing point cloud segmentation on the three-dimensional point based on the normal vector of the three-dimensional point, the method further comprises: labeling each three-dimensional point in the three-dimensional point cloud based on a point cloud segmentation result.

17. The method according to claim 16, characterized in that after labeling each three-dimensional point in the three-dimensional point cloud, the method further comprises: outputting the labeled three-dimensional point cloud; wherein an output frame rate of the labeled three-dimensional point cloud is equal to an input frame rate of the three-dimensional point cloud.

18. The method according to claim 1, characterized in that a point cloud segmentation result obtained by performing point cloud segmentation on the three-dimensional point cloud is used by a planning unit on the movable platform to plan a driving state of the movable platform.

19. The method according to claim 1, characterized in that a scanning angle of the laser radar is less than or equal to 45°.

20. A three-dimensional point cloud segmentation apparatus, comprising a processor, characterized in that the three-dimensional point cloud segmentation apparatus is configured to perform point cloud segmentation on a three-dimensional point cloud collected by a movable platform, and the processor is configured to perform the following steps: determining, based on a scanning angle and a scanning time at which a laser radar on the movable platform collects the three-dimensional point cloud, adjacent three-dimensional points that are adjacent in physical space to a three-dimensional point in the three-dimensional point cloud; obtaining a normal vector of a local area where the three-dimensional point and the adjacent three-dimensional points of the three-dimensional point are located; and performing point cloud segmentation on the three-dimensional point based on the normal vector.

21. The apparatus according to claim 20, characterized in that an angle difference between the scanning angle of the laser radar for the three-dimensional point and the scanning angle for the adjacent three-dimensional points of the three-dimensional point is less than a preset angle difference; and/or a time difference between the scanning time of the laser radar for the three-dimensional point and the scanning time for the adjacent three-dimensional points of the three-dimensional point is less than a preset time length.

22. The apparatus according to claim 21, characterized in that the three-dimensional point cloud is collected by a single-line laser radar, and the adjacent three-dimensional points of the three-dimensional point comprise at least one three-dimensional point continuously collected when the single-line laser radar moves along a first direction and at least one three-dimensional point continuously collected when the single-line laser radar moves along a second direction, the first direction being different from the second direction; or the three-dimensional point cloud is collected by a multi-line laser radar, and the adjacent three-dimensional points of the three-dimensional point comprise three-dimensional points obtained by multiple consecutive scans of at least two adjacent laser lines in the multi-line laser radar; or the three-dimensional point cloud is collected by an array laser radar, and the adjacent three-dimensional points of the three-dimensional point comprise three-dimensional points obtained by one scan of an m×n array block in the array laser radar, where m and n are both integers greater than 1.

23. The apparatus according to claim 20, characterized in that the processor is further configured to: before obtaining the normal vector of the local area, mark unavailable three-dimensional points in the three-dimensional point cloud as points of unknown attribute.

24. The apparatus according to claim 23, characterized in that the unavailable three-dimensional points comprise a plurality of consecutive invalid adjacent three-dimensional points, and isolated points having no adjacent three-dimensional points.

25. The apparatus according to claim 24, characterized in that the plurality of consecutive invalid adjacent three-dimensional points comprise any one of the following: invalid three-dimensional points scanned in a same scan by adjacent lines of a multi-line laser radar; invalid three-dimensional points scanned in one scan by a plurality of consecutive laser sources in a same column or a same row of an array laser radar.

26. The apparatus according to claim 20, characterized in that the processor is configured to: if a three-dimensional normal vector of the local area is obtained, perform point cloud segmentation on the three-dimensional point based on the three-dimensional normal vector; and/or if the three-dimensional normal vector of the local area is not obtained, perform point cloud segmentation on the three-dimensional point based on a two-dimensional normal vector of the local area.

27. The apparatus according to claim 26, characterized in that the processor is configured to: if an angle between the three-dimensional normal vector of the local area and a normal vector of a driving plane of the movable platform is less than a first preset angle, segment the three-dimensional point as a point on the driving plane of the movable platform; and/or if the angle between the three-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than a second preset angle, segment the three-dimensional point as an obstacle; and/or if the angle between the three-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than or equal to the first preset angle and less than or equal to the second preset angle, mark the three-dimensional point as a point of unknown attribute; wherein the first preset angle is less than the second preset angle.

28. The apparatus according to claim 27, characterized in that the first preset angle is less than or equal to 35°, and the second preset angle is greater than or equal to 65°.

29. The apparatus according to claim 26, characterized in that the processor is configured to: if an angle between the two-dimensional normal vector of the local area and a normal vector of a driving plane of the movable platform is less than a third preset angle, segment the three-dimensional point as a point on the driving plane of the movable platform; and/or if the angle between the two-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than a fourth preset angle, segment the three-dimensional point as an obstacle; and/or if the angle between the two-dimensional normal vector of the local area and the normal vector of the driving plane of the movable platform is greater than or equal to the third preset angle and less than or equal to the fourth preset angle, mark the three-dimensional point as a point of unknown attribute; wherein the third preset angle is less than the fourth preset angle.

30. The apparatus according to claim 29, characterized in that the third preset angle is less than or equal to 20°, and the fourth preset angle is greater than or equal to 70°.

31. The apparatus according to claim 20, characterized in that after performing point cloud segmentation on the three-dimensional point based on the normal vector, the processor is further configured to: obtain multiple frames of the three-dimensional point cloud; fit a driving road surface of the movable platform based on the multiple frames of the three-dimensional point cloud to obtain a fitting model of the driving road surface; and perform secondary point cloud segmentation, based on the fitting model, on three-dimensional points whose segmentation failed in the multiple frames of the three-dimensional point cloud.

32. The apparatus according to claim 31, characterized in that the processor is configured to: obtain pose information of the movable platform when each frame of the multiple frames of the three-dimensional point cloud is collected; transform each frame of the three-dimensional point cloud into a preset coordinate system based on the pose information corresponding to the collection of that frame by the movable platform; and fit the driving road surface of the movable platform based on points, located on the driving plane of the movable platform, in the transformed multiple frames of the three-dimensional point cloud.

33. The apparatus according to claim 31, characterized in that the processor is configured to: perform secondary point cloud segmentation on the three-dimensional points whose segmentation failed based on distances between the three-dimensional points whose segmentation failed and the fitting model.

34. The apparatus according to claim 33, characterized in that the processor is configured to: if the distance between a three-dimensional point whose segmentation failed and the fitting model is greater than a preset distance threshold, segment the three-dimensional point whose segmentation failed as a point on the driving plane of the movable platform; and/or if the distance between a three-dimensional point whose segmentation failed and the fitting model is greater than the preset distance threshold, mark the three-dimensional point whose segmentation failed as a point of unknown attribute.

35. The apparatus according to claim 20, characterized in that after performing point cloud segmentation on the three-dimensional point based on the normal vector of the three-dimensional point, the processor is further configured to: label each three-dimensional point in the three-dimensional point cloud based on a point cloud segmentation result.

36. The apparatus according to claim 35, characterized in that after labeling each three-dimensional point in the three-dimensional point cloud, the processor is further configured to: output the labeled three-dimensional point cloud; wherein an output frame rate of the labeled three-dimensional point cloud is equal to an input frame rate of the three-dimensional point cloud.

37. The apparatus according to claim 20, characterized in that a point cloud segmentation result obtained by performing point cloud segmentation on the three-dimensional point cloud is used by a planning unit on the movable platform to plan a driving state of the movable platform.

38. The apparatus according to claim 20, characterized in that a scanning angle of the laser radar is less than or equal to 45°.

39. A movable platform, characterized by comprising: a housing; a point cloud collecting device, provided on the housing and configured to collect a three-dimensional point cloud; and a three-dimensional point cloud segmentation device, provided in the housing and configured to perform the method according to any one of claims 1 to 19.

40. A computer-readable storage medium, characterized in that computer instructions are stored thereon, and when the instructions are executed by a processor, the method according to any one of claims 1 to 19 is implemented.
CN202080070568.4A2020-12-292020-12-29 Three-dimensional point cloud segmentation method and device, and movable platformActiveCN114556442B (en)

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
PCT/CN2020/141076WO2022141116A1 (en)2020-12-292020-12-29Three-dimensional point cloud segmentation method and apparatus, and movable platform

Publications (2)

Publication NumberPublication Date
CN114556442A CN114556442A (en)2022-05-27
CN114556442Btrue CN114556442B (en)2025-01-14

Family

ID=81668402

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202080070568.4AActiveCN114556442B (en)2020-12-292020-12-29 Three-dimensional point cloud segmentation method and device, and movable platform

Country Status (2)

CountryLink
CN (1)CN114556442B (en)
WO (1)WO2022141116A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN115331099A (en)*2022-07-202022-11-11高德软件有限公司Method and device for acquiring pavement point cloud data, electronic equipment and storage medium
CN115113170B (en)*2022-07-212024-12-20电子科技大学 A LiDAR edge feature prediction method based on indoor feature degradation environment
CN115718492B (en)*2022-11-172025-08-19国网智能科技股份有限公司Autonomous navigation method and system for foot-type robot for comprehensive pipe gallery inspection
CN116935050A (en)*2023-07-182023-10-24毛丽超 A lidar ground segmentation method and system
CN116777934B (en)*2023-07-212025-09-09长沙万为机器人有限公司Ground segmentation method for laser point cloud
CN116958303A (en)*2023-09-192023-10-27山东博昂信息科技有限公司Intelligent mapping method and system based on outdoor laser radar
CN117173424B (en)*2023-11-012024-01-26武汉追月信息技术有限公司 A point cloud slope edge line identification method, system and readable storage medium
CN117201685B (en)*2023-11-062024-01-26中国民航大学Surface coverage scanning method, device, equipment and medium for three-dimensional object
CN119714107B (en)*2025-02-252025-05-02湖北省水文地质工程地质勘察院有限公司 A method for monitoring geological disaster deformation based on airborne laser radar
CN120355866B (en)*2025-06-202025-09-16深圳市其域创新科技有限公司 Object singulation method, electronic device and three-dimensional scanning and reconstruction device

Citations (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN110441791A (en)*2019-08-142019-11-12深圳无境智能机器人有限公司A kind of ground obstacle detection method based on the 2D laser radar that leans forward
CN110517354A (en)*2018-05-212019-11-29北京京东尚科信息技术有限公司Method, apparatus, system and medium for three-dimensional point cloud segmentation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
JP5343042B2 (en)*2010-06-252013-11-13株式会社トプコン Point cloud data processing apparatus and point cloud data processing program
CN104794687A (en)*2014-01-202015-07-22鸿富锦精密工业(深圳)有限公司Point clouds simplifying system and method
CN110992341A (en)*2019-12-042020-04-10沈阳建筑大学 A segmentation-based method for building extraction from airborne LiDAR point cloud
CN111142514B (en)*2019-12-112024-02-13深圳市优必选科技股份有限公司Robot and obstacle avoidance method and device thereof
CN111815776B (en)*2020-02-042024-10-25深圳留形科技有限公司Fine geometric reconstruction method for three-dimensional building integrating airborne and vehicle-mounted three-dimensional laser point clouds and street view images
CN111461245B (en)*2020-04-092022-11-04武汉大学Wheeled robot semantic mapping method and system fusing point cloud and image
CN111639682B (en)*2020-05-132024-06-21北京三快在线科技有限公司Ground segmentation method and device based on point cloud data
CN111985322B (en)*2020-07-142024-02-06西安理工大学Road environment element sensing method based on laser radar

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN110517354A (en)*2018-05-212019-11-29北京京东尚科信息技术有限公司Method, apparatus, system and medium for three-dimensional point cloud segmentation
CN110441791A (en)*2019-08-142019-11-12深圳无境智能机器人有限公司A kind of ground obstacle detection method based on the 2D laser radar that leans forward

Also Published As

Publication numberPublication date
WO2022141116A1 (en)2022-07-07
CN114556442A (en)2022-05-27


Legal Events

DateCodeTitleDescription
PB01Publication
PB01Publication
SE01Entry into force of request for substantive examination
SE01Entry into force of request for substantive examination
TA01Transfer of patent application right

Effective date of registration:20240515

Address after:Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Applicant after:Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after:China

Address before:518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Applicant before:SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before:China

TA01Transfer of patent application right
GR01Patent grant
GR01Patent grant
