CN110866449A - Method and device for identifying target object in road - Google Patents

Method and device for identifying target object in road

Info

Publication number
CN110866449A
CN110866449A (Application CN201911001284.5A)
Authority
CN
China
Prior art keywords
point cloud
dimensional
cloud data
target
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911001284.5A
Other languages
Chinese (zh)
Inventor
刘冬冬
赫桂望
蔡金华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201911001284.5A
Publication of CN110866449A
Legal status: Pending

Abstract

The invention discloses a method and a device for identifying a target object in a road, and relates to the technical field of computers. One embodiment of the method comprises: collecting point cloud data of a road, where the point cloud data includes three-dimensional spatial coordinate information and reflection intensity information for each point; converting the point cloud data into four-dimensional semantic targets based on the three-dimensional spatial coordinates and reflection intensity of each point, and clustering the four-dimensional semantic targets to obtain target point cloud clusters; and identifying a target object in each target point cloud cluster. This embodiment requires neither sample-based learning nor registration with auxiliary data: the target object can be identified fully automatically and rapidly from the spatial features of the three-dimensional point cloud data, which simplifies the computation and reduces hardware cost.

Description

Method and device for identifying target object in road
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for identifying a target object in a road.
Background
To ensure driving safety, target objects in roads, such as manhole covers and traffic signs, need to be quickly extracted and localized so as to support obstacle avoidance or traffic decisions at the vehicle end. At present, the following technical means are mainly used to extract and localize such target objects:
(1) performing model training by adopting machine learning or deep learning based on a large number of positive and negative samples, and performing prediction identification on the basis of a learning training model;
(2) relying on both image data and point cloud data: the two are registered according to a strict conversion relation, and their features are fused for identification, thereby achieving the purpose of extracting and localizing the manhole cover.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
(1) Sample-based learning and prediction requires a large number of positive and negative samples to be manually labeled in advance; the extraction and localization accuracy is limited by the number of samples and the scenes they cover, and degrades when the data sample is small or the positive and negative samples are unbalanced.
(2) Fusing image and point cloud features requires accurate calibration of the extrinsic conversion relation between the sensors of the acquisition equipment, which makes the calculation complex; moreover, a camera sensor must be installed to acquire image data, which increases hardware cost.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for identifying a target object in a road, which do not require sample learning or other data-assisted registration, and can fully automatically and rapidly identify the target object based on spatial features of three-dimensional point cloud data, thereby simplifying a calculation process and reducing hardware cost.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method of identifying a target object in a road, including:
collecting point cloud data of a road; the point cloud data includes: three-dimensional space coordinate information and reflection intensity information of each point cloud;
converting the point cloud data into four-dimensional semantic targets based on three-dimensional space coordinate information and reflection intensity information of each point cloud, and clustering the four-dimensional semantic targets to obtain target point cloud clusters;
identifying a target object in each of the target point cloud clusters.
Optionally, before converting the point cloud data into a four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, the method further includes: and dividing the point cloud data into a plurality of single-frame point cloud data according to the time stamps, filtering each single-frame point cloud data by adopting a characteristic plane filtering method, and merging the plurality of single-frame point cloud data after filtering.
Optionally, the feature plane filtering method includes: fitting a point cloud plane of the single-frame point cloud data according to the spatial geometric information of all points in the single-frame point cloud data, and filtering out the point cloud data whose distance from the point cloud plane meets a first filtering condition;
wherein the target object is a ground target, and the first filtering condition is: the distance from the point cloud plane is greater than a preset distance threshold; or, the target object is a non-ground target, and the first filtering condition is: and the distance from the point cloud plane is less than or equal to a preset distance threshold.
Optionally, fitting a point cloud plane of the single-frame point cloud data according to the spatial geometric information of all point clouds in the single-frame point cloud data includes:
filtering the single-frame point cloud data based on the height features, and filtering out the point cloud data with the height higher than a preset height in the single-frame point cloud data; performing plane fitting in the filtered point cloud to obtain a plurality of fitting planes; and determining an included angle between the plane normal of each fitting plane and the height direction, and taking the fitting plane with the minimum included angle or the maximum number of point clouds in the plane as a point cloud plane of the single-frame point cloud data.
Optionally, before converting the point cloud data into a four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, the method further includes: generating a travel trajectory curve from the point cloud data, forming dividing lines parallel to the trajectory curve on both sides of it, and filtering out the point cloud data scattered outside the region between the dividing lines.
Optionally, a K-means clustering method is adopted to cluster the four-dimensional semantic objects.
Optionally, identifying a target object in each of the target point cloud clusters includes:
projecting the target point cloud cluster into a two-dimensional grid, and then converting the two-dimensional grid into a two-dimensional feature image, where one grid cell represents one pixel; detecting the target object region in the two-dimensional feature image using a Hough algorithm; and mapping the coordinates of the target object region on the two-dimensional feature image back to the point cloud in three-dimensional space to determine the three-dimensional coordinates of the target object.
Optionally, projecting the target point cloud cluster into a two-dimensional grid, and then converting the two-dimensional grid into a two-dimensional feature image, including:
1) determining a point cloud range of the target point cloud cluster, dividing a two-dimensional equidistant grid according to the point cloud range, projecting the target point cloud cluster into the two-dimensional grid, determining the area of a plane covered by the target point cloud cluster according to the number of the projected grids, and calculating the plane density of the point cloud according to the area and the number of points of the target point cloud cluster;
2) calculating the average point distance according to the point cloud density to obtain a plane space scale;
3) creating a two-dimensional grid index according to the plane space scale, dividing the point cloud range of the target point cloud cluster into space two-dimensional grids, and recording the point cloud index falling into each grid;
4) converting the divided two-dimensional space grid into a two-dimensional characteristic image; a grid represents a pixel, and the pixel value represented by each grid is calculated from intensity information expressed by the point cloud falling within the grid.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for identifying a target object in a road, including:
the acquisition unit is used for acquiring point cloud data of a road; the point cloud data includes: three-dimensional space coordinate information and reflection intensity information of each point cloud;
the clustering unit is used for converting the point cloud data into a four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, and clustering the four-dimensional semantic target to obtain a target point cloud cluster;
and the identification unit is used for identifying a target object in each target point cloud cluster.
Optionally, the apparatus of the embodiment of the present invention further includes a first filtering unit, configured to: before the point cloud data are converted into a four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, the point cloud data are divided into a plurality of single-frame point cloud data according to time stamps, each single-frame point cloud data are filtered by adopting a feature plane filtering method, and the plurality of single-frame point cloud data after being filtered are combined.
Optionally, the feature plane filtering method includes: fitting a point cloud plane of the single-frame point cloud data according to the spatial geometric information of all points in the single-frame point cloud data, and filtering out the point cloud data whose distance from the point cloud plane meets a first filtering condition;
wherein the target object is a ground target, and the first filtering condition is: the distance from the point cloud plane is greater than a preset distance threshold; or, the target object is a non-ground target, and the first filtering condition is: and the distance from the point cloud plane is less than or equal to a preset distance threshold.
Optionally, fitting a point cloud plane of the single-frame point cloud data according to the spatial geometric information of all point clouds in the single-frame point cloud data includes:
filtering the single-frame point cloud data based on the height features, and filtering out the point cloud data with the height higher than a preset height in the single-frame point cloud data; performing plane fitting in the filtered point cloud to obtain a plurality of fitting planes; and determining an included angle between the plane normal of each fitting plane and the height direction, and taking the fitting plane with the minimum included angle or the maximum number of point clouds in the plane as a point cloud plane of the single-frame point cloud data.
Optionally, the apparatus in this embodiment of the present invention further includes a second filtering unit, configured to: before the point cloud data is converted into a four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, generate a travel trajectory curve from the point cloud data, form dividing lines parallel to the trajectory curve on both sides of it, and filter out the point cloud data scattered outside the region between the dividing lines.
Optionally, the clustering unit clusters the four-dimensional semantic objects by using a K-means clustering method.
Optionally, the identifying unit identifies a target object in each of the target point cloud clusters, including:
projecting the target point cloud cluster into a two-dimensional grid, and then converting the two-dimensional grid into a two-dimensional feature image, where one grid cell represents one pixel; detecting the target object region in the two-dimensional feature image using a Hough algorithm; and mapping the coordinates of the target object region on the two-dimensional feature image back to the point cloud in three-dimensional space to determine the three-dimensional coordinates of the target object.
Optionally, projecting the target point cloud cluster into a two-dimensional grid, and then converting the two-dimensional grid into a two-dimensional feature image, including:
1) determining a point cloud range of the target point cloud cluster, dividing a two-dimensional equidistant grid according to the point cloud range, projecting the target point cloud cluster into the two-dimensional grid, determining the area of a plane covered by the target point cloud cluster according to the number of the projected grids, and calculating the plane density of the point cloud according to the area and the number of points of the target point cloud cluster;
2) calculating the average point distance according to the point cloud density to obtain a plane space scale;
3) creating a two-dimensional grid index according to the plane space scale, dividing the point cloud range of the target point cloud cluster into space two-dimensional grids, and recording the point cloud index falling into each grid;
4) converting the divided two-dimensional space grid into a two-dimensional characteristic image; a grid represents a pixel, and the pixel value represented by each grid is calculated from intensity information expressed by the point cloud falling within the grid.
According to a third aspect of embodiments of the present invention, there is provided an electronic device for identifying a target object in a road, comprising:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
One embodiment of the above invention has the following advantages or benefits: the method is directly based on the point cloud data of the road for processing, sample learning and other data auxiliary registration are not needed, the target object can be identified automatically and rapidly based on the spatial characteristics of the three-dimensional point cloud data, the calculation process is simplified, and the hardware cost is reduced.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic view of a main flow of a method of identifying a target object in a road according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the main flow of identifying a manhole cover in a roadway in an alternative embodiment of the invention;
FIG. 3 is a schematic representation of road width filtering in an alternative embodiment of the present invention;
FIG. 4 is a schematic diagram of the main modules of an apparatus for identifying target objects in a roadway according to an embodiment of the present invention;
FIG. 5 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 6 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
According to an aspect of an embodiment of the present invention, there is provided a method of identifying a target object in a road.
Fig. 1 is a schematic diagram of a main flow of a method for identifying a target object in a road according to an embodiment of the present invention, and as shown in fig. 1, the method for identifying a target object in a road includes: step S101, step S102, and step S103.
S101, collecting point cloud data of a road; the point cloud data includes: three-dimensional spatial coordinate information and reflection intensity information for each point cloud.
The method for acquiring the point cloud data may be selected according to actual situations, for example, directly acquired by scanning a road, or acquired from other systems, which is not specifically limited in the present invention.
Illustratively, the point cloud data may be acquired by a vehicle-mounted mobile laser scanning system. Such a system consists of a laser scanner (LIDAR), a GNSS/INS combined inertial navigation system, an odometer and other sensors, fixed to the carrier in a known relative geometry, and dynamically acquires three-dimensional information of the surfaces of target objects on the road and on both sides while the carrier moves.
LIDAR is the main sensor for data acquisition and accurately records the coordinates of targets in three-dimensional space, while the GNSS/INS and the odometer, in a tightly coupled mode, acquire the position and attitude of the acquisition system in real time. The process of generating the high-precision point cloud comprises the following steps: apply a rotation and translation transform to the local-coordinate-system point cloud recorded by the LIDAR, according to the extrinsic parameters from the LIDAR to the inertial navigation system, to transform it into the inertial-navigation-centered coordinate system; then rotate and translate that point cloud into the global frame according to the real-time pose recorded by the inertial navigation system, producing a real-world-scale high-precision point cloud.
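The two-stage rigid transform described above (LIDAR frame to inertial-navigation frame, then to the global frame) can be sketched as follows. This is a minimal illustration: the extrinsic parameters and the pose values are made-up placeholders, not values from the patent.

```python
import numpy as np

def transform_points(points, R, t):
    """Apply a rigid transform (rotation R, translation t) to Nx3 points."""
    return points @ R.T + t

# Hypothetical extrinsics: LIDAR frame -> INS (inertial navigation) frame.
R_lidar_to_ins = np.eye(3)
t_lidar_to_ins = np.array([0.5, 0.0, 1.2])  # metres, illustrative only

# Hypothetical real-time pose recorded by the INS: INS frame -> global frame.
yaw = np.pi / 2
R_ins_to_global = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                            [np.sin(yaw),  np.cos(yaw), 0.0],
                            [0.0,          0.0,         1.0]])
t_ins_to_global = np.array([100.0, 200.0, 30.0])

local_points = np.array([[1.0, 0.0, 0.0]])  # a point in the LIDAR frame
ins_points = transform_points(local_points, R_lidar_to_ins, t_lidar_to_ins)
global_points = transform_points(ins_points, R_ins_to_global, t_ins_to_global)
```

Chaining the two transforms per scan, using each scan's real-time pose, accumulates the local scans into one global, real-world-scale point cloud.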
It should be noted that, besides the vehicle-mounted mobile laser scanning system, the invention may also acquire the point cloud data of the road through other devices, such as mobile communication devices, such as a mobile phone and a notebook computer.
The collected point cloud data contains not only spatial three-dimensional coordinate information but also reflection intensity information. The reflection intensity is affected by factors such as scanning distance, scanning angle and target material, and therefore takes different values. Different materials have different reflection intensities, while points of the same material have similar reflection intensity values.
Step S102, converting the point cloud data into a four-dimensional semantic target based on three-dimensional space coordinate information and reflection intensity information of each point cloud, and clustering the four-dimensional semantic target to obtain a target point cloud cluster.
Since the respective portions of the target object are in close proximity, the point cloud of the target object is closer in distance in the three-dimensional space. In addition, the materials of the parts with the same material in the target object have similar reflection intensity values. Therefore, the point cloud data are converted into four-dimensional semantic targets based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, and the four-dimensional semantic targets are clustered, so that target point cloud clusters corresponding to target objects can be effectively screened out.
The clustering method can be selectively determined according to actual conditions, for example, by using an Euclidean distance method, a K-means clustering algorithm (a clustering analysis algorithm for iterative solution), and the like.
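As one of the options mentioned above, K-means can be run directly on the four-dimensional [x, y, z, intensity] targets. The sketch below is a minimal self-contained K-means (not the patent's implementation); the synthetic "road" and "cover" values are illustrative assumptions.

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Minimal K-means on an (N, 4) array of [x, y, z, intensity] targets."""
    # Deterministic initialisation: spread the initial centres over the data.
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each four-dimensional semantic target to the nearest centre.
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre; keep the old one if its cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Synthetic data: a road-surface patch and a metal cover differ both in
# position and in reflection intensity (the values are made up).
rng = np.random.default_rng(0)
road = np.tile([0.0, 0.0, 0.0, 10.0], (50, 1)) + rng.normal(0, 0.1, (50, 4))
cover = np.tile([5.0, 5.0, 0.0, 80.0], (50, 1)) + rng.normal(0, 0.1, (50, 4))
features = np.vstack([road, cover])
labels, centers = kmeans(features, k=2)
```

In practice the spatial coordinates and the intensity channel would typically be scaled to comparable ranges before clustering, since the Euclidean distance mixes the four dimensions.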
And step S103, identifying a target object in each target point cloud cluster.
The target point cloud cluster comprises a large amount of densely distributed point clouds, the shape of the target object can be determined according to the distribution condition of the point clouds, and the position, the size, the outline and the like of the target object can be accurately determined according to the three-dimensional coordinates of the point clouds.
In some embodiments, before converting the point cloud data into the four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, the method further comprises: dividing the point cloud data into a plurality of single-frame point clouds according to timestamps, filtering each single-frame point cloud with a feature plane filtering method, and merging the filtered single-frame point clouds. The feature plane filtering method includes: fitting a point cloud plane of the single-frame point cloud data according to the spatial geometric information of all points in it, and filtering out the point cloud data whose distance from the point cloud plane meets a first filtering condition. If the target object is a ground target, the first filtering condition is: the distance from the point cloud plane is greater than a preset distance threshold; if the target object is a non-ground target, the first filtering condition is: the distance from the point cloud plane is less than or equal to a preset distance threshold.
A ground target is a target located on the ground or at a small distance from it, such as a manhole cover or a lane marking. Non-ground targets are targets located above the ground, or on it but at a greater distance from the ground surface, such as billboards standing beside the road and traffic signs suspended above it.
For example, if the target object is a manhole cover, since the target manhole cover lies within the ground point cloud, the first filtering condition may be set to a distance from the point cloud plane greater than a preset distance threshold, so as to filter out the point cloud data exceeding that threshold. The value of the distance threshold can be chosen according to actual conditions. Taking the manhole cover as an example: since a manhole cover generally sits on the ground and is generally no higher than the curb, a distance threshold (for example, 0.15 m) may be set according to the curb height, and point clouds higher than 0.15 m are filtered out.
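The first filtering condition reduces to a point-to-plane distance test. A minimal sketch, assuming the fitted plane is given in (a, b, c, d) form with a unit normal; the sample points and the 0.15 m threshold follow the manhole-cover example above.

```python
import numpy as np

def filter_by_plane_distance(points, plane, threshold=0.15, ground_target=True):
    """Keep points relevant to the target. `plane` is (a, b, c, d) with a unit
    normal (a, b, c), so |ax + by + cz + d| is the point-to-plane distance.
    For a ground target, drop points farther than `threshold` from the plane;
    for a non-ground target, drop points within `threshold` of it."""
    a, b, c, d = plane
    dist = np.abs(points @ np.array([a, b, c]) + d)
    keep = dist <= threshold if ground_target else dist > threshold
    return points[keep]

# Horizontal ground plane z = 0; one cover-height point, one curb-top point.
pts = np.array([[0.0, 0.0, 0.05],   # on the cover, 5 cm above the plane
                [1.0, 1.0, 0.40]])  # above the curb, filtered out
ground = filter_by_plane_distance(pts, (0.0, 0.0, 1.0, 0.0))
```

Passing `ground_target=False` inverts the mask, which matches the non-ground-target branch of the first filtering condition.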
Through the characteristic plane filtering, a large amount of point cloud data irrelevant to the target object can be filtered, the processing time of subsequent steps is shortened, and the efficiency of identifying the target object is improved.
When fitting the point cloud plane of the single-frame point cloud data, the plane containing the most points in the frame may be used, or the plane minimizing the sum of distances from all points in the frame to the plane. Optionally, fitting a point cloud plane of the single-frame point cloud data according to the spatial geometric information of all points in it includes: filtering the single-frame point cloud data based on height features, removing the point cloud data higher than a preset height; performing plane fitting on the filtered points to obtain a plurality of fitted planes; and determining the included angle between each fitted plane's normal and the height direction, taking the fitted plane with the smallest included angle, or the one containing the most points, as the point cloud plane of the single-frame point cloud data. If the smallest-angle criterion is used and more than one fitted plane ties for the smallest angle, the one containing the most points can be selected; conversely, if the most-points criterion is used and more than one plane ties, the one with the smallest included angle can be selected.
In this example, the height direction refers to a direction perpendicular to the road, and the height of the point cloud data refers to a distance between the point cloud and the road surface in the direction perpendicular to the road. A fitting plane with a small included angle is selected as a point cloud plane, so that the selected plane and the road can be parallel to each other as much as possible, the road condition of the road can be reflected most truly, and the accuracy of the identification result is improved. The plane with a large number of point clouds is selected, so that large errors caused by a small number of point clouds can be avoided, and the accuracy of the identification result is improved.
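The normal-angle criterion above can be sketched with a least-squares plane fit. This is an illustrative implementation (SVD-based fitting is one common choice, not necessarily the patent's); the two candidate point sets are synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right-singular vector of the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def angle_to_height_axis(normal):
    """Angle (radians) between a plane normal and the height (z) direction."""
    return np.arccos(min(1.0, abs(normal[2])))

# Two candidate fitted planes: a horizontal one (road) and a vertical one.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
road = np.column_stack([xy, np.zeros(len(xy))])                  # plane z = 0
wall = np.column_stack([xy[:, 0], np.zeros(len(xy)), xy[:, 1]])  # plane y = 0

angles = [angle_to_height_axis(fit_plane(p)[1]) for p in (road, wall)]
best = int(np.argmin(angles))  # the horizontal road plane has the smaller angle
```

Choosing the fitted plane whose normal is most nearly vertical selects the plane most nearly parallel to the road surface, as the paragraph above argues.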
Optionally, before converting the point cloud data into a four-dimensional semantic target based on the three-dimensional space coordinate information and the reflection intensity information of each point cloud, the method further includes: generating a travel trajectory curve from the point cloud data, forming dividing lines parallel to the trajectory curve on both sides of it, and filtering out the point cloud data scattered outside the region between the dividing lines. In this embodiment the point cloud data is filtered by road width, so that points beyond the road surface are removed and identification efficiency improves.
Optionally, identifying a target object in each target point cloud cluster includes: projecting the target point cloud cluster into a two-dimensional grid, and then converting the two-dimensional grid into a two-dimensional feature image, where one grid cell represents one pixel; detecting the target object region in the two-dimensional feature image using a Hough algorithm; and mapping the coordinates of the target object region on the two-dimensional feature image back to the point cloud in three-dimensional space to determine the three-dimensional coordinates of the target object. Projecting the three-dimensional point cloud onto a two-dimensional grid to locate the target region and then back-projecting into three-dimensional space is simple and convenient, with high accuracy and identification efficiency.
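For a circular target such as a manhole cover, the Hough step amounts to a circle transform on the feature image. The sketch below is a simplified fixed-radius Hough circle accumulator on a synthetic image (the image size, radius, and rim position are invented for the example; a production system might use a library implementation with a radius search).

```python
import numpy as np

def hough_circle_center(edge, radius):
    """Vote for circle centres at a fixed radius in a binary edge image and
    return the accumulator peak (a simplified Hough circle transform)."""
    acc = np.zeros(edge.shape, dtype=int)
    ys, xs = np.nonzero(edge)
    thetas = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for every centre `radius` away from it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < edge.shape[0]) & (cx >= 0) & (cx < edge.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic feature image: a circular cover rim of radius 10 centred at (32, 32).
img = np.zeros((64, 64), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 360)
img[np.round(32 + 10 * np.sin(t)).astype(int),
    np.round(32 + 10 * np.cos(t)).astype(int)] = 1
center = hough_circle_center(img, radius=10)
```

The detected pixel centre, together with the grid's origin and cell size, is what gets mapped back to three-dimensional coordinates in the last step.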
Optionally, projecting the target point cloud cluster into a two-dimensional grid, and then converting the two-dimensional grid into a two-dimensional feature image, including:
1) determining a point cloud range of the target point cloud cluster, dividing a two-dimensional equidistant grid according to the point cloud range, projecting the target point cloud cluster into the two-dimensional grid, determining the area of a plane covered by the target point cloud cluster according to the number of the projected grids, and calculating the plane density of the point cloud according to the area and the number of points of the target point cloud cluster;
2) calculating the average point distance according to the point cloud density to obtain a plane space scale;
3) creating a two-dimensional grid index according to the plane space scale, dividing the point cloud range of the target point cloud cluster into space two-dimensional grids, and recording the point cloud index falling into each grid;
4) converting the divided two-dimensional space grid into a two-dimensional characteristic image; a grid represents a pixel, and the pixel value represented by each grid is calculated from intensity information expressed by the point cloud falling within the grid.
The two-dimensional grid index divides a two-dimensional plane into grids of equal size; each grid corresponds to a storage location, and its index entry registers the spatial objects that fall into that grid. This method of generating the two-dimensional feature image is simple and has good accuracy.
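Steps 3) and 4) above can be sketched together: build the grid index over the cluster's xy range, record which points fall in each cell, and set each pixel from the intensity of those points. The mean-intensity pixel rule and the sample values are illustrative assumptions.

```python
import numpy as np

def grid_index_image(points, intensity, cell):
    """Build a 2-D grid index over the xy range of Nx3 `points` and convert
    it to a feature image whose pixel value is the mean intensity per cell."""
    mins = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - mins) / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    index = {}                      # (row, col) -> list of point indices
    for n, (i, j) in enumerate(map(tuple, ij)):
        index.setdefault((i, j), []).append(n)
    image = np.zeros((h, w))
    for (i, j), members in index.items():
        image[i, j] = intensity[members].mean()
    return index, image

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.1, 0.0], [1.1, 1.1, 0.0]])
inten = np.array([60.0, 80.0, 10.0])
index, image = grid_index_image(pts, inten, cell=1.0)
```

Because the index remembers which point indices fell into each cell, the reverse mapping from a detected pixel region back to three-dimensional points is a direct lookup.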
The method according to the embodiment of the present invention will be described in detail below with reference to fig. 2, with the manhole cover as a target object. As shown in fig. 2, the main process of identifying the manhole cover in the road includes:
Step S201: point cloud data of a road is obtained.
In this example, a vehicle-mounted mobile laser scanning system is used to obtain the point cloud data of the road. The system consists of a laser scanner (LIDAR), a GNSS/INS combined inertial navigation system, an odometer, and other sensors, rigidly fixed on a carrier in a fixed relative geometry; as the carrier moves, it dynamically acquires three-dimensional information of the surfaces of target ground objects on the road and on both sides.
The LIDAR is the main data-acquisition sensor and accurately records the coordinates of targets in three-dimensional space, while the GNSS/INS and the odometer, tightly coupled, acquire the position and attitude of the acquisition system in real time. A high-precision point cloud is generated as follows: the point cloud recorded by the LIDAR in its local coordinate system is rotated and translated into the inertial-navigation-centered coordinate system according to the LIDAR-to-inertial-navigation extrinsic parameters, and then rotated and translated into the global frame according to the real-time pose recorded by the inertial navigation system, producing a high-precision point cloud at real-world scale.
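The two-stage rotation-and-translation can be sketched as follows (a simplified illustration with our own names; the rotation matrices and translation vectors stand in for the LIDAR-to-INS extrinsics and the real-time INS pose):

```python
import numpy as np

def lidar_to_global(points, R_ext, t_ext, R_pose, t_pose):
    """Transform one frame of LIDAR points into the global frame:
    first apply the LIDAR-to-inertial-navigation extrinsics, then the
    real-time pose recorded by the inertial navigation system."""
    p_ins = points @ R_ext.T + t_ext   # local LIDAR frame -> INS frame
    return p_ins @ R_pose.T + t_pose   # INS frame         -> global frame

# Sanity case: identity extrinsics and a pure-translation pose.
pts = np.array([[1.0, 0.0, 0.0]])
I = np.eye(3)
out = lidar_to_global(pts, I, np.zeros(3), I, np.array([10.0, 0.0, 0.0]))
# out -> [[11., 0., 0.]]
```

Applying this frame by frame, with the pose interpolated to each frame's timestamp, stitches the single-frame clouds into the global high-precision cloud.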
Step S202: filtering non-ground point clouds.
The high-precision three-dimensional point cloud acquired by the vehicle-mounted mobile laser scanning system contains a large number of high-density non-ground points. Because the manhole covers to be located lie within the ground point cloud, the data is first preprocessed to remove non-ground points, enabling efficient extraction of the manhole cover positions later.
A feature-plane-based method is used to filter the non-ground points: within a local range of an urban road the slope changes little and the surface is approximately planar, so non-ground points can be filtered according to the local plane features. The specific steps are as follows:
1) Single-frame segmentation. According to the mapping principle, the high-precision point cloud is stitched together, through transformations between different coordinate systems, from single-frame point clouds collected by the laser scanner, and each frame carries a different timestamp. Segmenting the point cloud by these unequal timestamps therefore recovers the original single-frame structure.
2) Point cloud plane extraction. Single-frame point cloud data expresses the real-world three-dimensional information within the local range scanned at the acquisition position, in which the road surface is approximately a plane. Using the mounting height h of the acquisition system on the carrier, filter along the Z direction and keep only the points below height h; then perform plane fitting within the filtered points, extract all planes, compute the angle between each plane's normal and the Z axis, and keep only the plane with a small angle and the largest number of points in it.
3) Distance filtering. Compute the distances from all points in the single-frame data to that plane, filter out all points farther than 0.15 m according to the threshold, and take the remaining points as the ground point cloud. (The threshold of about 0.15 m is chosen according to the height of most curbs in urban areas.)
All single-frame point clouds are processed according to steps 2)-3), so that the non-ground points are finally filtered out of the whole cloud.
Step S203: road-surface width filtering.
During data acquisition, the driving trajectory forms a curve within the high-precision point cloud. Width-threshold filtering is performed perpendicular to the driving direction (perpendicular to the trajectory curve), forming two segmentation lines parallel to the trajectory on either side of the driving direction; the road-surface point cloud is retained to the greatest extent while other, non-road ground points are filtered out. The width threshold may be set according to the grade of the municipal road and the road information collected.
As shown in fig. 3, the middle solid line is the driving trajectory, and the dotted lines are the segmentation lines formed parallel to the trajectory curve on its left and right at a fixed threshold; points in the region between the two segmentation lines are retained, and points outside it are filtered out.
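The width filtering can be sketched as follows; for simplicity the distance to the trajectory curve is approximated by the distance to its nearest sampled vertex (the patent computes it perpendicular to the curve), and all names are ours:

```python
import numpy as np

def filter_by_track_width(points, track_xy, half_width):
    """Keep only points whose horizontal distance to the driving track
    (approximated by its nearest sampled vertex) is within half_width,
    i.e. the region between the two parallel segmentation lines."""
    d = np.linalg.norm(
        points[:, None, :2] - track_xy[None, :, :], axis=2).min(axis=1)
    return points[d <= half_width]
```

For a densely sampled trajectory the nearest-vertex distance is a close approximation of the perpendicular distance; a production version would project onto the trajectory segments.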
Step S204: K-means clustering.
A spatial three-dimensional point cloud carries both spatial geometric information and intensity information: the geometry is expressed by the three-dimensional XYZ coordinates, and the intensity is recorded by the laser scanner from the reflections of different object materials. On the road ground point cloud obtained in step S203, K-means clustering is performed using the four-dimensional features (X, Y, Z, intensity), dividing the road ground point cloud into several point cloud clusters. Steps S205 to S208 below are then performed on each cluster to extract the manhole cover positions.
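A minimal K-means over the four-dimensional (X, Y, Z, intensity) features might look like this (a from-scratch sketch for illustration only; in practice a library implementation would normally be used, and the intensity channel may need scaling relative to the coordinates):

```python
import numpy as np

def kmeans_xyzi(features, k, iters=20, seed=0):
    """Minimal K-means over 4-D (X, Y, Z, intensity) feature vectors;
    returns one cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels
```

Including intensity as the fourth feature lets the clustering separate ground patches of different materials (e.g. metal covers vs. asphalt) even when they are spatially adjacent.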
Step S205: generating a two-dimensional feature image from the three-dimensional point cloud, as follows:
1) compute the point cloud range of each cluster and divide it into a two-dimensional equidistant grid whose cells are 1 meter long and wide; project the three-dimensional points into the grid and count the occupied cells; since each cell covers 1 square meter, this count gives the plane area covered by the cloud, from which, together with the number of points, the planar point density is calculated;
2) calculate the average point spacing from the point density, which expresses the plane spatial scale of the cloud;
3) create a two-dimensional grid index at the plane spatial scale: divide the range covered by the cloud into spatial two-dimensional cells and record the indices of the points falling into each cell;
4) convert the divided two-dimensional spatial grid into a two-dimensional feature image, with each cell representing one pixel;
5) compute the pixel value of each cell from the intensity information of the points falling within it.
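Steps 1)-5) amount to rasterizing the cluster into an intensity image. A compact sketch (names ours; empty cells get pixel value 0, and the per-cell statistic here is the mean intensity, an assumption since the patent leaves the aggregation unspecified):

```python
import numpy as np

def cloud_to_feature_image(points_xyz, intensity, cell_size):
    """Rasterize a point cloud cluster into a 2-D feature image: one
    grid cell per pixel, pixel value = mean intensity of the points
    falling inside the cell (0 where the cell is empty)."""
    xy = points_xyz[:, :2]
    origin = xy.min(axis=0)
    cells = np.floor((xy - origin) / cell_size).astype(int)
    w, h = cells.max(axis=0) + 1
    img = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for (cx, cy), inten in zip(cells, intensity):
        img[cy, cx] += inten
        cnt[cy, cx] += 1
    np.divide(img, cnt, out=img, where=cnt > 0)  # mean per occupied cell
    return img
```

The origin and cell_size used here must be kept, since the back-mapping of step S207 needs them to return from pixel to world coordinates.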
Step S206: circular region detection.
A circle is detected in the two-dimensional feature image by the Hough transform (an image-processing algorithm), and the circular region is extracted; the circle center and radius are then computed from it.
The basic idea of the Hough algorithm is to transform the image from the spatial domain to a parameter space, describing curves in the image by a parametric form that most boundary points satisfy. Suppose the parameters of a circle are to be detected in the x-y plane, and the set of circumferential points to be detected in the image is {(xi, yi), i = 1, 2, 3, ..., n}. Each point (xi, yi) of this set satisfies, in the parameter coordinate system (a, b, r):
(a - xi)^2 + (b - yi)^2 = r^2
The surface corresponding to this equation is a three-dimensional cone: every determined point in the image has a corresponding three-dimensional cone in the parameter space, so the circumferential points {(xi, yi), i = 1, 2, 3, ..., n} generate a cluster of conical surfaces. If the points in the set all lie on the same circumference, these cones intersect at a single point of the parameter space, whose parameters correspond exactly to the circle's center coordinates and radius in the image plane.
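The voting idea can be illustrated for a single known radius r, where the cone intersection reduces to accumulating votes in the (a, b) plane (a from-scratch sketch with our own names; a real pipeline would also search over r and typically use a library Hough implementation):

```python
import numpy as np

def hough_circle(edge_points, r, shape):
    """Vote in the (a, b) accumulator for a fixed radius r: every edge
    point (x, y) votes for all centers satisfying
    (a - x)^2 + (b - y)^2 = r^2; the accumulator maximum is the center."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for x, y in edge_points:
        a = np.round(x - r * np.cos(thetas)).astype(int)
        b = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[1]) & (b >= 0) & (b < shape[0])
        np.add.at(acc, (b[ok], a[ok]), 1)       # cast one vote per (a, b)
    b, a = np.unravel_index(acc.argmax(), acc.shape)
    return a, b
```

Because every edge point of a true circle votes for the same (a, b), the accumulator peak sits at the circle center, realizing the cone-intersection argument above in discrete form.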
Step S207: back-mapping to three-dimensional space.
The coordinates of the circle center on the two-dimensional image are mapped back into the three-dimensional point cloud to obtain the center position in spatial three-dimensional coordinates and the radius.
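The back-mapping inverts the rasterization of step S205: the pixel index is converted back to plane coordinates using the grid origin and cell size, with Z taken from the fitted ground height (a sketch with our own names; the patent does not spell out the inverse formula). The detected pixel radius is scaled the same way, radius_world = radius_px * cell_size.

```python
import numpy as np

def pixel_to_world(px, py, origin, cell_size, ground_z):
    """Map a detected circle center (pixel indices) back to a 3-D
    coordinate by inverting the grid projection; Z comes from the
    ground height at that location."""
    x = origin[0] + (px + 0.5) * cell_size   # cell center in world X
    y = origin[1] + (py + 0.5) * cell_size   # cell center in world Y
    return np.array([x, y, ground_z])
```

Using the cell center (the +0.5 offset) keeps the maximum positional error at half a cell, i.e. half the plane spatial scale computed in step S205.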
Step S208: outputting the manhole cover position result, including the center and radius of the cover.
The method processes the road point cloud data directly, requiring neither sample learning nor registration assisted by other data; manhole covers in the road can be identified automatically and rapidly from the spatial features of the three-dimensional point cloud, simplifying the computation and reducing hardware cost.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for implementing the above method.
Fig. 4 is a schematic diagram of the main modules of an apparatus for identifying a target object in a road according to an embodiment of the present invention. As shown in fig. 4, the apparatus 400 for identifying a target object in a road includes:
an acquisition unit 401, which collects point cloud data of a road, the point cloud data including three-dimensional spatial coordinate information and reflection intensity information of each point;
a clustering unit 402, which converts the point cloud data into four-dimensional semantic targets based on the three-dimensional spatial coordinate and reflection intensity information of each point, and clusters the four-dimensional semantic targets to obtain target point cloud clusters; and
an identifying unit 403, which identifies a target object in each target point cloud cluster.
Optionally, the apparatus of the embodiment of the present invention further comprises a first filtering unit (not shown in the figure) for: before the point cloud data is converted into four-dimensional semantic targets based on the three-dimensional spatial coordinate and reflection intensity information of each point, dividing the point cloud data into a plurality of single-frame point clouds by timestamp, filtering each single-frame point cloud with a feature-plane filtering method, and merging the filtered single-frame point clouds;
the feature-plane filtering method includes: fitting a point cloud plane to the single-frame data according to the spatial geometric information of all its points, and filtering out the points whose distance from the point cloud plane meets a first filtering condition;
wherein, when the target object is a ground target, the first filtering condition is that the distance from the point cloud plane is greater than a preset distance threshold; and when the target object is a non-ground target, the first filtering condition is that the distance from the point cloud plane is less than or equal to the preset distance threshold.
Optionally, fitting a point cloud plane to the single-frame point cloud data according to the spatial geometric information of all its points includes:
filtering the single-frame point cloud data by height, removing the points higher than a preset height; performing plane fitting on the remaining points to obtain a plurality of fitted planes; and determining the angle between each fitted plane's normal and the height direction, taking the fitted plane with the smallest angle or the largest number of points in it as the point cloud plane of the single-frame data.
Optionally, the apparatus of the embodiment of the present invention further includes a second filtering unit (not shown in the figure) for: before the point cloud data is converted into four-dimensional semantic targets based on the three-dimensional spatial coordinate and reflection intensity information of each point, generating a driving trajectory curve from the point cloud data, forming segmentation lines parallel to the trajectory curve on its sides, and filtering out the points falling outside the region between the segmentation lines.
Optionally, the clustering unit clusters the four-dimensional semantic objects by using a K-means clustering method.
Optionally, the identifying unit identifies a target object in each target point cloud cluster by:
projecting the target point cloud cluster into a two-dimensional grid and converting the grid into a two-dimensional feature image, where each grid cell represents one pixel; detecting a target object region in the two-dimensional feature image with the Hough algorithm; and back-mapping the coordinates of the target object region on the two-dimensional feature image to the point cloud in three-dimensional space to determine the three-dimensional coordinates of the target object.
According to a third aspect of embodiments of the present invention, there is provided an electronic device for identifying a target object in a road, comprising:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
Fig. 5 illustrates an exemplary system architecture 500 to which the method or apparatus for identifying a target object in a road of embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
The user may use the terminal devices 501, 502, 503 to interact with the server 505 over the network 504 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 501, 502, 503, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).
The terminal devices 501, 502, 503 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users with the terminal devices 501, 502, 503. The background management server may analyze and otherwise process received data such as positioning and navigation requests, and feed back a processing result (for example, positioning result information or road-surface target object recognition result information; by way of example only) to the terminal device.
It should be noted that the method for identifying a target object in a road provided by the embodiment of the present invention is generally executed by the server 505, and accordingly the apparatus for identifying a target object in a road is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 6, a block diagram of a computer system 600 suitable for implementing a terminal device of an embodiment of the invention is shown. The terminal device shown in fig. 6 is only an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 6, the computer system 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When executed by the central processing unit (CPU) 601, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising: an acquisition unit for collecting point cloud data of a road, the point cloud data including three-dimensional spatial coordinate information and reflection intensity information of each point; a clustering unit for converting the point cloud data into four-dimensional semantic targets based on the three-dimensional spatial coordinate and reflection intensity information of each point, and clustering the four-dimensional semantic targets to obtain target point cloud clusters; and an identification unit for identifying a target object in each target point cloud cluster. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that collects point cloud data of a road".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: collecting point cloud data of a road; the point cloud data includes: three-dimensional space coordinate information and reflection intensity information of each point cloud; converting the point cloud data into four-dimensional semantic targets based on three-dimensional space coordinate information and reflection intensity information of each point cloud, and clustering the four-dimensional semantic targets to obtain target point cloud clusters; identifying a target object in each of the target point cloud clusters.
According to the technical scheme of the embodiment of the invention, the road point cloud data is processed directly, requiring neither sample learning nor registration assisted by other data; the target object can be identified automatically and rapidly from the spatial features of the three-dimensional point cloud, simplifying the computation and reducing hardware cost.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

Application CN201911001284.5A, filed 2019-10-21: Method and device for identifying target object in road (status: Pending; published as CN110866449A).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911001284.5A (CN110866449A) | 2019-10-21 | 2019-10-21 | Method and device for identifying target object in road


Publications (1)

Publication Number | Publication Date
CN110866449A | 2020-03-06

Family

Family ID: 69652223




Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
张国良: "《移动机器人的SLAM与VSLAM方法》", 31 October 2018*
樊丽 等: "基于特征融合的林下环境点云分割", 《北京林业大学学报》*
胡啸: "基于车载激光扫描数据的道路要素提取方法研究", 《中国优秀硕士学位论文全文数据库工程科技Ⅱ辑》*
胡文庆 等: "模糊C-均值聚类对点云数据的分割", 《安徽农业科学》*
魏占营 等: "《车载激光测量数据智能后处理技术 SWDY深入解析与应用》", 30 November 2018*

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111383337A (en)* | 2020-03-20 | 2020-07-07 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for identifying objects
CN113536850A (en)* | 2020-04-20 | 2021-10-22 | Changsha Mozhibi Intelligent Technology Co., Ltd. | Target object size testing method and device based on 77 GHz millimeter-wave radar
WO2021227797A1 (en)* | 2020-05-13 | 2021-11-18 | Changsha Intelligent Driving Institute Co., Ltd. | Road boundary detection method and apparatus, computer device and storage medium
CN111611900A (en)* | 2020-05-15 | 2020-09-01 | Beijing Jingdong Qianshi Technology Co., Ltd. | Target point cloud recognition method and device, electronic device and storage medium
CN111611900B (en)* | 2020-05-15 | 2023-06-30 | Beijing Jingdong Qianshi Technology Co., Ltd. | Target point cloud recognition method and device, electronic device and storage medium
CN111950589A (en)* | 2020-07-02 | 2020-11-17 | East China University of Technology | Optimal segmentation method combining point cloud region growing with K-means clustering
CN111950589B (en)* | 2020-07-02 | 2022-09-30 | East China University of Technology | Optimal segmentation method combining point cloud region growing with K-means clustering
CN114092898A (en)* | 2020-07-31 | 2022-02-25 | Huawei Technologies Co., Ltd. | Target object sensing method and device
CN112070838A (en)* | 2020-09-07 | 2020-12-11 | Lorentz (Beijing) Technology Co., Ltd. | Object identification and positioning method and device based on two-dimensional and three-dimensional fusion features
CN112070838B (en)* | 2020-09-07 | 2024-02-02 | Lorentz (Beijing) Technology Co., Ltd. | Object identification and positioning method and device based on two-dimensional and three-dimensional fusion features
CN112132108A (en)* | 2020-10-09 | 2020-12-25 | Anhui Jianghuai Automobile Group Corp., Ltd. | Ground point cloud data extraction method, device, equipment and storage medium
CN112258646A (en)* | 2020-10-26 | 2021-01-22 | SAIC Motor Corporation Ltd. | Three-dimensional line landmark construction method and device
CN112215952B (en)* | 2020-10-26 | 2021-08-13 | Hubei Ecarx Technology Co., Ltd. | Curve drawing method, computer storage medium and electronic device
CN112215952A (en)* | 2020-10-26 | 2021-01-12 | Hubei Ecarx Technology Co., Ltd. | Curve drawing method, computer storage medium and electronic device
CN112258646B (en)* | 2020-10-26 | 2024-03-12 | SAIC Motor Corporation Ltd. | Three-dimensional line landmark construction method and device
CN112435336B (en)* | 2020-11-13 | 2022-04-19 | Wuhan Zhonghaiting Data Technology Co., Ltd. | Curve type identification method and device, electronic equipment and storage medium
CN112435336A (en)* | 2020-11-13 | 2021-03-02 | Wuhan Zhonghaiting Data Technology Co., Ltd. | Curve type identification method and device, electronic equipment and storage medium
CN112710313A (en)* | 2020-12-31 | 2021-04-27 | Guangzhou XAG Technology Co., Ltd. | Coverage path generation method and device, electronic equipment and storage medium
CN112907739A (en)* | 2021-01-22 | 2021-06-04 | North University of China | Method, device and system for acquiring height difference information of manhole covers
CN112907739B (en)* | 2021-01-22 | 2022-10-04 | North University of China | Method, device and system for acquiring height difference information of manhole covers
CN114910902A (en)* | 2021-01-29 | 2022-08-16 | Fujitsu Ltd. | Action detection device and method based on neural network
CN112964264A (en)* | 2021-02-07 | 2021-06-15 | Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. | Road edge detection method, device, high-precision map, vehicle and storage medium
CN112964264B (en)* | 2021-02-07 | 2024-03-26 | Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. | Road edge detection method, device, high-precision map, vehicle and storage medium
CN112907421B (en)* | 2021-02-24 | 2023-08-01 | CCTEG Chongqing Smart City Technology Research Institute Co., Ltd. | Business scene acquisition system and method based on spatial analysis
CN112907421A (en)* | 2021-02-24 | 2021-06-04 | CCTEG Chongqing Smart City Technology Research Institute Co., Ltd. | Business scene acquisition system and method based on spatial analysis
CN113155027A (en)* | 2021-04-27 | 2021-07-23 | China Railway Engineering Equipment Group Co., Ltd. | Tunnel rock wall feature identification method
US12164031B2 (en) | 2021-04-30 | 2024-12-10 | Waymo LLC | Method and system for a threshold noise filter
CN113343840B (en)* | 2021-06-02 | 2022-03-08 | Hefei Tairui Shuchuang Technology Co., Ltd. | Object identification method and device based on three-dimensional point cloud
CN113343840A (en)* | 2021-06-02 | 2021-09-03 | Hefei Tairui Shuchuang Technology Co., Ltd. | Object identification method and device based on three-dimensional point cloud
CN113379923A (en)* | 2021-06-22 | 2021-09-10 | Benewake (Beijing) Photonics Technology Co., Ltd. | Track identification method, device, storage medium and equipment
CN113255609B (en)* | 2021-07-02 | 2021-10-29 | Zhidao Network Technology (Beijing) Co., Ltd. | Traffic sign recognition method and device based on a neural network model
CN113255609A (en)* | 2021-07-02 | 2021-08-13 | Zhidao Network Technology (Beijing) Co., Ltd. | Traffic sign recognition method and device based on a neural network model
CN113901903A (en)* | 2021-09-30 | 2022-01-07 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Road recognition method and device
CN113607185A (en)* | 2021-10-08 | 2021-11-05 | HoloMatic Technology (Beijing) Co., Ltd. | Lane line information display method and device, electronic device and computer-readable medium
CN113607185B (en)* | 2021-10-08 | 2022-01-04 | HoloMatic Technology (Beijing) Co., Ltd. | Lane line information display method and device, electronic device and computer-readable medium
CN114200477A (en)* | 2021-12-13 | 2022-03-18 | Shanghai Radio Equipment Research Institute | Method for processing ground-target point cloud data from laser 3D imaging radar
CN114419150A (en)* | 2021-12-21 | 2022-04-29 | VisionNav Robotics (Shenzhen) Co., Ltd. | Forklift pickup method, device, computer equipment and storage medium
CN114943947B (en)* | 2022-03-22 | 2025-07-15 | Shenzhen DeepRoute.ai Co., Ltd. | Road traffic light labeling method, autonomous driving computing platform and storage medium
CN114943947A (en)* | 2022-03-22 | 2022-08-26 | Shenzhen DeepRoute.ai Co., Ltd. | Road traffic light labeling method, autonomous driving computing platform and storage medium
CN114882198A (en)* | 2022-06-08 | 2022-08-09 | FAW Jiefang Automotive Co., Ltd. | Target determination method, device, equipment and medium
CN114882198B (en)* | 2022-06-08 | 2025-05-27 | FAW Jiefang Automotive Co., Ltd. | Target determination method, device, equipment and medium
CN115331099A (en)* | 2022-07-20 | 2022-11-11 | AutoNavi Software Co., Ltd. | Method and device for acquiring pavement point cloud data, electronic equipment and storage medium
CN115273071A (en)* | 2022-08-12 | 2022-11-01 | Shanghai JAKA Robotics Co., Ltd. | Object recognition method, device, electronic device and storage medium
CN115453563A (en)* | 2022-09-19 | 2022-12-09 | Wangping (Guangdong) Technology Co., Ltd. | Three-dimensional space dynamic object recognition method, system and storage medium
CN115980702A (en)* | 2023-03-10 | 2023-04-18 | Anhui NIO Autonomous Driving Technology Co., Ltd. | Object false-detection prevention method, equipment, driving equipment and medium
CN116681932A (en)* | 2023-05-29 | 2023-09-01 | China FAW Co., Ltd. | Object identification method and device, electronic equipment and storage medium
CN117274651B (en)* | 2023-11-17 | 2024-02-09 | Beijing Liangdao Intelligent Vehicle Technology Co., Ltd. | Object detection method and device based on point cloud and computer-readable storage medium
CN117274651A (en)* | 2023-11-17 | 2023-12-22 | Beijing Liangdao Intelligent Vehicle Technology Co., Ltd. | Object detection method and device based on point cloud and computer-readable storage medium
CN119442381A (en)* | 2024-08-16 | 2025-02-14 | Realsee (Beijing) Technology Co., Ltd. | Plane structure diagram processing method and device

Similar Documents

Publication | Title
CN110866449A (en) | Method and device for identifying target object in road
CN112132108A (en) | Ground point cloud data extraction method, device, equipment and storage medium
CN115540896B (en) | Path planning method and device, electronic equipment and computer-readable medium
CN110632608B (en) | Target detection method and device based on laser point cloud
CN113761999A (en) | Target detection method and device, electronic equipment and storage medium
CN112258519B (en) | Automatic extraction method and device for road yield lines in high-precision map making
CN114764778A (en) | Target detection method, target detection model training method and related equipment
CN110390706B (en) | Object detection method and device
CN111339876B (en) | Method and device for identifying types of regions in a scene
CN114219770A (en) | Ground detection method, device, electronic device and storage medium
CN115331099A (en) | Method and device for acquiring pavement point cloud data, electronic equipment and storage medium
CN115115597A (en) | Target detection method, device, equipment and medium
CN110163900B (en) | Method and device for adjusting point cloud data
CN115331214A (en) | Sensing method and system for target detection
CN111563398A (en) | Method and device for determining information of target object
CN110363847B (en) | Map model construction method and device based on point cloud data
Sameen et al. | A simplified semi-automatic technique for highway extraction from high-resolution airborne LiDAR data and orthophotos
Zhang et al. | Efficient approach to automated pavement manhole cover detection with modified faster R-CNN
CN111967332A (en) | Visibility information generation method and device for automatic driving
CN115240154A (en) | Method, device, equipment and medium for extracting point cloud features of parking lot
CN110120075B (en) | Method and apparatus for processing information
CN110377776B (en) | Method and device for generating point cloud data
CN112435224B (en) | Confidence evaluation method and device for stop line extraction
US11294384B2 | Vehicle navigation using point cloud decimation
Tang et al. | Accuracy test of point-based and object-based urban building feature classification and extraction applying airborne LiDAR data

Legal Events

Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right
    Effective date of registration: 2021-02-26
    Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080
    Applicant after: Beijing Jingbangda Trading Co., Ltd.
    Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing
    Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co., Ltd.
    Applicant before: BEIJING JINGDONG CENTURY TRADING Co., Ltd.
TA01 | Transfer of patent application right
    Effective date of registration: 2021-02-26
    Address after: Room a1905, 19/F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing 100176
    Applicant after: Beijing Jingdong Qianshi Technology Co., Ltd.
    Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080
    Applicant before: Beijing Jingbangda Trading Co., Ltd.
RJ01 | Rejection of invention patent application after publication
    Application publication date: 2020-03-06
