
Unmanned logistics trolley environment sensing method and system

Info

Publication number
CN113376638A
CN113376638A (application CN202110639869.0A)
Authority
CN
China
Prior art keywords
point cloud
unmanned logistics
target
ground point
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110639869.0A
Other languages
Chinese (zh)
Inventor
杨波
杨涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT
Priority to CN202110639869.0A
Publication of CN113376638A (en)
Legal status: Pending (current)

Abstract


The present invention provides an environment sensing method and system for an unmanned logistics trolley. The method includes: scanning surrounding environment information of the unmanned logistics trolley with at least two laser radars to generate an environment point cloud image, where the environment point cloud image comprises a plurality of point cloud data and the at least two laser radars have different ranging ranges; determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data; obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image; collecting, from the at least two laser radars respectively, at least two coordinate values of the same obstacle information in the target point cloud image; and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, which is taken as the coordinate value of the obstacle information. The invention improves the robustness and precision of environment perception for the unmanned logistics trolley.


Description

Unmanned logistics trolley environment sensing method and system
Technical Field
The invention relates to the technical field of unmanned driving, in particular to an unmanned logistics trolley environment sensing method and system.
Background
In recent years, with the development of e-commerce and online shopping, more and more people choose to shop online. At the same time, as information exchange and goods circulation accelerate, document transfer and goods delivery depend increasingly on logistics transportation, and the workload of front-line logistics personnel keeps growing. Incidents of rough handling of parcels occur from time to time, causing dissatisfaction with couriers and with the logistics industry as a whole. Against this background, the unmanned logistics trolley has emerged for transportation.
However, the unmanned logistics trolley in the prior art suffers from a small sensing detection range and large blind areas around the trolley body, so its environment sensing technology is poor in robustness and low in detection precision.
Disclosure of Invention
The invention provides an unmanned logistics trolley environment sensing method and system, and aims to solve the technical problems that an unmanned logistics trolley environment sensing technology in the prior art is poor in robustness, low in detection precision and the like.
In one aspect, the invention provides an environment sensing method for an unmanned logistics trolley, which comprises the following steps:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
In a possible implementation manner of the present invention, the generating the point cloud image of the unmanned logistics trolley environment by at least two laser radars includes:
scanning surrounding environment information of the unmanned logistics trolley through a first laser radar to generate an initial point cloud image;
carrying out distortion correction on the initial point cloud image through a GPS/IMU (global positioning system/inertial measurement unit) to generate a corrected point cloud image;
and acquiring second local environment information of the unmanned logistics trolley through a second laser radar, and fusing the second local environment information and the correction point cloud image to generate the environment point cloud image.
In a possible implementation manner of the present invention, the distortion correction of the initial point cloud image by the GPS/IMU includes:
respectively acquiring, through the GPS/IMU, first pose information of the current frame of the unmanned logistics trolley and second pose information of the previous frame adjacent to the current frame within a scanning period of the first laser radar;
and carrying out distortion correction on the initial point cloud image through preset coordinate conversion, the first pose information and the second pose information.
In a possible implementation manner of the present invention, the first pose information includes a first yaw angle, a first pitch angle, a first heading angle and a displacement, and the second pose information includes a second yaw angle, a second pitch angle and a second heading angle; the distortion correction of the initial point cloud image through the preset coordinate conversion, the first pose information and the second pose information specifically comprises:

p' = R_i · p_i + T_i

R_i = R_z(θ_z) · R_y(θ_y) · R_x(θ_x)

[θ_x, θ_y, θ_z]^T = (t_i / C) · [α̂ − α, β̂ − β, γ̂ − γ]^T

T_i = (t_i / C) · [x, y, z]^T

where p' is the coordinate value of each point cloud datum in the initial point cloud image after distortion correction; R_i is the total rotation matrix; p_i is the coordinate value of each point cloud datum in the initial point cloud image before distortion correction; T_i is the displacement matrix; [θ_x, θ_y, θ_z]^T is the attitude matrix; C is the scanning period of the first laser radar; [·]^T denotes the transpose; t_i is the time difference between the current frame and the previous frame adjacent to the current frame; α̂ is the first yaw angle; β̂ is the first pitch angle; γ̂ is the first heading angle; α is the second yaw angle; β is the second pitch angle; γ is the second heading angle; x, y, z are the three coordinate components of the displacement; and R_x, R_y, R_z are the rotation matrices around the X-axis, Y-axis and Z-axis respectively.
In one possible implementation manner of the present invention, the determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data includes:
determining an initial plane model;
determining a seed ground point cloud set and a seed non-ground point cloud set;
optimizing the initial plane model;
calculating an orthogonal projection distance between the point cloud data in the seed ground point cloud set and the optimized plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set;
judging whether the optimization times of the initial plane model are smaller than threshold times, if so, optimizing the initial plane model again; if not, stopping optimizing the initial plane model; the seed ground point cloud set and the seed non-ground point cloud set are the target ground point cloud set and the target non-ground point cloud set respectively.
In one possible implementation manner of the present invention, the determining the seed ground point cloud set and the seed non-ground point cloud set includes:
sequencing the plurality of point cloud data according to a preset height sequence;
calculating an average height of the plurality of point cloud data;
traversing the plurality of point cloud data, and taking the point cloud data with the height smaller than the average height as an initial ground point set;
and calculating the orthogonal projection distance between the point cloud data in the initial ground point set and the initial plane model, wherein if the orthogonal projection distance is less than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is greater than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set.
In a possible implementation manner of the present invention, the first laser radar is a sixteen-line mechanical laser radar, and the second laser radar is a four-line solid state laser radar.
In a possible implementation manner of the present invention, the ranging range of the sixteen-line mechanical laser radar is 20 m, and the ranging range of the four-line solid-state laser radar is 50 m.
On the other hand, the invention also provides an environment sensing system of the unmanned logistics trolley, which comprises the following components:
the system comprises a first perception module, a second perception module and a third perception module, wherein the first perception module is used for scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, the environment point cloud image comprises a plurality of point cloud data, and the distance measurement ranges of the at least two laser radars are different;
a segmentation module for determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
the second sensing module is used for obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
the target detection module is used for respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and the fusion module is used for fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fusion coordinate value, and the fusion coordinate value is used as the coordinate value of the obstacle information.
According to the invention, the surrounding environment information of the unmanned logistics trolley is scanned by at least two laser radars to generate an environment point cloud image; after a target ground point cloud set and a target non-ground point cloud set are determined in the plurality of point cloud data, first local environment information of the unmanned logistics trolley is obtained through an ultrasonic radar and fused with the target non-ground point cloud set to generate the target point cloud image. Because the target point cloud image is generated jointly by the at least two laser radars and the ultrasonic radar, the robustness of the environment perception of the unmanned logistics trolley can be improved, and the detection precision is improved.
Furthermore, the at least two coordinate values are fused according to the optimal distributed estimation fusion algorithm to generate a fused coordinate value, which is taken as the coordinate value of the obstacle information, so that the robustness and accuracy of obstacle detection can be further improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an embodiment of an environment sensing method for an unmanned logistics trolley according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an embodiment of S101 according to the present invention;
Fig. 3 is a schematic flowchart of an embodiment of S202 according to the present invention;
Fig. 4 is a schematic flowchart of an embodiment of S102 according to the present invention;
Fig. 5 is a schematic flowchart of an embodiment of S402 according to the present invention;
Fig. 6 is a schematic structural diagram of an embodiment of a hardware platform of the unmanned logistics trolley environment sensing method provided by the embodiment of the invention;
Fig. 7 is a schematic structural diagram of an embodiment of an unmanned logistics trolley environment perception system provided by the embodiment of the invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention provides an unmanned logistics trolley environment sensing method and system, which are respectively explained in detail below.
Fig. 1 is a schematic flow diagram of an embodiment of an environment sensing method for an unmanned logistics trolley according to an embodiment of the present invention, and as shown in fig. 1, the environment sensing method for an unmanned logistics trolley includes:
S101, scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
S102, determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
S103, obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
S104, respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
specifically, data correlation of at least two lidar is achieved by a Global Nearest Neighbor (GNN) algorithm.
And S105, fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as a coordinate value of the obstacle information.
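As an illustration of the GNN association used in S104, the following minimal sketch matches the obstacle detections of the two laser radars by solving a globally optimal assignment over pairwise distances; the 1.0 m gating distance is an assumed value, not one given by the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(dets_a, dets_b, gate=1.0):
    """Globally match obstacle detections from two lidars.

    dets_a: (M, 3) obstacle coordinates from the first lidar.
    dets_b: (N, 3) obstacle coordinates from the second lidar.
    gate:   maximum distance (m) for a pair to count as the same obstacle.
    Returns index pairs (i, j) of detections judged to be the same obstacle.
    """
    dets_a = np.asarray(dets_a, dtype=float)
    dets_b = np.asarray(dets_b, dtype=float)
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(dets_a[:, None, :] - dets_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # minimizes the total cost globally
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```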
The unmanned logistics trolley environment sensing method provided by the embodiment of the invention scans the surrounding environment information of the unmanned logistics trolley with at least two laser radars to generate an environment point cloud image; after a target ground point cloud set and a target non-ground point cloud set are determined in the plurality of point cloud data, first local environment information of the unmanned logistics trolley is obtained through an ultrasonic radar and fused with the target non-ground point cloud set to generate the target point cloud image. In other words, the target point cloud image is generated jointly by the ultrasonic radar and at least two laser radars with different ranging ranges, which improves the robustness of the environment perception of the unmanned logistics trolley and raises the detection precision.
Furthermore, the at least two coordinate values are fused according to the optimal distributed estimation fusion algorithm to generate a fused coordinate value, which is taken as the coordinate value of the obstacle information, so that multi-target tracking can be realized and the robustness and accuracy of obstacle detection are further improved.
Further, in some embodiments of the present invention, the environmental information around the unmanned logistics trolley is scanned by a first laser radar and a second laser radar, and the first laser radar is a sixteen-line mechanical laser radar and the second laser radar is a four-line solid state laser radar.
Specifically, the ranging range of the sixteen-line mechanical laser radar is 20 m, with a vertical detection angle of 30° and a horizontal detection angle of 360°. The ranging range of the four-line solid-state laser radar is 50 m, with a vertical detection angle of 3.2° and a horizontal detection angle of 110°. The ranging range of the ultrasonic radar is 5 m, with a horizontal detection angle of 60°.
The introduction of the ultrasonic radar compensates for the close-range blind area of the sixteen-line laser radar around the trolley body; the four-line solid-state laser radar addresses the sixteen-line laser radar's low vertical resolution, its inability to effectively extract road-edge information, and the low effective return rate of long-range point clouds.
Specifically, fusing the at least two coordinate values according to the optimal distributed estimation fusion algorithm to generate the fused coordinate value comprises: calculating at least two weights in one-to-one correspondence with the at least two coordinate values, and fusing each coordinate value with its corresponding weight to generate the fused coordinate value.
It should be understood that: when the distance between the obstacle information and the unmanned logistics trolley is smaller than 20m, the weight of the sixteen-line mechanical laser radar is larger than that of the four-line solid-state laser radar, and when the distance between the obstacle information and the unmanned logistics trolley is larger than 20m, the weight of the sixteen-line mechanical laser radar is smaller than that of the four-line solid-state laser radar.
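A minimal sketch of a weight assignment with this behavior is given below; the fixed 0.7/0.3 weights and the 20 m switch are illustrative assumptions, whereas the optimal distributed estimation fusion algorithm itself would derive the weights from the error statistics of the two local estimates:

```python
import numpy as np

def fuse_coordinates(p16, p4, dist, w_near=0.7):
    """Fuse one obstacle coordinate measured by both lidars.

    p16:  (3,) coordinate from the sixteen-line mechanical lidar.
    p4:   (3,) coordinate from the four-line solid-state lidar.
    dist: distance (m) from the trolley to the obstacle.
    """
    # Inside the 16-line lidar's 20 m range it gets the larger weight;
    # beyond 20 m the longer-range 4-line solid-state lidar dominates.
    w16 = w_near if dist < 20.0 else 1.0 - w_near
    w4 = 1.0 - w16
    return w16 * np.asarray(p16, dtype=float) + w4 * np.asarray(p4, dtype=float)
```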
Further, in some embodiments of the present invention, as shown in fig. 2, S101 includes:
S201, scanning surrounding environment information of the unmanned logistics trolley through a first laser radar to generate an initial point cloud image;
Specifically, the first laser radar scans the surrounding environment information of the unmanned logistics trolley, and the initial point cloud image is generated by Euclidean clustering in which the clustering radius and the number of clustering points vary with distance.
S202, performing distortion correction on the initial point cloud image through the GPS/IMU to generate a corrected point cloud image;
S203, collecting second local environment information of the unmanned logistics trolley through a second laser radar, and fusing the second local environment information with the corrected point cloud image to generate the environment point cloud image.
Through the setting, the distortion of the initial point cloud image can be corrected, and the environment perception capability of the unmanned logistics trolley is further improved.
Further, as shown in fig. 3, S202 includes:
S301, respectively acquiring, through the GPS/IMU, first pose information of the current frame of the unmanned logistics trolley and second pose information of the previous frame adjacent to the current frame within a scanning period of the first laser radar;
S302, performing distortion correction on the initial point cloud image through preset coordinate conversion, the first pose information and the second pose information.
Specifically, the first pose information includes a first yaw angle, a first pitch angle, a first heading angle and a displacement, and the second pose information includes a second yaw angle, a second pitch angle and a second heading angle; S302 specifically comprises:
p' = R_i · p_i + T_i

R_i = R_z(θ_z) · R_y(θ_y) · R_x(θ_x)

[θ_x, θ_y, θ_z]^T = (t_i / C) · [α̂ − α, β̂ − β, γ̂ − γ]^T

T_i = (t_i / C) · [x, y, z]^T

where p' is the coordinate value of each point cloud datum in the initial point cloud image after distortion correction; R_i is the total rotation matrix; p_i is the coordinate value of each point cloud datum in the initial point cloud image before distortion correction; T_i is the displacement matrix; [θ_x, θ_y, θ_z]^T is the attitude matrix; C is the scanning period of the first laser radar; [·]^T denotes the transpose; t_i is the time difference between the current frame and the previous frame adjacent to the current frame; α̂ is the first yaw angle; β̂ is the first pitch angle; γ̂ is the first heading angle; α is the second yaw angle; β is the second pitch angle; γ is the second heading angle; x, y, z are the three coordinate components of the displacement; and R_x, R_y, R_z are the rotation matrices around the X-axis, Y-axis and Z-axis respectively.
Further, in some embodiments of the present invention, as shown in fig. 4, S102 includes:
S401, determining an initial plane model;
wherein, in some embodiments of the invention, the initial plane model is a linear model.
S402, determining a seed ground point cloud set and a seed non-ground point cloud set;
S403, optimizing the initial plane model;
S404, calculating an orthogonal projection distance between the point cloud data in the seed ground point cloud set and the optimized plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set;
S405, judging whether the number of optimizations of the initial plane model is smaller than a threshold number; if so, repeating S403-S404; if not, stopping optimizing the initial plane model, in which case the seed ground point cloud set and the seed non-ground point cloud set are the target ground point cloud set and the target non-ground point cloud set respectively.
Further, as shown in fig. 5, S402 includes:
S501, sorting the plurality of point cloud data in a preset height order;
S502, calculating the average height of the plurality of point cloud data;
S503, traversing the plurality of point cloud data, and taking the point cloud data whose height is smaller than the average height as an initial ground point set;
S504, calculating the orthogonal projection distance between the point cloud data in the initial ground point set and the initial plane model, wherein if the orthogonal projection distance is smaller than the threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is larger than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set.
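A compact sketch of this segmentation loop (S401-S405 together with S501-S504) follows; the least-squares plane fit, the 0.2 m threshold distance, the three optimization iterations, and the reclassification of all points in each iteration are illustrative assumptions, since the patent leaves the plane model and both thresholds unspecified:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x + d = 0 through points, with |n| = 1."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # direction of least variance = plane normal
    return normal, -normal @ centroid

def segment_ground(points, dist_thresh=0.2, n_iters=3):
    """Split a point cloud into (ground, non-ground) sets."""
    points = np.asarray(points, dtype=float)
    # S501-S503: seed with the points lying below the average height.
    seed_mask = points[:, 2] < points[:, 2].mean()
    ground = points[seed_mask]
    for _ in range(n_iters):  # S403-S405: refine the plane a fixed number of times.
        normal, d = fit_plane(ground)
        # S404/S504: orthogonal projection distance of each point to the plane.
        dist = np.abs(points @ normal + d)
        ground_mask = dist < dist_thresh
        ground = points[ground_mask]
    return points[ground_mask], points[~ground_mask]
```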
Through the arrangement, the drivable area of the unmanned logistics trolley can be determined for subsequent path planning of the unmanned logistics trolley.
By identifying and deleting outliers, rain and snow noise points can be effectively removed while the details of the environmental features are preserved, further improving the robustness and accuracy of the environment perception of the unmanned logistics trolley.
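The patent does not name a specific outlier filter, so the following sketch uses a common k-nearest-neighbor statistical criterion as a stand-in for the outlier identification step; k and the standard-deviation ratio are assumed values:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is abnormal."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    # k+1 because each point's nearest neighbor is itself (distance 0).
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```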
Further, fig. 6 shows the hardware platform of the unmanned logistics trolley environment sensing method provided by the embodiment of the invention. The platform processes the sensing data from the sixteen-line mechanical lidar, the four-line solid-state lidar and the ultrasonic radar on an NVIDIA TX2 running the Robot Operating System (ROS); one CAN channel of the TX2 is connected with the ultrasonic radar and the GPS/IMU. The point cloud data of the sixteen-line mechanical lidar and the four-line solid-state lidar are transmitted to the TX2 through Ethernet. Finally, the detected environment information is output through another CAN channel of the TX2 to a CompactRIO, which serves as the lower computer for path planning and motion control.
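As an illustration of how the TX2 host might receive the two lidar streams under ROS, the minimal node below subscribes to two point cloud topics; the node name, topic names and use of rospy are assumptions for the example, since the patent does not specify them:

```python
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # Each callback receives one scan as a serialized point cloud.
    rospy.loginfo("cloud of %d bytes in frame %s", len(msg.data), msg.header.frame_id)

if __name__ == "__main__":
    rospy.init_node("trolley_perception")
    rospy.Subscriber("/lidar16/points", PointCloud2, on_cloud)  # 16-line mechanical lidar
    rospy.Subscriber("/lidar4/points", PointCloud2, on_cloud)   # 4-line solid-state lidar
    rospy.spin()
```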
On the other hand, in order to better implement the method for sensing the environment of the unmanned logistics trolley in the embodiment of the present invention, on the basis of the method for sensing the environment of the unmanned logistics trolley, as shown in fig. 7, correspondingly, the embodiment of the present invention further provides a system for sensing the environment of the unmanned logistics trolley, where the system 700 for sensing the environment of the unmanned logistics trolley includes:
the first perception module 701 is used for scanning surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the distance measurement ranges of the at least two laser radars are different;
a segmentation module 702 configured to determine a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
the second sensing module 703 is configured to obtain first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fuse the first local environment information with a target non-ground point cloud set to generate a target point cloud image;
a target detection module 704, configured to collect at least two coordinate values of the same obstacle information in the target point cloud image according to at least two laser radars, respectively;
the fusion module 705 is configured to fuse the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fusion coordinate value, and use the fusion coordinate value as the coordinate value of the obstacle information.
On the other hand, an embodiment of the present invention further provides a computer device, which integrates any one of the unmanned logistics trolley environment sensing systems provided by the embodiment of the present invention, where the computer device includes:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the processor to perform the steps of the unmanned logistics car environment awareness method in any of the above embodiments of the unmanned logistics car environment awareness method.
Specifically, in this embodiment, the computer device can thus implement various functions, as follows:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a computer-readable storage medium, which may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like. The storage medium stores a computer program, and the computer program is loaded by a processor to execute the steps of any one of the unmanned logistics trolley environment sensing methods provided by the embodiments of the invention. For example, the computer program may be loaded by a processor to perform the steps of:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
acquiring first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively acquiring at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars;
and fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.
In a specific implementation, each unit or structure may be implemented as an independent entity, or may be combined arbitrarily to be implemented as one or several entities, and the specific implementation of each unit or structure may refer to the foregoing method embodiment, which is not described herein again.
The method and the system for sensing the environment of the unmanned logistics trolley provided by the invention are described in detail, a specific example is applied in the method to explain the principle and the implementation mode of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. An environment sensing method for an unmanned logistics trolley, characterized by comprising:
scanning surrounding environment information of the unmanned logistics trolley by at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
determining a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
obtaining first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fusing the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
respectively collecting at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars; and
fusing the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and taking the fused coordinate value as the coordinate value of the obstacle information.

2. The environment sensing method for an unmanned logistics trolley according to claim 1, wherein generating the environment point cloud image of the unmanned logistics trolley by the at least two laser radars comprises:
scanning surrounding environment information of the unmanned logistics trolley through a first laser radar to generate an initial point cloud image;
performing distortion correction on the initial point cloud image through a GPS/IMU to generate a corrected point cloud image; and
collecting second local environment information of the unmanned logistics trolley through a second laser radar, and fusing the second local environment information with the corrected point cloud image to generate the environment point cloud image.

3. The environment sensing method for an unmanned logistics trolley according to claim 2, wherein performing distortion correction on the initial point cloud image through the GPS/IMU comprises:
respectively acquiring, through the GPS/IMU, first pose information of the current frame of the unmanned logistics trolley and second pose information of the previous frame adjacent to the current frame within a scanning period of the first laser radar; and
performing distortion correction on the initial point cloud image through preset coordinate conversion, the first pose information and the second pose information.

4. The environment sensing method for an unmanned logistics trolley according to claim 3, wherein the first pose information comprises a first yaw angle, a first pitch angle, a first heading angle and a displacement, and the second pose information comprises a second yaw angle, a second pitch angle and a second heading angle; and the distortion correction of the initial point cloud image through the preset coordinate conversion, the first pose information and the second pose information is specifically:

p' = R_i · p_i + T_i

R_i = R_z(θ_z) · R_y(θ_y) · R_x(θ_x)

[θ_x, θ_y, θ_z]^T = (t_i / C) · [α̂ − α, β̂ − β, γ̂ − γ]^T

T_i = (t_i / C) · [x, y, z]^T

where p' is the coordinate value of each point cloud datum in the initial point cloud image after distortion correction; R_i is the total rotation matrix; p_i is the coordinate value of each point cloud datum in the initial point cloud image before distortion correction; T_i is the displacement matrix; [θ_x, θ_y, θ_z]^T is the attitude matrix; C is the scanning period of the first laser radar; [·]^T denotes the transpose; t_i is the time difference between the current frame and the previous frame adjacent to the current frame; α̂ is the first yaw angle; β̂ is the first pitch angle; γ̂ is the first heading angle; α is the second yaw angle; β is the second pitch angle; γ is the second heading angle; x, y, z are the three coordinate components of the displacement; and R_x, R_y, R_z are the rotation matrices around the X-axis, Y-axis and Z-axis respectively.

5. The environment sensing method for an unmanned logistics trolley according to claim 4, wherein determining the target ground point cloud set and the target non-ground point cloud set in the plurality of point cloud data comprises:
determining an initial plane model;
determining a seed ground point cloud set and a seed non-ground point cloud set;
optimizing the initial plane model;
calculating an orthogonal projection distance between the point cloud data in the seed ground point cloud set and the optimized plane model, wherein if the orthogonal projection distance is smaller than a threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is greater than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set; and
judging whether the number of optimizations of the initial plane model is smaller than a threshold number; if so, optimizing the initial plane model again; if not, stopping optimizing the initial plane model, the seed ground point cloud set and the seed non-ground point cloud set being the target ground point cloud set and the target non-ground point cloud set respectively.

6. The environment sensing method for an unmanned logistics trolley according to claim 5, wherein determining the seed ground point cloud set and the seed non-ground point cloud set comprises:
sorting the plurality of point cloud data in a preset height order;
calculating the average height of the plurality of point cloud data;
traversing the plurality of point cloud data, and taking the point cloud data whose height is smaller than the average height as an initial ground point set; and
calculating the orthogonal projection distance between the point cloud data in the initial ground point set and the initial plane model, wherein if the orthogonal projection distance is smaller than the threshold distance, the point cloud data belongs to the seed ground point cloud set, and if the orthogonal projection distance is greater than or equal to the threshold distance, the point cloud data belongs to the seed non-ground point cloud set.

7. The environment sensing method for an unmanned logistics trolley according to claim 2, wherein the first laser radar is a sixteen-line mechanical laser radar, and the second laser radar is a four-line solid-state laser radar.

8. The environment sensing method for an unmanned logistics trolley according to claim 7, wherein the ranging range of the sixteen-line mechanical laser radar is 20 m, and the ranging range of the four-line solid-state laser radar is 50 m.

9. An environment sensing system for an unmanned logistics trolley, characterized by comprising:
a first perception module, configured to scan surrounding environment information of the unmanned logistics trolley through at least two laser radars to generate an environment point cloud image, wherein the environment point cloud image comprises a plurality of point cloud data, and the ranging ranges of the at least two laser radars are different;
a segmentation module, configured to determine a target ground point cloud set and a target non-ground point cloud set in the plurality of point cloud data;
a second perception module, configured to obtain first local environment information of the unmanned logistics trolley through an ultrasonic radar, and fuse the first local environment information with the target non-ground point cloud set to generate a target point cloud image;
a target detection module, configured to respectively collect at least two coordinate values of the same obstacle information in the target point cloud image according to the at least two laser radars; and
a fusion module, configured to fuse the at least two coordinate values according to an optimal distributed estimation fusion algorithm to generate a fused coordinate value, and take the fused coordinate value as the coordinate value of the obstacle information.
CN202110639869.0A | Filed 2021-06-08 | Unmanned logistics trolley environment sensing method and system | Pending | CN113376638A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110639869.0A (published as CN113376638A) | 2021-06-08 | 2021-06-08 | Unmanned logistics trolley environment sensing method and system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110639869.0A (published as CN113376638A) | 2021-06-08 | 2021-06-08 | Unmanned logistics trolley environment sensing method and system

Publications (1)

Publication Number | Publication Date
CN113376638A | 2021-09-10

Family

Family ID: 77573062

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110639869.0A (pending; published as CN113376638A) | Unmanned logistics trolley environment sensing method and system | 2021-06-08 | 2021-06-08

Country Status (1)

Country | Link
CN | CN113376638A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113607166A (en)* | 2021-10-08 | 2021-11-05 | 广东省科学院智能制造研究所 | Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
CN113759947A (en)* | 2021-09-10 | 2021-12-07 | 中航空管系统装备有限公司 | Airborne flight obstacle avoidance auxiliary method, device and system based on laser radar
CN114676789A (en)* | 2022-04-11 | 2022-06-28 | 广州赛特智能科技有限公司 | Point cloud fusion method and device, computer equipment and storage medium
WO2025025479A1 (en)* | 2023-08-02 | 2025-02-06 | 奥比中光科技集团股份有限公司 | Point cloud fusion method and apparatus, and 3D printer


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20180061929A (en)* | 2016-11-30 | 2018-06-08 | 주식회사 모디엠 | MOBILE 3D MAPPING SYSTEM OF RAILWAY FACILITIES EQUIPPED WITH DUAL LIDAR and 3D MAPPING METHOD USING THE SAME
WO2018205119A1 (en)* | 2017-05-09 | 2018-11-15 | 深圳市速腾聚创科技有限公司 | Roadside detection method and system based on laser radar scanning
CN108037515A (en)* | 2017-12-27 | 2018-05-15 | 清华大学苏州汽车研究院(吴江) | A kind of laser radar and ultrasonic radar information fusion system and method
CN108333589A (en)* | 2018-03-13 | 2018-07-27 | 苏州青飞智能科技有限公司 | A kind of automatic driving vehicle obstacle detector
CN110244321A (en)* | 2019-04-22 | 2019-09-17 | 武汉理工大学 | A detection method of road passable area based on 3D lidar
DE102019123483A1 (en)* | 2019-09-02 | 2021-03-04 | Audi Ag | Method and motor vehicle control unit for detecting the surroundings of a motor vehicle by merging sensor data at point cloud level
CN112731449A (en)* | 2020-12-23 | 2021-04-30 | 深圳砺剑天眼科技有限公司 | Laser radar obstacle identification method and system
CN112859051A (en)* | 2021-01-11 | 2021-05-28 | 桂林电子科技大学 | Method for correcting laser radar point cloud motion distortion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
杨波; 程维政; 朱超: "小世界效应加速生物地理学优化的社团识别算法" [Community identification algorithm based on biogeography-based optimization accelerated by the small-world effect], 哈尔滨工业大学学报, vol. 52, no. 3 *
潘泉: "机载LiDAR系统原理与点云处理方法" [Principles of Airborne LiDAR Systems and Point Cloud Processing Methods], 国防工业出版社, 31 October 2009 *
袁金国 (编著): "遥感图像数字处理" [Digital Processing of Remote Sensing Images], 中国环境科学出版社, 28 February 2006 *
赵胜强: "车载激光点云典型地物提取技术研究" [Research on extraction of typical ground objects from vehicle-borne laser point clouds], 铁道勘察, no. 4 *
郭鹏飞: "融合LiDAR点云与影像数据的矿区建筑物提取" [Building extraction in mining areas by fusing LiDAR point clouds and image data], 西安交通大学出版社, 31 December 2019 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113759947A (en)* | 2021-09-10 | 2021-12-07 | 中航空管系统装备有限公司 | Airborne flight obstacle avoidance auxiliary method, device and system based on laser radar
CN113759947B (en)* | 2021-09-10 | 2023-08-08 | 中航空管系统装备有限公司 | Airborne flight obstacle avoidance assisting method, device and system based on laser radar
CN113607166A (en)* | 2021-10-08 | 2021-11-05 | 广东省科学院智能制造研究所 | Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
CN113607166B (en)* | 2021-10-08 | 2022-01-07 | 广东省科学院智能制造研究所 | Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
CN114676789A (en)* | 2022-04-11 | 2022-06-28 | 广州赛特智能科技有限公司 | Point cloud fusion method and device, computer equipment and storage medium
WO2025025479A1 (en)* | 2023-08-02 | 2025-02-06 | 奥比中光科技集团股份有限公司 | Point cloud fusion method and apparatus, and 3D printer

Similar Documents

Publication | Publication Date | Title
CN111929699B (en) A Lidar Inertial Navigation Odometer and Mapping Method and System Considering Dynamic Obstacles
CN110645974B (en) An indoor map construction method for mobile robots based on fusion of multi-sensors
CN113376638A (en)Unmanned logistics trolley environment sensing method and system
KR102581263B1 (en)Method, apparatus, computing device and computer-readable storage medium for positioning
CN110675307B (en)Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM
CN111880191B (en)Map generation method based on multi-agent laser radar and visual information fusion
EP3264364B1 (en)Method and apparatus for obtaining range image with uav, and uav
Weon et al.Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle
Hebel et al.Change detection in urban areas by object-based analysis and on-the-fly comparison of multi-view ALS data
CN112101092A (en) Automatic driving environment perception method and system
CN113985405B (en) Obstacle detection method and obstacle detection device for vehicle
CN112414417B (en)Automatic driving map generation method and device, electronic equipment and readable storage medium
CN110889808A (en)Positioning method, device, equipment and storage medium
CN115993825A (en)Unmanned vehicle cluster control system based on air-ground cooperation
CN114964212A (en) Multi-machine collaborative fusion positioning and mapping method for unknown space exploration
CN114398455B (en)Heterogeneous multi-robot collaborative SLAM map fusion method
CN111257882B (en)Data fusion method and device, unmanned equipment and readable storage medium
CN118603111A (en) Multi-source sensor information fusion and verification method, device and computing equipment for road sweeper
CN115457070A (en)Multi-sensor fusion-based target detection method and medium for water surface traffic
CN112486172A (en)Road edge detection method and robot
CN115082562A (en)External parameter calibration method, device, equipment, server and vehicle-mounted computing equipment
CN119399282B (en)Positioning and mapping method and system based on multi-source data
Wang et al.A Path Planning Algorithm for UAV 3D Surface Inspection Based on Normal Vector Filtering and Integrated Viewpoint Evaluation
CN119879897A (en)Efficient mapped navigation positioning method and system based on multi-sensor fusion
CN119124146A (en) A dynamic environment positioning method based on lidar and IMU

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-09-10
