CN112254729B - Mobile robot positioning method based on multi-sensor fusion - Google Patents

Mobile robot positioning method based on multi-sensor fusion

Info

Publication number: CN112254729B (application number CN202011071053.4A)
Authority: CN (China)
Prior art keywords: pose, robot, global, time, space
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed); anticipated expiration is tracked as a legal-status event
Other languages: Chinese (zh)
Other versions: CN112254729A
Inventors: 李梦凡, 李位星
Current assignee: Beijing Institute of Technology BIT (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Institute of Technology BIT
Application filed by: Beijing Institute of Technology BIT
Priority and filing date: 2020-10-09 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Publication of CN112254729A: 2021-01-22
Publication of CN112254729B (grant): 2024-08-06

Abstract

The invention discloses a mobile robot positioning method based on multi-sensor fusion. By fusing the complementary strengths of multiple positioning sensors, the method addresses the poor accuracy of purely visual-inertial positioning in weak-texture environments and the weak robustness of laser odometry in dynamic or structurally repetitive environments. It offers high reliability and positioning accuracy while keeping overall hardware cost low and placing modest computational demands on the processing platform.

Description

Mobile robot positioning method based on multi-sensor fusion
Technical Field
The invention belongs to the technical field of mobile robot navigation and positioning, and particularly relates to a mobile robot positioning method based on multi-sensor fusion.
Background
Positioning and navigation technology is one of the key technologies that enable a robot to move autonomously; it provides the guidance and assurance needed for obstacle avoidance, motion to target points, autonomous mobile operation, and similar functions.
Because active GPS positioning signals are often unavailable or of low precision in the operating environment, the sensors commonly used for robot positioning are mainly lidar, IMUs, and vision sensors. However, because the application scenarios of robots are complex, a single sensor rarely yields a universally applicable positioning method: 2D lidar performs poorly in dynamic environments or environments with repetitive geometric structure, and visual positioning methods have low accuracy in weak-texture scenes.
Disclosure of Invention
In view of the above, the invention provides a mobile robot positioning method based on multi-sensor fusion that fuses global positioning information to realize high-precision, high-reliability active positioning of a mobile robot and to provide global positioning.
The invention provides a mobile robot positioning method based on multi-sensor fusion, which comprises the following steps:
Step 1, installing a plurality of sensors on the robot body to be positioned; according to the data acquired by the sensors, estimating a preliminary spatial pose of the robot to be positioned by visual-inertial combined pose estimation, and acquiring preliminary positioning data of the robot to be positioned from a laser odometer;
Step 2, fusing the preliminary spatial pose and the preliminary positioning data into the spatial pose of the robot to be positioned using an extended Kalman filter; if this is the first calculation or the update period has been reached, executing step 3; otherwise, executing step 4;
Step 3, calculating the global measurement pose of the robot to be positioned according to a global spatial pose coordinate system, and solving the transformation matrix between the spatial pose and the global measurement pose at the same moment;
Step 4, converting the spatial pose into a real-time global pose during the movement of the robot to be positioned by applying the transformation matrix.
Further, the global spatial pose coordinate system in step 3 is established using two-dimensional codes (fiducial tags).
Further, solving the transformation matrix between the spatial pose and the global measurement pose at the same moment in step 3 comprises the following step:
solving the transformation matrix between the spatial pose and the global measurement pose by a least-squares method, constructing a residual function over the following quantities:
where P_i is the spatial pose of the robot to be positioned at time i, P_{i-1} is the spatial pose at time i-1, ΔP is the increment of the spatial pose between two adjacent times, P_m is the real-time global pose obtained by applying the transformation matrix to the spatial pose at time m, G_m is the global measurement pose of the robot at time m, Σ_i is the covariance of the spatial pose at time i, Σ_g is the covariance of the global measurement pose, and r_{m,n} is the residual between the spatial poses at times m and n and the real-time global poses computed through the transformation matrix.
The beneficial effects are that:
1. By fusing the complementary strengths of multiple positioning sensors, the invention addresses the poor accuracy of purely visual-inertial positioning in weak-texture environments and the weak robustness of laser odometry in dynamic or structurally repetitive environments; it offers high reliability and positioning accuracy while keeping overall hardware cost low and placing modest computational demands on the processing platform.
2. The invention provides a simple and effective way to integrate global pose estimates: by extending the framework to incorporate additional active positioning information, it effectively reduces the accumulated error of the odometry, realizes global positioning, and improves the generality of the method.
Drawings
Fig. 1 is a flowchart of a mobile robot positioning method based on multi-sensor fusion provided by the invention.
Fig. 2 is a flowchart of fusing multi-source pose information with an extended Kalman filter in the mobile robot positioning method based on multi-sensor fusion.
Fig. 3 is a flowchart of adding global positioning information to the fusion estimation in the mobile robot positioning method based on multi-sensor fusion.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
The invention provides a mobile robot positioning method based on multi-sensor fusion, which is shown in Fig. 1 and specifically comprises the following steps:
Step 1, installing a plurality of sensors on the robot body to be positioned; according to the data acquired by the sensors, estimating the preliminary spatial pose of the robot to be positioned by visual-inertial combined pose estimation, and acquiring the preliminary positioning data of the robot to be positioned from a laser odometer.
Specifically, the process of first mounting the sensors on the robot body to be positioned is as follows:
Step 1.1, installing and calibrating the sensors.
A 2D lidar, an IMU, and a monocular camera are adopted as the positioning sensors; other sensors such as a wheel-speed sensor can also be integrated. During installation, the IMU and the monocular camera should be placed as close to the center of the robot body as possible. A calibration tool is used to calibrate the random errors and data noise of the IMU, and at the same time the extrinsic parameters between the IMU and the monocular camera and between the IMU and the lidar are calibrated.
Step 1.2, collecting, reading, and synchronizing multi-sensor information.
An industrial PC collects data from the sensors in real time over a USB interface and stamps each measurement with a timestamp. Timestamp alignment is then performed to ensure that matching IMU data exist between any two adjacent frames of image data and laser point cloud data.
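By way of illustration only (the patent does not prescribe an implementation), a minimal sketch of this timestamp-alignment step in Python, assuming each sensor stream is stored as a sorted list of timestamps with a parallel data list, could look as follows; the function name and data layout are hypothetical:

```python
import bisect

def imu_between(imu_stamps, imu_data, t_start, t_end):
    """Return the IMU samples whose timestamps fall in (t_start, t_end],
    i.e. the inertial data bracketed by two adjacent image or point cloud
    frames; imu_stamps must be sorted in ascending order."""
    lo = bisect.bisect_right(imu_stamps, t_start)
    hi = bisect.bisect_right(imu_stamps, t_end)
    return imu_data[lo:hi]

# Usage: for consecutive image timestamps t0 < t1, the segment
# imu_between(imu_stamps, imu_data, t0, t1) is what is later fed to the
# IMU pre-integration between the two frames.
```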
Then, based on the information acquired by the sensors, the preliminary spatial pose of the robot to be positioned is estimated by the visual-inertial combined pose estimation, and the preliminary positioning data of the robot are obtained from the laser odometer. The specific process is as follows:
Step 1.3, feeding the time-aligned IMU data and image data into the visual-inertial pose estimation module and establishing a motion model. FAST corner points are extracted from the camera image, and their positions in the next image are tracked by optical flow. The IMU data are pre-integrated to obtain a motion estimate over the time interval between two adjacent image frames.
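As a non-authoritative sketch of this feature extraction and tracking step (the patent names FAST corners and optical flow but no specific library; the OpenCV calls and parameter values below are assumptions):

```python
import cv2
import numpy as np

def track_fast_corners(prev_gray, curr_gray):
    """Detect FAST corners in the previous grayscale image and track them
    into the current image with pyramidal Lucas-Kanade optical flow."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = fast.detect(prev_gray, None)
    pts_prev = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts_prev, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return pts_prev[ok], pts_curr[ok]   # matched corner pairs between frames
```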
In the module initialization stage, the five-point method is used to compute the motion estimate between adjacent image frames. The visual motion estimates are aligned with the IMU pre-integration quantities, a residual function is constructed, and nonlinear optimization is used to obtain the rotational extrinsics between the camera and the IMU, the random-error estimate of the gyroscope in the IMU, the gravity direction in the camera frame, the initial motion velocity of the robot, the scale of the system, and related quantities. The camera coordinate system is thereby aligned with the world coordinate system, completing the initialization.
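For illustration only, the five-point relative-motion computation between adjacent frames could be sketched with OpenCV as below, assuming the matched point arrays come from the tracking step above and K is the calibrated camera intrinsic matrix; this is not the patent's code:

```python
import cv2

def relative_motion(pts_prev, pts_curr, K):
    """Estimate the relative rotation R and (up-to-scale) translation t
    between two frames from matched points using the five-point method
    with RANSAC, as used during initialization."""
    E, inlier_mask = cv2.findEssentialMat(
        pts_prev, pts_curr, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inlier_mask)
    return R, t   # the metric scale is recovered later by aligning with the IMU
```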
In subsequent motion estimation, the camera pose is predicted from the IMU pre-integration, feature points are triangulated from the displacement between adjacent image frames, and a sliding-window algorithm is used to construct, for a fixed number of pose estimation variables, a residual function of the form described below.
The residual terms correspond, from left to right, to the prior residual, the spatial-point re-projection error, and the IMU residual; a more accurate estimate of the robot pose is then obtained through graph optimization.
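The formula itself is published as an image and is not reproduced in this text. A standard sliding-window visual-inertial cost with that structure, written here in the widely used VINS-style notation as an assumption rather than a quotation of the patent, is:

$$
\min_{\mathcal{X}}\ \Big\{\ \big\|\mathbf{r}_p-\mathbf{H}_p\,\mathcal{X}\big\|^2
\;+\;\sum_{(l,j)\in\mathcal{C}}\big\|\mathbf{r}_{\mathcal{C}}\big(\hat{\mathbf{z}}^{c_j}_l,\ \mathcal{X}\big)\big\|^2_{\boldsymbol{\Sigma}^{c_j}_l}
\;+\;\sum_{k\in\mathcal{B}}\big\|\mathbf{r}_{\mathcal{B}}\big(\hat{\mathbf{z}}^{b_k}_{b_{k+1}},\ \mathcal{X}\big)\big\|^2_{\boldsymbol{\Sigma}^{b_k}_{b_{k+1}}}\ \Big\}
$$

where $\mathcal{X}$ collects the sliding-window states, the first term is the prior residual, the second the re-projection residuals of the triangulated feature points, and the third the IMU pre-integration residuals.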
Step 1.4, after the initialization of the visual-inertial pose estimation module is completed, waiting for the next frame of laser point cloud data. According to the timestamp of the point cloud, the IMU random-walk error parameters estimated in step 1.3 are used to obtain a more accurate IMU pre-integration, and the initial pose of the laser odometry is obtained from the pre-integrated pose by linear interpolation, completing its initialization. Then, by linearly interpolating the IMU pre-integrated poses, the pose displacement of each point in a point cloud frame relative to the frame's initial acquisition time is obtained, and the motion distortion in each laser point is removed using spherical linear interpolation. After the point cloud distortion is corrected, the IMLS-ICP method is used to estimate the motion of the point cloud between adjacent frames and to update the pose coordinates x and y and the yaw angle.
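As an illustrative sketch only (not the patent's implementation), removing the motion distortion of a single laser point by spherical linear interpolation between the poses at the start and end of the scan could be written with SciPy's rotation utilities as follows; the function name and argument layout are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_point(p, t, t0, t1, R0, R1, trans0, trans1):
    """Remove motion distortion from one laser point p (3-vector) captured at
    time t within a scan spanning [t0, t1], given the scan-start and scan-end
    poses (Rotation objects R0, R1 and translations trans0, trans1) obtained
    from the interpolated IMU pre-integration. Returns the point expressed in
    the scan-start frame."""
    s = (t - t0) / (t1 - t0)                           # normalized time within the scan
    key_rots = Rotation.from_quat(np.vstack([R0.as_quat(), R1.as_quat()]))
    R_t = Slerp([0.0, 1.0], key_rots)([s])[0]          # slerp rotation at time t
    trans_t = (1.0 - s) * trans0 + s * trans1          # linear interpolation of translation
    p_common = R_t.apply(p) + trans_t                  # point in the common (odometry) frame
    return R0.inv().apply(p_common - trans0)           # re-expressed at the scan start
```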
Step 2, fusing the preliminary spatial pose and the preliminary positioning data into the spatial pose of the robot to be positioned using an extended Kalman filter, as shown in Fig. 2.
A prediction model and an observation model of the system need to be established. The system state is 6-dimensional, corresponding to x, y, z, roll, pitch, and yaw, and the state prediction function is set as described below.
Here vel[0] and vel[1] are the control inputs of the system: vel[0] is the change of the yaw angle of the robot pose between time t-1 and time t, and vel[1] is the displacement between time t-1 and time t. These quantities can be obtained from the IMU pre-integration, or measured by an optional wheel-speed sensor.
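The prediction function is published as a formula image and is not reproduced in this text. Under the definitions above, a planar prediction model consistent with them, stated here as an assumption rather than a quotation of the patent, would be:

$$
\begin{aligned}
\mathrm{yaw}_t &= \mathrm{yaw}_{t-1} + \mathrm{vel}[0],\\
x_t &= x_{t-1} + \mathrm{vel}[1]\,\cos(\mathrm{yaw}_t),\\
y_t &= y_{t-1} + \mathrm{vel}[1]\,\sin(\mathrm{yaw}_t),\\
z_t &= z_{t-1},\qquad \mathrm{roll}_t=\mathrm{roll}_{t-1},\qquad \mathrm{pitch}_t=\mathrm{pitch}_{t-1}.
\end{aligned}
$$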
For the observation model, the measurement update is linear and the observation matrix H is a 6×6 identity matrix. The laser odometry contributes observation updates of the x, y, and yaw state quantities, while the visual-inertial combined pose estimation module contributes observation updates of all of x, y, z, roll, pitch, and yaw. Meanwhile, to avoid coordinate-system conversions between the different pose estimation modules, the extended Kalman filter fuses the pose increments estimated by each module over the same time interval.
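As a sketch only, a linear EKF measurement update with an identity observation matrix restricted to the states each module observes could be written as follows; the state ordering (x, y, z, roll, pitch, yaw) and the index masks are assumptions consistent with the text:

```python
import numpy as np

def ekf_update(x, P, z, R, idx):
    """EKF measurement update for a 6-dimensional state x with covariance P,
    given a measurement z of the state components listed in idx and its
    covariance R. H is the corresponding selection of rows of the identity."""
    H = np.eye(6)[idx, :]                 # rows of the 6x6 identity observation matrix
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

# Laser-odometry increment update of (x, y, yaw):  ekf_update(x, P, z3, R3, [0, 1, 5])
# Visual-inertial increment update of all states:  ekf_update(x, P, z6, R6, list(range(6)))
```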
Step 3, calculating the global measurement pose of the robot to be positioned according to the global spatial pose coordinate system, and solving the transformation matrix between the spatial pose and the global measurement pose at the same moment, as shown in Fig. 3.
A limited number of two-dimensional code tags are produced; their physical size is measured and each tag is assigned a unique ID, and the camera intrinsics are calibrated to obtain the focal length information. A global coordinate system of the indoor positioning space is established, the tags are distributed sparsely throughout the space, and the three-dimensional coordinates of each tag are measured. During operation, the robot extracts the information in a tag with the help of the AprilTag library, computes the pose relationship between the tag and the robot, and thereby intermittently computes the global coordinates of the robot body. Acquiring global measurement pose coordinates is not limited to the manner described in this example; it could also be provided by an active positioning system such as UWB.
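For illustration only, once a tag detection yields the tag pose in the camera frame (rotation R_ct, translation t_ct, e.g. from an AprilTag detector) and the surveyed tag pose in the global frame is (R_gt, t_gt), the global measurement pose of the camera follows from frame composition; the helper below is a hypothetical sketch, not the patent's code:

```python
import numpy as np

def camera_pose_in_global(R_gt, t_gt, R_ct, t_ct):
    """Compose the surveyed tag-in-global pose (R_gt, t_gt) with the detected
    tag-in-camera pose (R_ct, t_ct) to obtain the camera-in-global pose:
    T_global<-camera = T_global<-tag * inverse(T_camera<-tag)."""
    R_tc = R_ct.T                    # invert the tag-in-camera rotation
    t_tc = -R_tc @ t_ct
    R_gc = R_gt @ R_tc
    t_gc = R_gt @ t_tc + t_gt
    return R_gc, t_gc                # global measurement pose of the camera/robot
```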
If the global measurement pose of the robot were computed from the two-dimensional codes alone, computation errors would remain and both the computational load and the computation time would increase. Instead, the actively positioned spatial pose is aligned with the global measurement pose by solving the transformation matrix between the two, and the spatial pose is then converted into the real-time global pose through this transformation matrix, which effectively improves the real-time performance and accuracy of obtaining the global pose of the robot.
Specifically, the invention uses a least-squares method to solve for the transformation matrix between the spatial pose and the global measurement pose, constructing a residual function over the following quantities:
where P_i is the spatial pose of the robot to be positioned at time i, P_{i-1} is the spatial pose at time i-1, ΔP is the increment of the spatial pose between two adjacent times, P_m is the real-time global pose obtained by applying the transformation matrix to the spatial pose at time m, G_m is the global measurement pose of the robot at time m, Σ_i is the covariance of the spatial pose at time i, Σ_g is the covariance of the global measurement pose, and r_{m,n} is the residual between the spatial poses at times m and n and the real-time global poses computed through the transformation matrix.
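The residual function is published as a formula image and is not reproduced in this text. Purely as an illustrative sketch of the alignment idea (planar case, uniform weights, and a closed-form least-squares solution rather than the patent's exact residual), fitting the transformation that maps the spatial-pose trajectory onto the global measurement poses could look as follows:

```python
import numpy as np

def fit_se2_transform(P_xy, G_xy):
    """Closed-form least-squares rigid transform (R, t) in the plane that maps
    the spatial-pose positions P_xy (N x 2) onto the corresponding global
    measurement positions G_xy (N x 2), minimizing sum ||R p + t - g||^2."""
    mu_p, mu_g = P_xy.mean(axis=0), G_xy.mean(axis=0)
    H = (P_xy - mu_p).T @ (G_xy - mu_g)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                  # proper rotation, no reflection
    t = mu_g - R @ mu_p
    return R, t

# Afterwards any real-time spatial pose p is mapped into the global frame as
# g = R @ p + t, which is the role of the transformation matrix in step 4.
```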
The transformation matrix must be computed when the real-time global pose of the robot is calculated for the first time. In addition, as measurements are updated, the transformation matrix is recomputed whenever the computation error reaches a set threshold or the computation period reaches a set time limit, so that it is continuously and iteratively updated and the accuracy of the global pose meets the requirement.
Step 4, converting the spatial pose into the real-time global pose during the movement of the robot to be positioned using the transformation matrix.
The spatial pose obtained in step 2 is converted into the real-time global pose using the transformation matrix obtained in step 3.
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (2)

CN202011071053.4A, priority date 2020-10-09, filing date 2020-10-09: Mobile robot positioning method based on multi-sensor fusion. Status: Active. Granted as CN112254729B (en).

Priority Applications (1)

Application number: CN202011071053.4A (CN112254729B (en)); priority date: 2020-10-09; filing date: 2020-10-09; title: Mobile robot positioning method based on multi-sensor fusion.

Applications Claiming Priority (1)

Application number: CN202011071053.4A (CN112254729B (en)); priority date: 2020-10-09; filing date: 2020-10-09; title: Mobile robot positioning method based on multi-sensor fusion.

Publications (2)

Publication number and publication date:
CN112254729A (en): 2021-01-22
CN112254729B (en): 2024-08-06

Family

ID=74233666

Family Applications (1)

Application number: CN202011071053.4A (Active, CN112254729B (en)); priority date: 2020-10-09; filing date: 2020-10-09; title: Mobile robot positioning method based on multi-sensor fusion.

Country Status (1)

Country: CN. Publication: CN112254729B (en).

Also Published As

Publication number and publication date: CN112254729A (en), 2021-01-22


Legal Events

Code and title:
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
