Disclosure of Invention
In view of the above, the invention provides a mobile robot positioning method based on multi-sensor fusion, which fuses global positioning information to achieve high-precision, high-reliability active positioning of a mobile robot and to realize global positioning.
The invention provides a mobile robot positioning method based on multi-sensor fusion, which comprises the following steps:
Step 1, installing a plurality of sensors on the body of a robot to be positioned; according to the data acquired by the sensors, estimating a preliminary spatial pose of the robot to be positioned by visual-inertial joint pose estimation, and acquiring preliminary positioning data of the robot to be positioned by laser odometry;
Step 2, fusing the preliminary spatial pose and the preliminary positioning data into the spatial pose of the robot to be positioned using an extended Kalman filter; if the current calculation is the first calculation or the update period has been reached, executing step 3; otherwise, executing step 4;
Step 3, calculating the global measurement pose of the robot to be positioned in a global spatial pose coordinate system, and solving a transformation matrix between the spatial pose and the global measurement pose at the same moment;
Step 4, converting the spatial pose into a real-time global pose during the movement of the robot to be positioned using the transformation matrix.
Further, the global spatial pose coordinate system in step 3 is established using two-dimensional codes.
Further, solving the transformation matrix between the spatial pose and the global measurement pose at the same moment in step 3 includes the following steps:
solving the transformation matrix between the spatial pose and the global measurement pose by the least-squares method, with the residual function set as follows:

$$e_{mn}=\sum_{i=m+1}^{n}\left\|\Delta P_{i}\ominus\left(P_{i-1}^{-1}P_{i}\right)\right\|_{\Sigma_{i}}^{2}+\left\|G_{m}\ominus P_{m}\right\|_{\Sigma_{g}}^{2}+\left\|G_{n}\ominus P_{n}\right\|_{\Sigma_{g}}^{2}$$

wherein $P_i$ is the spatial pose of the robot to be positioned at time $i$, $P_{i-1}$ is the spatial pose of the robot to be positioned at time $i-1$, $\Delta P_i$ is the increment of the spatial pose between the two adjacent moments, $P_m$ is the real-time global pose obtained from the spatial pose at time $m$ through the transformation matrix, $G_m$ is the global measurement pose of the robot to be positioned at time $m$, $\Sigma_i$ is the covariance of the spatial pose at time $i$, $\Sigma_g$ is the covariance of the global measurement pose, $\ominus$ denotes the pose difference, and $e_{mn}$ is the residual between the spatial poses at times $m$ and $n$ and the real-time global poses obtained through the transformation matrix.
The beneficial effects are as follows:
1. By combining the complementary advantages of multiple positioning sensors, the invention solves problems such as the poor accuracy of purely visual-inertial positioning in weakly textured environments and the weak robustness of laser odometry in dynamic or structurally repetitive environments. It offers higher reliability and positioning accuracy, while keeping the hardware cost and the computing-power requirements on the operating platform comparatively low.
2. The invention provides a simple and effective way to integrate global pose estimation. By incorporating additional active positioning information, it effectively reduces the accumulated error of the odometry, realizes global positioning, and improves the generality of the method.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
The invention provides a mobile robot positioning method based on multi-sensor fusion, which is shown in fig. 1 and specifically comprises the following steps:
Step 1, installing a plurality of sensors on the body of the robot to be positioned; then, according to the data acquired by the sensors, estimating the preliminary spatial pose of the robot to be positioned by visual-inertial joint pose estimation, and acquiring the preliminary positioning data of the robot to be positioned by laser odometry.
Specifically, the process of first mounting the plurality of sensors on the body of the robot to be positioned includes:
Step 1.1, installing and calibrating the sensors.
A 2D laser radar, an IMU, and a monocular camera are used as the positioning sensors; other sensors, such as a wheel-speed odometer, may also be integrated. When installing the IMU and the monocular camera, they should be placed as close as possible to the center of the robot body. A calibration tool is used to calibrate the random errors and data noise of the IMU, and the extrinsic parameters between the IMU and the monocular camera and between the IMU and the laser radar are calibrated at the same time.
Step 1.2, collecting, reading, and synchronizing the multi-sensor information.
The industrial personal computer collects the data of the plurality of sensors in real time through the USB interface and marks each piece of collected data with a timestamp. Timestamp alignment is then performed to ensure that matching IMU data exist between two adjacent frames of image data and of laser point-cloud data.
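For example, with the IMU samples sorted by timestamp, the matching samples between two adjacent frames can be selected by binary search. The following Python sketch illustrates this; the function name and the sampling rates are illustrative only.

```python
import bisect

def imu_between(imu_stamps, t_prev, t_curr):
    """Return indices of all IMU samples whose timestamps fall between
    two adjacent image (or point-cloud) frames.

    imu_stamps must be sorted ascending; t_prev and t_curr are the
    timestamps of the previous and current frames."""
    lo = bisect.bisect_right(imu_stamps, t_prev)
    hi = bisect.bisect_right(imu_stamps, t_curr)
    return list(range(lo, hi))

# Hypothetical example: 200 Hz IMU, 20 Hz camera
imu_stamps = [i * 0.005 for i in range(200)]
idx = imu_between(imu_stamps, 0.10, 0.15)
print(len(idx))  # IMU samples available between the two frames
```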
Then, based on the information acquired by the sensors, the preliminary spatial pose of the robot to be positioned is estimated by visual-inertial joint pose estimation, and the preliminary positioning data of the robot to be positioned are acquired by laser odometry. The specific process is as follows:
Step 1.3, feeding the time-aligned IMU data and image data into the visual-inertial pose estimation module and establishing a motion model. FAST corner points are extracted from the camera image, and the positions of these corner points in the next adjacent image are tracked by optical flow. The IMU data are pre-integrated to obtain a motion estimate over the time interval between the two adjacent image frames.
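As an illustration, the corner extraction and tracking step can be sketched with OpenCV as follows; the parameter values (FAST threshold, window size, corner budget) are illustrative choices, not prescribed by the method.

```python
import cv2
import numpy as np

def track_fast_corners(img_prev, img_next, max_corners=150):
    """Detect FAST corners in the previous grayscale image and track
    them into the next image with pyramidal Lucas-Kanade optical flow."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps = fast.detect(img_prev, None)
    kps = sorted(kps, key=lambda k: -k.response)[:max_corners]
    pts_prev = np.float32([k.pt for k in kps]).reshape(-1, 1, 2)

    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(
        img_prev, img_next, pts_prev, None,
        winSize=(21, 21), maxLevel=3)

    ok = status.ravel() == 1     # keep only successfully tracked corners
    return pts_prev[ok], pts_next[ok]
```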
In the module initialization stage, the five-point method is used to compute the motion estimate between adjacent image frames. The visual motion estimate is aligned with the IMU pre-integration, the constructed residual function is established, and nonlinear optimization is used to obtain the rotational extrinsic parameters between the camera and the IMU, the random-error estimate of the gyroscope in the IMU, the gravity direction in the camera frame, the initial motion speed of the robot, the scale of the system, and other information. The camera coordinate system is thereby aligned with the world coordinate system, and the initialization is completed.
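The five-point relative motion estimate can be obtained, for example, through OpenCV's essential-matrix routines; the sketch below assumes matched pixel coordinates pts1 and pts2 and a calibrated intrinsic matrix K.

```python
import cv2

def five_point_motion(pts1, pts2, K):
    """Estimate relative camera motion between two frames from matched
    points via the five-point algorithm (essential matrix + RANSAC).
    The translation is recovered only up to scale; the scale is fixed
    later by aligning with the IMU pre-integration."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and unit-scale translation
```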
In the subsequent motion estimation, the camera pose is predicted through IMU pre-integration, feature points are triangulated from the displacement between adjacent image frames, and, using a sliding-window algorithm, a residual function of the following form can be constructed over a fixed number of pose estimation variables:

$$\min_{\mathcal{X}}\left\{\left\|r_{p}-H_{p}\mathcal{X}\right\|^{2}+\sum_{(l,j)\in\mathcal{C}}\left\|r_{\mathcal{C}}\left(\hat{z}_{l}^{c_{j}},\mathcal{X}\right)\right\|_{\Sigma_{\mathcal{C}}}^{2}+\sum_{k\in\mathcal{B}}\left\|r_{\mathcal{B}}\left(\hat{z}_{b_{k+1}}^{b_{k}},\mathcal{X}\right)\right\|_{\Sigma_{\mathcal{B}}}^{2}\right\}$$

The residual terms in the above formula correspond, from left to right, to the prior residual term, the spatial-point re-projection errors, and the IMU residuals, where $\mathcal{X}$ denotes the pose estimation variables in the sliding window. A more accurate estimate of the robot pose is then obtained through graph optimization.
Step 1.4, after the initialization of the visual-inertial pose estimation module is completed, waiting for the next frame of laser point-cloud data. According to the timestamp of the point-cloud data, the IMU random-walk error parameters estimated in step 1.3 are used to obtain a more accurate IMU pre-integration, and the corresponding initial pose of the laser odometry is obtained from the pre-integrated poses by linear interpolation, completing the initialization. Then, the displacement of the pose at each moment within one frame of point cloud, relative to the initial acquisition moment, is obtained by linearly interpolating the IMU pre-integrated poses, and the motion distortion in each laser point is removed by spherical linear interpolation. After the point-cloud distortion correction is completed, the IMLS-ICP method is used to estimate the motion of the point cloud between adjacent frames, and the pose coordinates x and y and the yaw angle are updated.
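A minimal de-skewing sketch, assuming the pre-integrated poses are available as a rotation sequence plus translations and that every point timestamp lies within the pose time range, could look as follows:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, point_times, pose_times, rotations, translations):
    """Remove motion distortion from one LiDAR scan.

    points       : (N, 3) points in the sensor frame
    point_times  : (N,) acquisition time of each point
    pose_times   : (M,) timestamps of the IMU pre-integrated poses
    rotations    : scipy Rotation object holding M orientations
    translations : (M, 3) positions at pose_times

    Rotation is interpolated by slerp and translation linearly; each
    point is then re-expressed in the pose at the scan start time."""
    slerp = Slerp(pose_times, rotations)
    R_t = slerp(point_times)                        # per-point orientation
    p_t = np.stack([np.interp(point_times, pose_times, translations[:, k])
                    for k in range(3)], axis=1)     # per-point position

    R0 = slerp([point_times[0]])[0]                 # pose at scan start
    p0 = p_t[0]
    world = R_t.apply(points) + p_t                 # points in world frame
    return R0.inv().apply(world - p0)               # back into start pose
```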
Step 2, fusing the preliminary spatial pose and the preliminary positioning data into the spatial pose of the robot to be positioned using an extended Kalman filter, as shown in fig. 2.
A prediction model and an observation model of the system need to be established. The state variables of the system are six-dimensional, corresponding to x, y, z, roll, pitch, and yaw respectively, and the state prediction function is set as follows:

$$\begin{aligned} x_{t}&=x_{t-1}+\mathrm{vel}[1]\cos\left(\mathrm{yaw}_{t-1}+\mathrm{vel}[0]\right)\\ y_{t}&=y_{t-1}+\mathrm{vel}[1]\sin\left(\mathrm{yaw}_{t-1}+\mathrm{vel}[0]\right)\\ z_{t}&=z_{t-1}\\ \mathrm{roll}_{t}&=\mathrm{roll}_{t-1}\\ \mathrm{pitch}_{t}&=\mathrm{pitch}_{t-1}\\ \mathrm{yaw}_{t}&=\mathrm{yaw}_{t-1}+\mathrm{vel}[0] \end{aligned}$$

wherein vel[0] and vel[1] are the control quantities input to the system: vel[0] is the change of the yaw angle of the robot pose between time t-1 and time t, and vel[1] is the displacement between time t-1 and time t. These data can be obtained from the IMU pre-integration, or measured by an extended wheel-speed odometer.
For the observation model, the measurement update is linear, and the observation matrix H is a 6×6 identity matrix. The laser odometry contributes observation updates of the x, y, and yaw state quantities, and the visual-inertial joint pose estimation module contributes observation updates of all six state quantities x, y, z, roll, pitch, and yaw. Meanwhile, to avoid coordinate-system conversions between different pose estimation modules, the extended Kalman filter fuses the pose increments estimated by the pose estimation modules over the same time interval.
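The following Python sketch shows one possible predict/update cycle for this filter, using the planar prediction model above; the class name, noise values, and index convention are illustrative.

```python
import numpy as np

class PoseEKF:
    """Minimal EKF over the 6-D state [x, y, z, roll, pitch, yaw],
    driven by vel[0] = yaw increment and vel[1] = displacement."""

    def __init__(self):
        self.x = np.zeros(6)
        self.P = np.eye(6) * 1e-3

    def predict(self, vel, Q):
        dyaw, ds = vel[0], vel[1]
        yaw = self.x[5] + dyaw
        # planar motion model: move ds along the new heading
        self.x[0] += ds * np.cos(yaw)
        self.x[1] += ds * np.sin(yaw)
        self.x[5] = yaw
        # Jacobian of the motion model w.r.t. the state
        F = np.eye(6)
        F[0, 5] = -ds * np.sin(yaw)
        F[1, 5] = ds * np.cos(yaw)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, R, idx):
        """z : measured state quantities; idx : which states they
        observe, e.g. [0, 1, 5] for laser odometry (x, y, yaw) or
        [0, 1, 2, 3, 4, 5] for the visual-inertial module."""
        H = np.eye(6)[idx]          # rows of the 6x6 identity matrix
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```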
Step 3, calculating the global measurement pose of the robot to be positioned in the global spatial pose coordinate system, and solving the transformation matrix between the spatial pose and the global measurement pose at the same moment, as shown in fig. 3.
A limited number of Tags (two-dimensional codes) are manufactured; the size of each manufactured tag is measured and a unique ID is assigned to each, and the intrinsic parameters of the camera are calibrated to obtain its focal-length information. A global coordinate system of the indoor positioning space is established, the tags are deployed dispersedly in the global space, and the three-dimensional coordinates of each tag are measured. During operation, the robot extracts the information in a tag by means of the AprilTag library, calculates the pose relationship between the tag and the robot, and thereby intermittently computes the global coordinates of the robot body. The acquisition of global measurement pose coordinates by the robot is not limited to the manner described in this example; it may also be provided by an active positioning system such as UWB.
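As one possible implementation, tag detection and the resulting global camera pose can be sketched with the pupil-apriltags binding; the function below assumes a grayscale image, a calibrated intrinsic matrix K, the physical tag size, and a surveyed table of tag poses in the global frame.

```python
import numpy as np
from pupil_apriltags import Detector  # one possible AprilTag binding

def global_pose_from_tag(gray, K, tag_size, tag_poses_world):
    """Detect an AprilTag and compute the camera pose in the global frame.

    K               : 3x3 camera intrinsic matrix (from calibration)
    tag_size        : physical edge length of the tags in metres
    tag_poses_world : dict mapping tag ID to its 4x4 pose in the
                      global coordinate system (measured at deployment)."""
    detector = Detector(families="tag36h11")
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for det in detector.detect(gray, estimate_tag_pose=True,
                               camera_params=(fx, fy, cx, cy),
                               tag_size=tag_size):
        if det.tag_id not in tag_poses_world:
            continue
        # pose of the tag in the camera frame
        T_cam_tag = np.eye(4)
        T_cam_tag[:3, :3] = det.pose_R
        T_cam_tag[:3, 3] = det.pose_t.ravel()
        # camera pose in the global frame: T_world_tag * inv(T_cam_tag)
        return tag_poses_world[det.tag_id] @ np.linalg.inv(T_cam_tag)
    return None  # no known tag visible in this image
```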
Calculating the global measurement pose of the robot to be positioned solely from the two-dimensional codes introduces calculation errors and increases the calculation amount and calculation time. Therefore, the actively positioned spatial pose obtained in step 2 is aligned with the global measurement pose by solving the transformation matrix between the two, and the spatial pose is then converted into the real-time global pose through this transformation matrix, which effectively improves the real-time performance and accuracy of acquiring the global pose of the robot.
Specifically, the invention uses the least-squares method to solve the transformation matrix between the spatial pose and the global measurement pose, constructing the following residual function:

$$e_{mn}=\sum_{i=m+1}^{n}\left\|\Delta P_{i}\ominus\left(P_{i-1}^{-1}P_{i}\right)\right\|_{\Sigma_{i}}^{2}+\left\|G_{m}\ominus P_{m}\right\|_{\Sigma_{g}}^{2}+\left\|G_{n}\ominus P_{n}\right\|_{\Sigma_{g}}^{2}$$

wherein $P_i$ is the spatial pose of the robot to be positioned at time $i$, $P_{i-1}$ is the spatial pose of the robot to be positioned at time $i-1$, $\Delta P_i$ is the increment of the spatial pose between the two adjacent moments, $P_m$ is the real-time global pose obtained from the spatial pose at time $m$ through the transformation matrix, $G_m$ is the global measurement pose of the robot to be positioned at time $m$, $\Sigma_i$ is the covariance of the spatial pose at time $i$, $\Sigma_g$ is the covariance of the global measurement pose, $\ominus$ denotes the pose difference, and $e_{mn}$ is the residual between the spatial poses at times $m$ and $n$ and the real-time global poses obtained through the transformation matrix.
When the real-time global pose of the robot to be positioned is calculated for the first time, the transformation matrix must be computed. In addition, as the measurements are updated, the transformation matrix is recalculated whenever the calculation error reaches a set threshold or the calculation period reaches a set time limit, so that the transformation matrix is continuously and iteratively updated and the calculation accuracy of the global pose meets the requirement.
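As an illustrative sketch, the global-measurement term of the residual function, $\sum_m \lVert G_m \ominus P_m\rVert^2$, can be minimized in closed form over paired positions with the Kabsch/SVD method; a full implementation would also weight the terms by their covariances and include the relative-pose term.

```python
import numpy as np

def solve_transform(P_xyz, G_xyz):
    """Least-squares rigid transform T (4x4) minimising
    sum_m || G_m - T * P_m ||^2 over paired positions.

    P_xyz, G_xyz : (M, 3) spatial-pose and global-measurement positions
    sampled at the same moments."""
    p_mean, g_mean = P_xyz.mean(axis=0), G_xyz.mean(axis=0)
    H = (P_xyz - p_mean).T @ (G_xyz - g_mean)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = g_mean - R @ p_mean
    return T
```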
Step 4, converting the spatial pose into the real-time global pose during the movement of the robot to be positioned using the transformation matrix.
And (3) converting the space pose obtained in the step (2) into a real-time global pose by adopting the transformation matrix obtained in the step (3).
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.