CN114396943A - Fusion positioning method and terminal - Google Patents

Fusion positioning method and terminal

Info

Publication number
CN114396943A
Authority
CN
China
Prior art keywords
positioning data, error, equation, information, attitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210039694.4A
Other languages
Chinese (zh)
Other versions
CN114396943B (en)
Inventor
刘宝升
李强
李洪斌
刘超
赵峰
毛舒乐
赵建伟
王佩光
赵光
周航帆
傅宁
马胜
马名阳
胡鑫
张永欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Information and Telecommunication Group Co Ltd
State Grid Corp of China SGCC
State Grid Siji Location Services Co Ltd
Original Assignee
State Grid Information and Telecommunication Group Co Ltd
State Grid Corp of China SGCC
State Grid Siji Location Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Group Co Ltd, State Grid Corp of China SGCC, State Grid Siji Location Services Co Ltd
Priority to CN202210039694.4A
Publication of CN114396943A
Application granted
Publication of CN114396943B
Active
Anticipated expiration


Abstract

The invention discloses a fusion positioning method and terminal: environment information from a monocular camera is received and the position and attitude of the camera are estimated from it; first positioning data from an inertial measurement unit are received and optimized according to the camera's position and attitude; second positioning data from an inertial navigation system are received and used to assist the global navigation satellite system in multi-frequency, multi-mode real-time differential positioning, yielding third positioning data; a measurement equation and a state equation are established from the first and third positioning data; and an extended Kalman filter estimates the state error from the first positioning data, the measurement equation and the state equation, which is then used to compensate the first positioning data and obtain the final positioning data. The method combines the long-term absolute positioning of the global navigation satellite system with the accurate short-term dead reckoning of the IMU and monocular vision, improving positioning accuracy.

Description

Translated from Chinese
A fusion positioning method and terminal

Technical Field

The present invention relates to the field of positioning technology, and in particular to a fusion positioning method and terminal.

Background

Constrained by the complex environment of work sites, the continuity and reliability of GNSS positioning face serious challenges: satellite signals are frequently interrupted by occlusion and electromagnetic interference, and frequent re-initialization limits the precision of the float ambiguities, producing large positioning errors. To improve positioning performance in complex environments, integrated satellite/inertial navigation systems are widely used to provide continuous position, velocity and attitude measurements. However, the main drawback of a MEMS-IMU is that, without effective control, its navigation error diverges rapidly within a short time, making the positioning insufficiently accurate.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a fusion positioning method and terminal that improve positioning accuracy.

To solve the above technical problem, the technical solution adopted by the present invention is:

A fusion positioning method, comprising the steps of:

S1. Acquire environment information from a monocular camera, and estimate the position and attitude of the monocular camera from the environment information;

S2. Acquire first positioning data from an inertial measurement unit, and optimize the first positioning data according to the position and attitude of the monocular camera;

S3. Acquire second positioning data from an inertial navigation system, and use the second positioning data to assist the global navigation satellite system in multi-frequency, multi-mode real-time differential positioning to obtain third positioning data;

S4. Establish a measurement equation and a state equation from the first positioning data and the third positioning data;

S5. Estimate the state error with an extended Kalman filter based on the first positioning data, the measurement equation and the state equation, and compensate the first positioning data with the state error to obtain the final positioning data.

To solve the above technical problem, another technical solution adopted by the present invention is:

A fusion positioning terminal, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above solution when executing the computer program.

The beneficial effects of the present invention are as follows: the technical solution fuses the data obtained from multi-frequency, multi-mode real-time differential positioning, the inertial measurement unit and the monocular camera with an extended Kalman filter, combining the long-term absolute positioning of the global navigation satellite system with the accurate short-term dead reckoning of the IMU and monocular vision to obtain the best estimate of the position state. The IMU recovers the metric scale of monocular vision and improves motion tracking, while the visual measurements constrain the error drift of the IMU, yielding more accurate and reliable position and attitude and therefore higher positioning accuracy.

Brief Description of the Drawings

Fig. 1 is a flowchart of a fusion positioning method according to an embodiment of the present invention;

Fig. 2 is a structural diagram of a fusion positioning terminal according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of the data processing of a fusion positioning method according to an embodiment of the present invention.

Description of reference numerals:

1. Fusion positioning terminal; 2. Processor; 3. Memory.

Detailed Description of the Embodiments

To explain the technical content, objectives and effects of the present invention in detail, the following description is given with reference to the embodiments and the accompanying drawings.

Referring to Fig. 1 and Fig. 2, a fusion positioning method comprises the steps of:

S1. Acquire environment information from a monocular camera, and estimate the position and attitude of the monocular camera from the environment information;

S2. Acquire first positioning data from an inertial measurement unit, and optimize the first positioning data according to the position and attitude of the monocular camera;

S3. Acquire second positioning data from an inertial navigation system, and use the second positioning data to assist the global navigation satellite system in multi-frequency, multi-mode real-time differential positioning to obtain third positioning data;

S4. Establish a measurement equation and a state equation from the first positioning data and the third positioning data;

S5. Estimate the state error with an extended Kalman filter based on the first positioning data, the measurement equation and the state equation, and compensate the first positioning data with the state error to obtain the final positioning data.

As can be seen from the above description, the beneficial effects of the present invention are: the technical solution fuses the data obtained from multi-frequency, multi-mode real-time differential positioning, the inertial measurement unit and the monocular camera with an extended Kalman filter, combining the long-term absolute positioning of the global navigation satellite system with the accurate short-term dead reckoning of the IMU and monocular vision to obtain the best estimate of the position state. The IMU recovers the metric scale of monocular vision and improves motion tracking, while the visual measurements constrain the error drift of the IMU, yielding more accurate and reliable position and attitude and therefore higher positioning accuracy.

Further, step S1 specifically comprises the steps of:

S11. Receive the environment information of the monocular camera and preprocess it;

S12. Extract Harris corner features and SIFT scale-invariant feature points from a sequence of consecutive images in the environment information, and track these feature points across adjacent images with the LK sparse optical flow algorithm;

S13. Estimate landmark points and the relative motion of the monocular camera from the feature points in the adjacent images, and estimate the position and attitude of the monocular camera by combining the landmark points and the camera's relative motion.

As can be seen from the above description, after the environment information captured by the monocular camera is obtained, the position and attitude of the camera can be obtained accurately by tracking the feature points across adjacent image frames.

Further, the method also includes, between step S12 and step S13, the step:

S121. Check the correctness of the feature points with an affine test and judge whether the number of feature points is sufficient; remove feature points that fail the correctness check, and add new feature points when the number is insufficient.

As can be seen from the above description, after extracting the feature points the present invention checks their correctness with an affine test and removes erroneous feature points, so that the subsequently determined position and attitude of the monocular camera are more accurate.

Further, the first positioning data includes first position information, first velocity information and first attitude information;

Step S2 specifically comprises the steps of:

S21. Receive the three-axis angular velocity, the acceleration and the gravity vector at rest from the inertial measurement unit; integrate the angular velocity, acceleration and gravity vector over a preset time to obtain the first position information and the first velocity information, and obtain the first attitude information from the quaternion update;

S22. Establish a position error equation and an attitude error equation from the first position information, the first velocity information, the first attitude information, and the position and attitude estimated by the monocular camera within the preset time;

S23. Solve the position error equation and the attitude error equation by optimization, and compensate the first position information and the first attitude information according to the result.

As can be seen from the above description, the first position and attitude information computed from the data of the inertial measurement unit is optimized and compensated with the position and attitude estimated by the monocular camera; that is, the visual measurements constrain the error drift of the inertial measurement unit, yielding higher-precision and more reliable position and attitude data.

Further, the real-time differential positioning solution of the global navigation satellite system also includes the step of: obtaining a first pseudorange double-difference model and a first carrier-phase double-difference model between the receiver and the satellites from the collected pseudorange, carrier-phase and Doppler observations, and resolving the integer ambiguities;

Step S3 specifically comprises the steps of:

S31. Receive the second positioning data of the inertial navigation system, and judge whether the pseudorange, carrier-phase and Doppler observations collected by the global navigation satellite system are missing in the integer ambiguity resolution because of signal interruption; if so, linearize the first pseudorange double-difference model and the first carrier-phase double-difference model at the interruption using the second positioning data, estimate the optimal float ambiguity solution by least squares combined with the historical ambiguities and covariances, and compute the integer ambiguity solution with the LAMBDA method;

S32. Continue the multi-frequency, multi-mode real-time differential positioning solution with the integer ambiguity solution, and finally obtain the third positioning data.

As can be seen from the above description, when the satellite signal of the global navigation satellite system is interrupted or lost, this scheme performs the linearization using the positioning data of the inertial navigation system and computes the integer ambiguity solution with least squares and the LAMBDA method, compensating for the GNSS's susceptibility to environment-induced loss of lock and improving positioning accuracy.

Further, the method also includes, between steps S31 and S32, the steps:

S311. Check the correctness of the integer ambiguity solution: using the historical integer ambiguity solutions saved while the GNSS signal was normal, perform a statistical analysis of the hypothesis-test statistic and compute the ambiguity threshold at a preset confidence level;

S312. Judge whether the integer ambiguity solution falls within the range of the ambiguity threshold; if so, the check passes; otherwise the check fails and the integer ambiguity solution is computed with pseudorange measurements.

As can be seen from the above description, the present invention also uses hypothesis testing to decide whether the ambiguity solution is acceptable, improving the reliability of the ambiguity resolution.

Further, the first positioning data includes first position information, first velocity information and first attitude information;

Step S4 specifically comprises the steps of:

S41. Construct a second pseudorange double-difference model from the first position information; compute the first double-difference pseudorange, the second double-difference pseudorange, the first double-difference pseudorange rate and the second double-difference pseudorange rate from the first and second pseudorange double-difference models respectively; take the error between the first and second double-difference pseudoranges and the error between the first and second double-difference pseudorange rates as the observations for data fusion, and construct the measurement equation;

S42. Establish the state equation from the first position information, the first velocity information and the first attitude information; the state equation includes a position equation, a velocity equation and an attitude equation.

Further, the state error includes a velocity error, an attitude error and a position error;

Step S5 specifically comprises the steps of:

S51. Estimate the velocity error, the attitude error and the position error with an extended Kalman filter, based on the first position information, the first velocity information, the first attitude information, the measurement equation, the position equation, the velocity equation and the attitude equation;

S52. Compensate and optimize the first position information, the first velocity information and the first attitude information with the position error, the velocity error and the attitude error respectively, to obtain the final positioning data.

As can be seen from the above description, the technical solution of the present application estimates each error of the state vector with an extended Kalman filter, based on the inertial measurement unit positioning data optimized by monocular vision, the positioning data of the global navigation satellite system, and the measurement and state equations, and compensates the position, velocity and attitude obtained from the inertial measurement unit, thereby obtaining higher-precision information and improving positioning accuracy.

Further, after step S5 the method also includes the step:

S6. Update the first positioning data of the inertial measurement unit with the final positioning data.

As can be seen from the above description, the finally obtained high-precision position, velocity and attitude information is used to update the initial values of the inertial measurement unit's dead reckoning, reducing the influence of IMU drift.

Referring to Fig. 2, a fusion positioning terminal comprises a processor, a memory, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the above solution when executing the computer program.

The fusion positioning method and terminal of the present invention are suitable for scenarios that require positioning but where, because the work site is complex, the satellite signals of the global navigation satellite system are frequently interrupted by occlusion or electromagnetic interference, making the positioning data insufficiently accurate.

Referring to Fig. 1 and Fig. 3, a first embodiment of the present invention is:

A fusion positioning method, comprising the steps of:

S1. Acquire environment information from a monocular camera, and estimate the position and attitude of the monocular camera from the environment information;

Step S1 specifically comprises the steps of:

S11. Receive the environment information of the monocular camera and preprocess it;

S12. Extract Harris corner features and SIFT scale-invariant feature points from a sequence of consecutive images in the environment information, and track these feature points across adjacent images with the LK sparse optical flow algorithm;

The method also includes, between step S12 and step S13, the step:

S121. Check the correctness of the feature points with an affine test and judge whether the number of feature points is sufficient; remove feature points that fail the correctness check, and add new feature points when the number is insufficient;

S13. Estimate landmark points and the relative motion of the monocular camera from the feature points in the adjacent images, and estimate the position and attitude of the monocular camera by combining the landmark points and the camera's relative motion.

In this embodiment, a monocular camera is used to perceive the environment. The information captured by the camera is read and preprocessed, including filtering, rectification and grayscale conversion. Harris corner features and SIFT scale-invariant features are extracted from a sequence of adjacent frames, the features are tracked across adjacent frames with the LK sparse optical flow algorithm, and an affine test is used to check the correctness of the feature points and remove those that are tracked incorrectly; new feature points are added when their number is insufficient. The landmark points and the relative motion of the monocular camera are then estimated from the feature points in adjacent images, and the position and attitude of the camera are estimated by combining the landmark points and the camera's relative motion, yielding the visual odometry. The visual odometry includes position, attitude, landmark coordinates and relative motion. A sketch of this feature extraction and tracking step is given below.
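The patent does not give an implementation of this step; the following is a minimal sketch of Harris-corner extraction and LK optical-flow tracking with replenishment, using OpenCV. The forward-backward check stands in for the affine test, the SIFT features are omitted, and all parameter values and function names are illustrative assumptions.

```python
# Minimal sketch of the feature extraction and tracking step described above,
# using OpenCV's Harris-based corner detector and pyramidal LK optical flow.
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts, min_features=100):
    """Track feature points from prev_gray to curr_gray; replenish if too few.
    prev_pts: float32 array of shape (N, 1, 2)."""
    # Forward LK tracking.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    # Backward check as a simple stand-in for the affine consistency test:
    # points that do not track back to their origin are treated as outliers.
    back_pts, back_status, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, curr_pts, None)
    fb_err = np.linalg.norm(prev_pts - back_pts, axis=2).ravel()
    good = (status.ravel() == 1) & (back_status.ravel() == 1) & (fb_err < 1.0)
    curr_pts = curr_pts[good]

    # Add new Harris corners when the number of surviving features is insufficient.
    if len(curr_pts) < min_features:
        new_pts = cv2.goodFeaturesToTrack(curr_gray, maxCorners=min_features,
                                          qualityLevel=0.01, minDistance=10,
                                          useHarrisDetector=True, k=0.04)
        if new_pts is not None:
            curr_pts = np.vstack([curr_pts, new_pts.astype(np.float32)])
    return curr_pts
```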

S2. Acquire first positioning data from the inertial measurement unit, and optimize the first positioning data according to the position and attitude of the monocular camera;

The first positioning data includes first position information, first velocity information and first attitude information;

Step S2 specifically comprises the steps of:

S21. Receive the three-axis angular velocity, the acceleration and the gravity vector at rest from the inertial measurement unit; integrate the angular velocity, acceleration and gravity vector over a preset time to obtain the first position information and the first velocity information, and obtain the first attitude information from the quaternion update;

S22. Establish a position error equation and an attitude error equation from the first position information, the first velocity information, the first attitude information, and the position and attitude estimated by the monocular camera within the preset time;

S23. Solve the position error equation and the attitude error equation by optimization, and compensate the first position information and the first attitude information according to the result.

In this embodiment, the MEMS-IMU (inertial measurement unit) measures the three-axis angular velocity, the acceleration and the gravity vector at rest; integrating these over a short time yields position and velocity, and the attitude is obtained from the quaternion update. Between consecutive image frames, the IMU data give the position, velocity and rotation relative to inertial space; an error equation is established by combining them with the position and attitude estimated by the monocular camera, and solving this error equation by optimization compensates the IMU integration terms, yielding the optimized first position information and first attitude information. A sketch of the IMU propagation is given below.
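A minimal sketch of the IMU propagation described above (quaternion attitude update plus velocity and position integration), assuming simple Euler integration and Hamilton quaternion conventions; the symbol names and the gravity value are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the IMU propagation: integrate gyroscope and accelerometer
# samples over a short interval and update the attitude quaternion.
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """Rotation matrix (body -> world) for a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def propagate(p, v, q, gyro, accel, g_w, dt):
    """One IMU step: position p, velocity v, quaternion q in the world frame.
    gyro, accel: np.array(3); g_w: world-frame gravity, e.g. [0, 0, -9.81] in ENU (assumption)."""
    # Quaternion update with the measured angular rate (small-angle approximation).
    dq = np.concatenate([[1.0], 0.5 * gyro * dt])
    q = quat_mul(q, dq)
    q /= np.linalg.norm(q)
    # Rotate the specific force into the world frame and remove gravity.
    a_w = quat_to_rot(q) @ accel + g_w
    p = p + v * dt + 0.5 * a_w * dt * dt
    v = v + a_w * dt
    return p, v, q
```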

The quaternion update equation is:

dq^W_B(t)/dt = (1/2) Ω_R(ω̃) q^W_B(t)

where Ω_R(ω̃) is the right-multiplication matrix form of the gyroscope output angular velocity ω̃, and q^W_B(t) denotes the IMU rotation quaternion in the world coordinate system (superscript W) at time t, the subscript B denoting the IMU (body) coordinate system.

The position, velocity and rotation corresponding to two adjacent images are estimated from the data collected by the monocular camera and the data collected by the IMU:

[formula image]

where the quantities appearing in the formula are, in order: the position of the IMU at image frame k in the world coordinate system; the velocity of the IMU at frame k in the world coordinate system; the time step Δt_k of frame k; the acceleration measured by the IMU; the accelerometer bias; the gravitational acceleration g_w; the angular velocity measured by the IMU; the gyroscope bias; and the quaternion of the IMU at frame k.

That is, the IMU data corresponding to the interval between two adjacent images are pre-integrated:

[formula image]

where the first quantities denote the relative displacement and relative velocity propagated from image frame k to frame k+1, and the remaining terms are the pre-integration terms, reflecting the relative displacement, velocity and rotation between the two image frames:

[formula images]

where the operator in the formula denotes the left-multiplication matrix form of the quaternion product. A sketch of this pre-integration step is given below.
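A minimal sketch of the pre-integration idea: the IMU samples between two image frames are accumulated into relative displacement, velocity and rotation terms that do not depend on the still-unknown state at frame k. Bias handling and noise propagation are omitted, and the use of SciPy's Rotation class is an implementation choice, not something specified in the patent.

```python
# Minimal sketch of IMU pre-integration between two image frames.
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(imu_samples, dt):
    """imu_samples: iterable of (gyro, accel) pairs in the body frame; dt: sample period."""
    alpha = np.zeros(3)            # relative position term
    beta = np.zeros(3)             # relative velocity term
    gamma = Rotation.identity()    # relative rotation, body frame at frame k -> current body frame
    for gyro, accel in imu_samples:
        a = gamma.apply(accel)     # specific force expressed in the frame-k body frame
        alpha += beta * dt + 0.5 * a * dt * dt
        beta += a * dt
        gamma = gamma * Rotation.from_rotvec(np.asarray(gyro) * dt)
    return alpha, beta, gamma
```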

An equation is established from the pre-integration terms:

[formula image]

where I_{3×1} denotes the 3-by-1 unit matrix, and the remaining symbols denote the rotation matrix from the camera frame to the IMU frame and the displacement computed at camera frame k+1.

This is expressed as:

[formula image]

Over multiple image frames it can be expressed as solving the least-squares problem:

[formula image]

where argmin denotes taking the minimum and Γ denotes all image frames in the sliding window.

In this way the motion velocity of each image frame, the gravity vector and the visual scale factor S can be obtained. The attitude angles can then be computed from the angles between the gravity vector and the sensor axes.

The IMU dead-reckoning part is as follows:

x_{k+1} = A x_k

where x_k is the position, velocity and acceleration at time k:

x_k = [L B H vE vN vU ae an au];

the subscripts E and e denote the east direction of the ENU coordinate system, and L, B, H, vE, vN, vU, ae, an, au are, respectively, the longitude, latitude, height, east velocity, north velocity, up velocity, east acceleration, north acceleration and up acceleration.

The value of A is:

[formula image]

where R_M is the radius of curvature in the prime vertical, R_N is the radius of curvature in the meridian, and Δt is the interval between the previous time and the current time. A simplified sketch of this propagation step follows.
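A minimal sketch of the dead-reckoning step x_{k+1} = A x_k, assuming a constant-acceleration model in a local ENU frame; the patent's A additionally converts ENU displacements into latitude and longitude increments through R_M and R_N, which is omitted here, so treating position directly in metres is an assumption.

```python
# Minimal sketch of the dead-reckoning transition x_{k+1} = A x_k.
import numpy as np

def dead_reckon_step(x, dt):
    """x = [pE, pN, pU, vE, vN, vU, aE, aN, aU]; returns A @ x for one step dt."""
    A = np.eye(9)
    A[0:3, 3:6] = np.eye(3) * dt              # position integrates velocity
    A[0:3, 6:9] = np.eye(3) * (0.5 * dt**2)   # ... and acceleration
    A[3:6, 6:9] = np.eye(3) * dt              # velocity integrates acceleration
    return A @ np.asarray(x, dtype=float)
```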

The attitude is computed with the quaternion-update method, and the attitude angles are then obtained from the quaternion:

[formula image]

where γ, θ and ψ are the pitch, roll and heading attitude angles, respectively, and the quaternion is q = (q0, q1, q2, q3).
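Since the conversion formula itself is only given as an image, the following is a standard quaternion-to-Euler conversion as a stand-in; the ZYX (heading-pitch-roll) convention is an assumption, as the patent's exact axis convention is not reproduced.

```python
# Standard quaternion-to-attitude conversion (ZYX convention) as a stand-in
# for the formula image above.
import numpy as np

def quat_to_attitude(q0, q1, q2, q3):
    """Return (pitch, roll, heading) in radians for a unit quaternion (q0, q1, q2, q3)."""
    pitch = np.arcsin(np.clip(2.0 * (q0 * q2 - q3 * q1), -1.0, 1.0))
    roll = np.arctan2(2.0 * (q0 * q1 + q2 * q3), 1.0 - 2.0 * (q1 * q1 + q2 * q2))
    heading = np.arctan2(2.0 * (q0 * q3 + q1 * q2), 1.0 - 2.0 * (q2 * q2 + q3 * q3))
    return pitch, roll, heading
```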

S3. Acquire second positioning data from the inertial navigation system, and use the second positioning data to assist the global navigation satellite system in multi-frequency, multi-mode real-time differential positioning to obtain third positioning data;

The real-time differential positioning solution of the global navigation satellite system also includes the step of: obtaining a first pseudorange double-difference model and a first carrier-phase double-difference model between the receiver and the satellites from the collected pseudorange, carrier-phase and Doppler observations, and resolving the integer ambiguities;

Step S3 specifically comprises the steps of:

S31. Receive the second positioning data of the inertial navigation system, and judge whether the pseudorange, carrier-phase and Doppler observations collected by the global navigation satellite system are missing in the integer ambiguity resolution because of signal interruption; if so, linearize the first pseudorange double-difference model and the first carrier-phase double-difference model at the interruption using the second positioning data, estimate the optimal float ambiguity solution by least squares combined with the historical ambiguities and covariances, and compute the integer ambiguity solution with the LAMBDA method;

The method also includes, between steps S31 and S32, the steps:

S311. Check the correctness of the integer ambiguity solution: using the historical integer ambiguity solutions saved while the GNSS signal was normal, perform a statistical analysis of the hypothesis-test statistic and compute the ambiguity threshold at a preset confidence level;

S312. Judge whether the integer ambiguity solution falls within the range of the ambiguity threshold; if so, the check passes; otherwise the check fails and the integer ambiguity solution is computed with pseudorange measurements;

S32. Continue the multi-frequency, multi-mode real-time differential positioning solution with the integer ambiguity solution, and finally obtain the third positioning data.

In this embodiment, the GNSS (global navigation satellite system) solution uses the pseudorange, carrier-phase and Doppler observations as raw data to obtain the pseudorange and carrier-phase double-difference models between the receiver and the satellites. In the integer ambiguity resolution, because GNSS signals are easily disturbed while the INS (inertial navigation system) is not, the position output of the INS is used to help the GNSS fix the integer ambiguities: when the GNSS information is interrupted, the pseudorange and carrier-phase double-difference models are linearized at the position propagated by the INS, the optimal float ambiguity solution is estimated by least squares combined with the historical ambiguities and covariances, and the integer ambiguity vector is then computed with the LAMBDA method and verified for correctness.

Specifically, when a GNSS signal interruption occurs and after the system is initialized, the INS provides high-rate navigation output, including position, velocity and attitude (PVA). Since the base station of the multi-frequency GNSS is fixed and its location is known, after the base station and the rover both receive the multi-frequency GNSS signals and the rover receives the base-station information, the RTK (real-time kinematic) positioning method can determine the rover position. According to the RTK principle, provided the base-station carrier-phase observations received at the rover are available, double-difference pseudorange and carrier-phase observations are formed. A least-squares adjustment yields the double-difference float ambiguities and their covariances, giving a position constraint. The LAMBDA method is then used for ambiguity resolution, and a hypothesis test verifies the correctness of the ambiguity solution: the ambiguities solved by LAMBDA while the GNSS signal is normal are saved, a statistical analysis of the usual hypothesis-test statistic gives a reasonable threshold at a chosen confidence level, and comparison with this threshold decides whether the searched ambiguity solution is accepted. If the correctness check passes, the current data are used in the subsequent steps; otherwise, pseudorange measurements are used. A simplified sketch of this float-ambiguity estimation and validation flow is given below.
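A simplified sketch of the float-ambiguity estimation and validation flow described above. The float ambiguities and their covariance come from a weighted least-squares adjustment; the integer search is shown as plain rounding with a ratio test, a deliberate simplification of the LAMBDA search that the patent actually uses, and all variable names are illustrative assumptions.

```python
# Simplified sketch of float-ambiguity estimation and acceptance testing.
import numpy as np

def resolve_ambiguities(A, y, W, ratio_threshold=3.0):
    """A: design matrix mapping float ambiguities to DD residuals y; W: weight matrix."""
    # Weighted least-squares float solution and its covariance.
    N = A.T @ W @ A
    Q_float = np.linalg.inv(N)
    a_float = Q_float @ (A.T @ W @ y)

    # Stand-in for the LAMBDA search: nearest integers plus a second-best candidate.
    a_fixed = np.round(a_float)
    resid_best = (a_float - a_fixed) @ np.linalg.solve(Q_float, a_float - a_fixed)
    second = a_fixed.copy()
    worst = np.argmax(np.abs(a_float - a_fixed))     # flip the least certain component
    second[worst] += np.sign(a_float[worst] - a_fixed[worst]) or 1.0
    resid_second = (a_float - second) @ np.linalg.solve(Q_float, a_float - second)

    # Acceptance test, analogous to the patent's hypothesis test against a threshold.
    ratio = resid_second / max(resid_best, 1e-12)
    return (a_fixed, True) if ratio > ratio_threshold else (a_float, False)
```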

S4. Establish a measurement equation and a state equation from the first positioning data and the third positioning data;

Step S4 specifically comprises the steps of:

S41. Construct a second pseudorange double-difference model from the first position information; compute the first double-difference pseudorange, the second double-difference pseudorange, the first double-difference pseudorange rate and the second double-difference pseudorange rate from the first and second pseudorange double-difference models respectively; take the error between the first and second double-difference pseudoranges and the error between the first and second double-difference pseudorange rates as the observations for data fusion, and construct the measurement equation;

S42. Establish the state equation from the first position information, the first velocity information and the first attitude information; the state equation includes a position equation, a velocity equation and an attitude equation.

In this embodiment, the GNSS and the IMU are tightly coupled to establish the measurement equation and the state equation. A pseudorange double-difference model for the inertial navigation can be constructed from the position propagated by the IMU; the GNSS data and the IMU data yield the double-difference pseudoranges and double-difference pseudorange rates respectively, and the error between them serves as the observation for data fusion, from which the measurement equation is constructed, as sketched below.
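A minimal sketch of constructing the tight-coupling observation: the GNSS double-difference pseudorange is compared with the double-difference geometric range predicted from the IMU position, and the difference is the fusion measurement. The input layout and names are illustrative assumptions.

```python
# Minimal sketch of a double-difference pseudorange residual for tight coupling.
import numpy as np

def dd_pseudorange_residual(rover_pos_imu, base_pos, sat_ref, sat_j, pr_rover, pr_base):
    """pr_*: dicts of pseudoranges keyed by 'ref' and 'j'; sat_*: ECEF satellite positions."""
    def rng(rx, sv):
        return np.linalg.norm(np.asarray(sv) - np.asarray(rx))

    # GNSS-measured double difference: between receivers, then between satellites.
    dd_measured = (pr_rover["j"] - pr_base["j"]) - (pr_rover["ref"] - pr_base["ref"])

    # Double difference predicted from the IMU-derived rover position.
    dd_predicted = ((rng(rover_pos_imu, sat_j) - rng(base_pos, sat_j))
                    - (rng(rover_pos_imu, sat_ref) - rng(base_pos, sat_ref)))

    # The residual is the observation fed to the EKF measurement equation.
    return dd_measured - dd_predicted
```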

S5. Estimate the state error with an extended Kalman filter based on the first positioning data, the measurement equation and the state equation, and compensate the first positioning data with the state error to obtain the final positioning data.

The state error includes a velocity error, an attitude error and a position error;

Step S5 specifically comprises the steps of:

S51. Estimate the velocity error, the attitude error and the position error with an extended Kalman filter, based on the first position information, the first velocity information, the first attitude information, the measurement equation, the position equation, the velocity equation and the attitude equation;

S52. Compensate and optimize the first position information, the first velocity information and the first attitude information with the position error, the velocity error and the attitude error respectively, to obtain the final positioning data.

After step S5 the method also includes the step:

S6. Update the first positioning data of the inertial measurement unit with the final positioning data.

In this embodiment, the state errors are the attitude error, velocity error, position error, gyroscope error, accelerometer error, clock-bias equivalent distance error and clock-drift equivalent pseudorange-rate error. Based on the IMU data optimized by monocular vision and on the measurement and state equations of the GNSS and the IMU, the extended Kalman filter estimates each error of the state vector, and the position, velocity and attitude are finally compensated to obtain higher-precision information. The obtained high-precision position, velocity and attitude are also fed back to update the initial values of the IMU dead reckoning, reducing the influence of IMU drift. A sketch of this compensation and feedback step is given below.
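A minimal sketch of the compensation and feedback step: the error states estimated by the filter are removed from the IMU-derived solution and the corrected values are fed back as the new starting point for dead reckoning. The data layout and the simple additive attitude correction are assumptions.

```python
# Minimal sketch of error compensation and feedback into the IMU mechanization.
import numpy as np

def compensate_and_feedback(imu_nav, err):
    """imu_nav/err: dicts with 'pos', 'vel', 'att' as numpy arrays (att = small misalignment angles)."""
    corrected = {key: np.asarray(imu_nav[key]) - np.asarray(err[key])
                 for key in ("pos", "vel", "att")}
    imu_nav.update(corrected)   # feed the corrected state back as the new dead-reckoning start
    return corrected
```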

The state equation comprises the IMU error state equation and the GNSS error state equation. Taking as an example the state equation formed by the attitude errors, velocity errors, position errors, and gyroscope and accelerometer errors:

[formula image]

where W_I is the random noise and X_I is the state:

[formula image]

whose components are, in order: the longitude error, latitude error, height error, east velocity error, north velocity error, up velocity error, E-axis misalignment angle error, N-axis misalignment angle error, U-axis misalignment angle error, x-axis acceleration bias, y-axis acceleration bias, z-axis acceleration bias, x-axis angular velocity bias, y-axis angular velocity bias and z-axis angular velocity bias.

G_I is the noise transformation matrix:

[formula image]

where zero(3,3) denotes the 3×3 zero matrix and the remaining block is the rotation matrix from the body (carrier) coordinate system to the world coordinate system.

F_I is the state transition matrix:

[formula images defining the blocks of F_I]

where R_M is the radius of curvature in the prime vertical, R_N is the radius of curvature in the meridian, w_ie is the Earth rotation rate in the ECEF frame, and f_e, f_n, f_u are the accelerations in the e, n and u directions plus their errors.

The equations for the clock-bias equivalent distance error and the clock-drift equivalent pseudorange-rate error are as follows:

[formula image]

where:

X_G = [δt_u δt_ru]^T

W_G = [w_tu w_tru]^T

[formula images]

Here t_u is the clock-bias equivalent distance, t_ru is the clock-drift equivalent pseudorange rate, β_tru is the error correlation time, and w_tu, w_tru are random white noise. A sketch of this clock error model is given below.
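Because the matrix entries of the clock model are given only as an image, the following discrete-time sketch assumes the common textbook form: the clock-bias equivalent distance is driven by the clock-drift equivalent pseudorange rate, and the drift is a first-order Gauss-Markov process with correlation parameter β_tru.

```python
# Discrete-time sketch of the two-state receiver clock error model (assumed form).
def propagate_clock_states(dt_u, dt_ru, beta_tru, dt, w_tu=0.0, w_tru=0.0):
    """One time step of the clock error states (bias-equivalent distance, drift-equivalent rate)."""
    dt_u_next = dt_u + dt_ru * dt + w_tu
    dt_ru_next = (1.0 - beta_tru * dt) * dt_ru + w_tru
    return dt_u_next, dt_ru_next
```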

The measurement equations, i.e. the observation equations, comprise the IMU error observation equation and the pseudorange/pseudorange-rate error observation equation. The IMU observation equation is:

Z_I = H_I X_I + V_I

where Z_I is the observation vector, H_I is the observation matrix and V_I is the observation error:

[formula images]

Here eye(3,3) denotes the 3×3 identity matrix and zero(3,3) denotes the 3×3 zero matrix.

The pseudorange and pseudorange-rate error observation equation is:

Z_G = H_G X_G + U_G + V_G

where:

[formula images]

Here δx is the difference between the carrier position propagated by the IMU and the satellite position obtained from the satellite ephemeris, the corresponding rate term is the difference between the carrier velocity propagated by the IMU and the satellite velocity obtained from the ephemeris, and V_G is the pseudorange and pseudorange-rate observation error.

Combining the above state and observation equations, the extended Kalman filter estimation proceeds as follows:

(1) Linearize the system model:

x_k = f_{k-1}(x_{k-1}, w_{k-1})

where f_{k-1} is the nonlinear function, w_{k-1} is the process noise, x_k is the true value at time k, x̂_{k-1|k-1} is the state estimate at time k-1, and ŵ_{k-1} is the prediction error value at time k-1.

First-order Taylor expansion:

x_k ≈ f_{k-1}(x̂_{k-1|k-1}, ŵ_{k-1}) + F_{k-1}(x_{k-1} - x̂_{k-1|k-1}) + G_{k-1}(w_{k-1} - ŵ_{k-1})

where F_{k-1} = ∂f/∂x is the partial derivative of the nonlinear function with respect to the state x, and G_{k-1} = ∂f/∂w is its partial derivative with respect to the error w.

(2) One-step prediction of time k from the state value at time k-1:

x̂_{k|k-1} = f_{k-1}(x̂_{k-1|k-1}, ŵ_{k-1})

The error produced by the one-step state prediction is:

x̃_{k|k-1} = x_k - x̂_{k|k-1}

The covariance matrix of the state prediction error is:

P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T

where P_{k-1|k-1} is the state covariance matrix at time k-1, Q_{k-1} is the error covariance at time k-1, and the superscript T denotes the transpose.

(3) The linearized measurement equation at time k is:

z_k ≈ h_k(x̂_{k|k-1}, v̂_k) + H_k(x_k - x̂_{k|k-1}) + M_k(v_k - v̂_k)

where z_k is the observation, h_k is the observation function, v_k is the observation noise, H_k = ∂h/∂x is the partial derivative of the observation function with respect to the state x, and M_k = ∂h/∂v is its partial derivative with respect to the noise v.

(4) One-step prediction of the measurement at time k:

ẑ_{k|k-1} = h_k(x̂_{k|k-1}, v̂_k)

The measurement prediction error is:

z̃_{k|k-1} = z_k - ẑ_{k|k-1}

The cross-covariance matrix of the state prediction error and the measurement prediction error is:

P_{xz,k|k-1} = P_{k|k-1} H_k^T

(5) The state filter update formulas are:

State update:

x̂_{k|k} = x̂_{k|k-1} + K_k (z_k - ẑ_{k|k-1})

Covariance update:

P_{k|k} = (I - K_k H_k) P_{k|k-1}

where I denotes the identity matrix.

The Kalman gain K at time k is:

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + M_k R_k M_k^T)^{-1}
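A minimal numpy sketch of the extended Kalman filter cycle in steps (1) to (5) above; f and h are the nonlinear state and observation functions, and F, G, H, M are their Jacobians evaluated at the current estimate. This is a generic EKF skeleton, not the patent's specific state definition.

```python
# Generic EKF predict/update step corresponding to steps (1)-(5) above.
import numpy as np

def ekf_step(x, P, z, f, F, G, Q, h, H, M, R):
    # (1)-(2) Prediction: propagate the state and its covariance.
    x_pred = f(x)
    P_pred = F @ P @ F.T + G @ Q @ G.T

    # (3)-(4) Measurement prediction and innovation covariance.
    z_pred = h(x_pred)
    S = H @ P_pred @ H.T + M @ R @ M.T

    # Kalman gain.
    K = P_pred @ H.T @ np.linalg.inv(S)

    # (5) State and covariance update.
    x_new = x_pred + K @ (z - z_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```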

Referring to Fig. 2, a second embodiment of the present invention is:

A fusion positioning terminal 1, comprising a processor 2, a memory 3, and a computer program stored in the memory 3 and executable on the processor 2; the processor 2 implements the steps of the first embodiment when executing the computer program.

The main principle of the present invention is to fuse the data obtained from multi-frequency, multi-mode real-time differential positioning, the inertial measurement unit and the monocular camera with an extended Kalman filter, so as to improve the positioning accuracy.

In summary, the fusion positioning method and terminal provided by the present invention fuse the data obtained from multi-frequency, multi-mode real-time differential positioning, the inertial measurement unit and the monocular camera with an extended Kalman filter, combining the long-term absolute positioning of the global navigation satellite system with the accurate short-term dead reckoning of the IMU and monocular vision to obtain the best estimate of the position state. The IMU recovers the metric scale of monocular vision and improves motion tracking, while the visual measurements constrain the error drift of the IMU, yielding more accurate and reliable position and attitude and therefore higher positioning accuracy.

The above descriptions are only embodiments of the present invention and do not limit the patent scope of the present invention. Any equivalent transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

Translated from Chinese
1. A fusion positioning method, characterized in that it comprises the steps of:
S1. receiving environmental information from a monocular camera, and estimating the position and attitude of the monocular camera according to the environmental information;
S2. receiving first positioning data from an inertial measurement unit, and optimizing the first positioning data according to the position and attitude of the monocular camera;
S3. receiving second positioning data from an inertial navigation system, assisting a global navigation satellite system to perform multi-frequency, multi-mode real-time differential positioning according to the second positioning data, and obtaining third positioning data;
S4. establishing a measurement equation and a state equation according to the first positioning data and the third positioning data;
S5. estimating a state quantity error with an extended Kalman filter algorithm according to the first positioning data, the measurement equation and the state equation, and compensating the first positioning data based on the state quantity error to obtain final positioning data.
2. The fusion positioning method according to claim 1, characterized in that step S1 specifically comprises the steps of:
S11. receiving the environmental information from the monocular camera and preprocessing the environmental information;
S12. extracting feature points with Harris corner features and SIFT scale-invariant features from a sequence of consecutive images in the environmental information, and tracking the feature points in adjacent images with the LK sparse optical flow algorithm;
S13. estimating landmark points and the relative motion of the monocular camera according to the feature points in the adjacent images, and estimating the position and attitude of the monocular camera by combining the landmark points and the relative motion of the monocular camera.
3. The fusion positioning method according to claim 2, characterized in that between step S12 and step S13 the method further comprises the step:
S121. checking the correctness of the feature points by an affine test and judging whether the number of feature points is sufficient, removing the feature points that fail the correctness check, and adding new feature points when the number of feature points is insufficient.
4. The fusion positioning method according to claim 1, characterized in that the first positioning data comprises first position information, first velocity information and first attitude information, and that step S2 specifically comprises the steps of:
S21. receiving the three-axis angular velocity and acceleration of the inertial measurement unit and the gravity vector at rest, integrating the angular velocity, the acceleration and the gravity vector over a preset time to obtain the first position information and the first velocity information, and obtaining the first attitude information from the quaternion update;
S22. establishing a position error equation and an attitude error equation according to the first position information, the first velocity information, the first attitude information, and the position and attitude estimated by the monocular camera within the preset time;
S23. solving the position error equation and the attitude error equation by optimization, and compensating the first position information and the first attitude information according to the calculation results.
5. The fusion positioning method according to claim 1, characterized in that the real-time differential positioning solution of the global navigation satellite system further comprises the step of obtaining a first pseudorange double-difference model and a first phase double-difference model between the receiver and the satellites according to the collected pseudorange, carrier and Doppler observations and performing integer ambiguity resolution, and that step S3 specifically comprises the steps of:
S31. receiving the second positioning data of the inertial navigation system, and judging whether the pseudorange, carrier and Doppler observations collected by the global navigation satellite system for the integer ambiguity resolution have gaps caused by signal interruption; if a gap exists, linearizing the first pseudorange double-difference model and the first phase double-difference model at the interruption according to the second positioning data, estimating the optimal float ambiguity solution by least squares combined with the historical ambiguities and covariances, and computing the integer ambiguity solution with the LAMBDA method;
S32. continuing the multi-frequency, multi-mode real-time differential positioning solution according to the integer ambiguity solution, and finally obtaining the third positioning data.
6. The fusion positioning method according to claim 5, characterized in that between steps S31 and S32 the method further comprises the steps:
S311. checking the correctness of the integer ambiguity solution: performing a statistical analysis of the hypothesis test statistic on the historical integer ambiguity solutions saved while the global navigation satellite system signal is normal, and calculating an ambiguity threshold at a preset confidence level;
S312. judging whether the integer ambiguity solution falls within the range of the ambiguity threshold; if so, the check passes; otherwise the check fails and the integer ambiguity solution is calculated by pseudorange measurement.
7. The fusion positioning method according to claim 5, characterized in that the first positioning data comprises first position information, first velocity information and first attitude information, and that step S4 specifically comprises the steps of:
S41. constructing a second pseudorange double-difference model according to the first position information; calculating a first double-difference pseudorange, a second double-difference pseudorange, a first double-difference pseudorange rate and a second double-difference pseudorange rate from the first pseudorange double-difference model and the second pseudorange double-difference model respectively; and constructing the measurement equation with the error between the first and second double-difference pseudoranges and the error between the first and second double-difference pseudorange rates as the observations for data fusion;
S42. establishing the state equation according to the first position information, the first velocity information and the first attitude information, the state equation comprising a position equation, a velocity equation and an attitude equation.
8. The fusion positioning method according to claim 7, characterized in that the state quantity error comprises a velocity error, an attitude error and a position error, and that step S5 specifically comprises the steps of:
S51. estimating the velocity error, the attitude error and the position error with the extended Kalman filter algorithm according to the first position information, the first velocity information, the first attitude information, the measurement equation, the position equation, the velocity equation and the attitude equation;
S52. compensating and optimizing the first position information, the first velocity information and the first attitude information according to the position error, the velocity error and the attitude error respectively, to obtain the final positioning data.
9. The fusion positioning method according to claim 1, characterized in that after step S5 the method further comprises the step:
S6. updating the final positioning data to the first positioning data of the inertial measurement unit.
10. A fusion positioning terminal, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the fusion positioning method according to any one of claims 1 to 9.
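As an illustration of the monocular front end recited in claims 2 and 3, the sketch below shows one plausible way to track Harris corners across frames with pyramidal LK optical flow and to top up the feature set when too few tracks survive. It is only a sketch under simplifying assumptions: the SIFT branch and the affine consistency check of claim 3 are omitted, OpenCV is assumed to be available, and MIN_TRACKS and the detector parameters are hypothetical values, not values taken from this patent.

```python
# A minimal sketch of a Harris-corner + pyramidal LK tracking loop, assuming
# OpenCV is available. MIN_TRACKS and the detector parameters are hypothetical
# values chosen for illustration only.
import cv2
import numpy as np

MIN_TRACKS = 80  # hypothetical "enough feature points" threshold

def track_features(prev_gray, curr_gray, prev_pts):
    """Track existing corners with LK optical flow; re-detect Harris corners when too few remain."""
    if prev_pts is not None and len(prev_pts) > 0:
        curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
        curr_pts = curr_pts[status.ravel() == 1]   # keep only successfully tracked points
    else:
        curr_pts = np.empty((0, 1, 2), dtype=np.float32)

    if len(curr_pts) < MIN_TRACKS:                 # top up the feature set with new corners
        new_pts = cv2.goodFeaturesToTrack(
            curr_gray, maxCorners=200, qualityLevel=0.01, minDistance=10,
            useHarrisDetector=True, k=0.04)
        if new_pts is not None:
            curr_pts = np.vstack([curr_pts, new_pts]) if len(curr_pts) else new_pts
    return curr_pts
```

In a full pipeline the surviving tracks would then feed the landmark and relative-motion estimation of step S13.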
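The next sketch illustrates the kind of strapdown mechanization described in step S21 of claim 4: the attitude quaternion is updated from the three-axis angular rate, the measured specific force is rotated into the navigation frame and corrected for gravity, and velocity and position are integrated over the sampling interval. It is a minimal first-order integration under stated assumptions (ENU frame, bias-free samples); bias handling and the vision-aided error equations of steps S22-S23 are not shown.

```python
# A minimal strapdown propagation sketch, assuming an ENU navigation frame with
# gravity along -z and ideal (bias-free) gyro/accelerometer samples. All
# function names and constants are illustrative.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, ENU frame (assumption)

def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2])

def quat_to_rot(q):
    """Rotation matrix (body -> navigation) of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]])

def propagate(q, v, p, gyro, accel, dt):
    """One IMU step: quaternion attitude update, then velocity and position integration."""
    dq = np.concatenate([[1.0], 0.5 * gyro * dt])   # first-order quaternion increment
    q = quat_mul(q, dq)
    q /= np.linalg.norm(q)                          # keep the quaternion unit-length
    a_nav = quat_to_rot(q) @ accel + GRAVITY        # rotate specific force, remove gravity
    p = p + v * dt + 0.5 * a_nav * dt * dt
    v = v + a_nav * dt
    return q, v, p

# Toy usage: one 5 ms sample with a small yaw rate and level specific force.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1, v1, p1 = propagate(q0, np.zeros(3), np.zeros(3),
                       gyro=np.array([0.0, 0.0, 0.01]),
                       accel=np.array([0.0, 0.0, 9.81]), dt=0.005)
```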
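Claims 5 to 7 rely on between-receiver, between-satellite double differences and on checking the integer ambiguity against statistics of previously accepted solutions. The sketch below shows only those two small pieces; the LAMBDA search, the INS-aided linearization at signal interruptions and the least-squares float solution are not reproduced. The k-sigma acceptance rule stands in for the preset confidence level of claim 6 and is an assumption, as are the function names, satellite IDs and observation values.

```python
# A small sketch of forming double differences and screening an integer
# ambiguity against past solutions. The dictionaries, satellite IDs and the
# 3-sigma acceptance rule are illustrative assumptions; the LAMBDA search and
# the INS-aided re-linearization are not reproduced here.
import numpy as np

def double_difference(obs_rover, obs_base, ref_sat, sat):
    """Between-receiver, between-satellite double difference of one observable."""
    sd_sat = obs_rover[sat] - obs_base[sat]          # single difference, chosen satellite
    sd_ref = obs_rover[ref_sat] - obs_base[ref_sat]  # single difference, reference satellite
    return sd_sat - sd_ref

def validate_ambiguity(candidate, history, k=3.0):
    """Accept the integer ambiguity only if it lies within k-sigma of historical solutions."""
    mu, sigma = np.mean(history), np.std(history)
    return abs(candidate - mu) <= k * max(sigma, 1e-9)

# Toy usage with pseudoranges in metres (made-up numbers).
rover = {"G01": 21000123.4, "G07": 22345678.9}
base = {"G01": 21000120.1, "G07": 22345675.2}
dd_pr = double_difference(rover, base, ref_sat="G01", sat="G07")
accepted = validate_ambiguity(5, history=[5, 5, 6, 5, 5])
```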
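The core fusion of claims 1, 7 and 8 is an error-state extended Kalman filter whose estimated position, velocity and attitude errors are fed back to correct the inertial solution. The sketch below shows a generic predict/update cycle of such a filter with a nine-dimensional error state; the transition, noise and measurement matrices are placeholders for illustration, not the patent's tight-coupling model built from double-difference pseudorange and pseudorange-rate residuals.

```python
# A generic error-state Kalman predict/update cycle with a 9-dimensional error
# vector [position error, velocity error, attitude error]. F, Q, H, R and the
# residual z below are placeholder values; they are not the patent's model.
import numpy as np

def ekf_error_update(x, P, F, Q, z, H, R):
    """One predict/update cycle of the (linearized) error-state filter."""
    x_pred = F @ x                       # propagate the error state
    P_pred = F @ P @ F.T + Q             # propagate its covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: observe the position error through an INS-minus-GNSS residual.
n = 9
x = np.zeros(n)
P = np.eye(n) * 0.1
F = np.eye(n)
F[0:3, 3:6] = np.eye(3) * 0.01                   # position error driven by velocity error (dt = 0.01 s)
Q = np.eye(n) * 1e-4
H = np.hstack([np.eye(3), np.zeros((3, 6))])     # measurement sees the position error only
R = np.eye(3) * 0.25
z = np.array([0.8, -0.3, 0.1])                   # INS position minus GNSS position (m)

x, P = ekf_error_update(x, P, F, Q, z, H, R)
ins_position = np.array([100.0, 200.0, 10.0])
corrected_position = ins_position - x[0:3]       # feed the estimated error back into the INS solution
```

In the patented method the measurement would instead be built from the errors between the two sets of double-difference pseudoranges and pseudorange rates (claim 7), and the corrected solution would also be written back to the inertial measurement unit as in claim 9.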
CN202210039694.4A | 2022-01-12 | 2022-01-12 | Fusion positioning method and terminal | Active | CN114396943B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210039694.4A CN114396943B (en) | 2022-01-12 | 2022-01-12 | Fusion positioning method and terminal

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210039694.4A CN114396943B (en) | 2022-01-12 | 2022-01-12 | Fusion positioning method and terminal

Publications (2)

Publication Number | Publication Date
CN114396943A | 2022-04-26
CN114396943B (en) | 2024-07-23

Family

ID=81230523

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210039694.4A | Active | CN114396943B (en)

Country Status (1)

Country | Link
CN (1) | CN114396943B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114739404A (en) * | 2022-04-29 | 2022-07-12 | 深圳亿嘉和科技研发有限公司 | High-precision positioning method and device and positioning system of hot-line work robot
CN116182873A (en) * | 2023-05-04 | 2023-05-30 | 长沙驰芯半导体科技有限公司 | Indoor positioning method, system and computer readable medium
CN116255974A (en) * | 2023-01-10 | 2023-06-13 | 山东浪潮科学研究院有限公司 | A positioning device for mobile equipment in the park based on TinyML
CN116482736A (en) * | 2023-04-27 | 2023-07-25 | 湖南大学 | Integrated navigation and positioning method, device and system based on asynchronous differential positioning technology
CN116681759A (en) * | 2023-04-19 | 2023-09-01 | 中国科学院上海微系统与信息技术研究所 | A camera pose estimation method based on self-supervised visual-inertial odometry
CN117310756A (en) * | 2023-11-30 | 2023-12-29 | 宁波路特斯机器人有限公司 | Multi-sensor fusion positioning method and system and machine-readable storage medium
CN117553774A (en) * | 2023-11-10 | 2024-02-13 | 上海代数律动技术有限公司 | Multi-sensor fusion carrier positioning and attitude determination method and device
CN120233385A (en) * | 2025-05-26 | 2025-07-01 | 国网甘肃省电力公司天水供电公司 | Functional module positioning method and system in power system based on single Beidou system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20040150557A1 (en) * | 2003-01-21 | 2004-08-05 | Ford Thomas John | Inertial GPS navigation system with modified kalman filter
US20150219767A1 (en) * | 2014-02-03 | 2015-08-06 | Board Of Regents, The University Of Texas System | System and method for using global navigation satellite system (gnss) navigation and visual navigation to recover absolute position and attitude without any prior association of visual features with known coordinates
CN106597480A (en) * | 2016-12-08 | 2017-04-26 | 深圳大学 | Anti-interference positioning method and system for satellite navigation RTK transmitting station
CN107478221A (en) * | 2017-08-11 | 2017-12-15 | 黄润芳 | A kind of high-precision locating method for mobile terminal
US20180188032A1 (en) * | 2017-01-04 | 2018-07-05 | Qualcomm Incorporated | Systems and methods for using a global positioning system velocity in visual-inertial odometry
CN109212573A (en) * | 2018-10-15 | 2019-01-15 | 东南大学 | For surveying and drawing the positioning system and method for vehicle under a kind of urban canyon environment
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | A Navigation Method Based on Iterative Extended Kalman Filter Fusion Inertial and Monocular Vision
CN109900265A (en) * | 2019-03-15 | 2019-06-18 | 武汉大学 | A kind of robot localization algorithm of camera/mems auxiliary Beidou
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision
CN110412635A (en) * | 2019-07-22 | 2019-11-05 | 武汉大学 | A GNSS/SINS/Vision Tight Combination Method Supported by Environmental Beacons
CN111077556A (en) * | 2020-01-02 | 2020-04-28 | 东南大学 | Airport luggage tractor positioning device and method integrating Beidou and multiple sensors
CN111413720A (en) * | 2020-03-21 | 2020-07-14 | 哈尔滨工程大学 | Multi-frequency Beidou carrier phase difference/INS combined positioning method
CN112987065A (en) * | 2021-02-04 | 2021-06-18 | 东南大学 | Handheld SLAM device integrating multiple sensors and control method thereof

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20040150557A1 (en) * | 2003-01-21 | 2004-08-05 | Ford Thomas John | Inertial GPS navigation system with modified kalman filter
US20150219767A1 (en) * | 2014-02-03 | 2015-08-06 | Board Of Regents, The University Of Texas System | System and method for using global navigation satellite system (gnss) navigation and visual navigation to recover absolute position and attitude without any prior association of visual features with known coordinates
CN106597480A (en) * | 2016-12-08 | 2017-04-26 | 深圳大学 | Anti-interference positioning method and system for satellite navigation RTK transmitting station
US20180188032A1 (en) * | 2017-01-04 | 2018-07-05 | Qualcomm Incorporated | Systems and methods for using a global positioning system velocity in visual-inertial odometry
CN107478221A (en) * | 2017-08-11 | 2017-12-15 | 黄润芳 | A kind of high-precision locating method for mobile terminal
CN109099912A (en) * | 2017-08-11 | 2018-12-28 | 黄润芳 | Outdoor accurate positioning air navigation aid, device, electronic equipment and storage medium
CN109212573A (en) * | 2018-10-15 | 2019-01-15 | 东南大学 | For surveying and drawing the positioning system and method for vehicle under a kind of urban canyon environment
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | A Navigation Method Based on Iterative Extended Kalman Filter Fusion Inertial and Monocular Vision
CN109900265A (en) * | 2019-03-15 | 2019-06-18 | 武汉大学 | A kind of robot localization algorithm of camera/mems auxiliary Beidou
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision
CN110412635A (en) * | 2019-07-22 | 2019-11-05 | 武汉大学 | A GNSS/SINS/Vision Tight Combination Method Supported by Environmental Beacons
CN111077556A (en) * | 2020-01-02 | 2020-04-28 | 东南大学 | Airport luggage tractor positioning device and method integrating Beidou and multiple sensors
CN111413720A (en) * | 2020-03-21 | 2020-07-14 | 哈尔滨工程大学 | Multi-frequency Beidou carrier phase difference/INS combined positioning method
CN112987065A (en) * | 2021-02-04 | 2021-06-18 | 东南大学 | Handheld SLAM device integrating multiple sensors and control method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
戴邵武; 刘刚; 王朕; 张文广; 李瑞涛: "Research on a low-cost shipborne GNSS direction-finding system" (船用低成本GNSS测向系统研究), Navigation Positioning and Timing (导航定位与授时), vol. 4, no. 02, 31 March 2017 (2017-03-31) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114739404A (en) * | 2022-04-29 | 2022-07-12 | 深圳亿嘉和科技研发有限公司 | High-precision positioning method and device and positioning system of hot-line work robot
CN116255974A (en) * | 2023-01-10 | 2023-06-13 | 山东浪潮科学研究院有限公司 | A positioning device for mobile equipment in the park based on TinyML
CN116681759A (en) * | 2023-04-19 | 2023-09-01 | 中国科学院上海微系统与信息技术研究所 | A camera pose estimation method based on self-supervised visual-inertial odometry
CN116681759B (en) * | 2023-04-19 | 2024-02-23 | 中国科学院上海微系统与信息技术研究所 | Camera pose estimation method based on self-supervision visual inertial odometer
CN116482736A (en) * | 2023-04-27 | 2023-07-25 | 湖南大学 | Integrated navigation and positioning method, device and system based on asynchronous differential positioning technology
CN116182873A (en) * | 2023-05-04 | 2023-05-30 | 长沙驰芯半导体科技有限公司 | Indoor positioning method, system and computer readable medium
CN117553774A (en) * | 2023-11-10 | 2024-02-13 | 上海代数律动技术有限公司 | Multi-sensor fusion carrier positioning and attitude determination method and device
CN117310756A (en) * | 2023-11-30 | 2023-12-29 | 宁波路特斯机器人有限公司 | Multi-sensor fusion positioning method and system and machine-readable storage medium
CN117310756B (en) * | 2023-11-30 | 2024-03-29 | 宁波路特斯机器人有限公司 | Multi-sensor fusion positioning method and system and machine-readable storage medium
CN120233385A (en) * | 2025-05-26 | 2025-07-01 | 国网甘肃省电力公司天水供电公司 | Functional module positioning method and system in power system based on single Beidou system

Also Published As

Publication number | Publication date
CN114396943B (en) | 2024-07-23

Similar Documents

Publication | Publication Date | Title
CN114396943B (en) | Fusion positioning method and terminal
CN112230242B (en) | Pose estimation system and method
CN113203418B (en) | GNSS/INS visual fusion positioning method and system based on sequential Kalman filtering
US20200124421A1 (en) | Method and apparatus for estimating position
US20230194306A1 (en) | Multi-sensor fusion-based slam method and system
CN114002725B (en) | Lane line auxiliary positioning method, device, electronic device and storage medium
CN109991636A (en) | Map constructing method and system based on GPS, IMU and binocular vision
CN114136315B (en) | Monocular vision-based auxiliary inertial integrated navigation method and system
US20230304802A1 (en) | Passive combined indoor positioning system and method based on intelligent terminal sensor
CN109916394A (en) | Combined navigation algorithm fusing optical flow position and speed information
CN110057356B (en) | Method and device for locating vehicle in tunnel
CN115435779B (en) | A method for intelligent body pose estimation based on GNSS/IMU/optical flow information fusion
CN113503872A (en) | Low-speed unmanned vehicle positioning method based on integration of camera and consumption-level IMU
CN114519671A (en) | Unmanned aerial vehicle remote sensing image dynamic rapid splicing method
US11815356B2 (en) | Range image aided INS
CN115388884B (en) | A joint initialization method for intelligent body pose estimator
CN117029809A (en) | Underwater SLAM system and method integrating visual inertial pressure sensor
Song et al. | R²-gvio: A robust, real-time gnss-visual-inertial state estimator in urban challenging environments
CN115790613B (en) | Visual information-assisted inertial/odometer combined navigation method and device
Ćwian et al. | GNSS-augmented lidar slam for accurate vehicle localization in large scale urban environments
CN112556681B (en) | Vision-based navigation and positioning method for orchard machine
CN114895340B (en) | Positioning method and device of dual-antenna GNSS/INS integrated navigation system
CN118960787A (en) | Fault location detection method, device, equipment, storage medium and program product
CN117687059A (en) | Vehicle positioning method and device, electronic equipment and storage medium
CN117760427A (en) | Inertial navigation-map fusion positioning method based on environment landmark detection

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
