TECHNICAL FIELD
The invention relates to a navigation method and belongs to the field of integrated navigation.
BACKGROUND
An inertial navigation system (INS) is fully autonomous, highly covert, strongly resistant to interference, and provides continuous information, and is therefore one of the core sensors with which a vehicle achieves autonomous navigation and control. However, INS errors drift and grow over time, so a single navigation method clearly cannot meet the accuracy, reliability, and real-time requirements of a navigation system. With continuing advances in visual sensors and image-processing technology, inertial/visual integration has become an important development trend. Moreover, vision-based navigation is more sensitive to navigation parameters such as relative pose and can be used in enclosed or complex environments, further improving autonomous positioning and navigation capability. When the vehicle operates for a long time and a scene map is available, scene matching can be used for navigation: it achieves high positioning accuracy inside its matching regions, but because those regions are discontinuous it is difficult to obtain continuous positioning information over long periods, so scene matching is usually combined with an inertial system as an aid.
Under modern high-technology warfare conditions, fighter aircraft face increasingly severe challenges and ever higher navigation requirements: the system must work around the clock in harsh environments, support a variety of mission requirements, and continuously provide navigation and positioning information of the highest accuracy under any circumstances. In an integrated visual/inertial mode, each navigation device has its own working principle, positioning accuracy, advantages, and disadvantages, and its navigation output observes only part of the platform state. If all navigation information is fused indiscriminately, the result is bound to be suboptimal, and may even be worse than using no integration at all.
SUMMARY OF THE INVENTION
To overcome the deficiencies of the prior art, the present invention provides an inertial/visual intelligent integrated navigation method that performs intelligent management and decision-making over the visual and inertial sensors, so as to improve the navigation and positioning accuracy and reliability of the system and to provide navigation parameters for all kinds of unmanned systems.
The technical solution adopted by the present invention to solve its technical problem comprises the following steps:
Step 1: use the gyroscope and accelerometer to generate the inertial measurement unit (IMU) simulation data. From the attitude angular rate output by the vehicle in flight, obtain ω_nb^b, the angular velocity of the body frame relative to the geographic frame, resolved in the body frame, where C_n^b is the attitude transformation matrix from the geographic frame to the body frame;
The ideal gyroscope output is ω_ib^b = ω_nb^b + ω_in^b, where ω_in^b = C_n^b ω_in^n is the projection onto the body frame of the angular velocity of the geographic frame relative to the inertial frame;
The ideal accelerometer output is f^b = C_n^b f^n, where f^n is the specific force in the geographic frame;
Random errors, comprising white noise, a first-order Markov error, and a random constant error, are added to the ideal gyroscope output to form the simulated gyroscope output data;
A random error, comprising a first-order Markov error, is added to the ideal accelerometer output to form the simulated accelerometer output data;
Step 2: acquire images, extract features from the images and perform feature-point tracking, and compute the visual navigation parameters in the relative coordinate frame by bundle adjustment. The IMU data between two camera frames are integrated together by pre-integration, and the visual computation results are registered with the IMU pre-integration results so that their timestamps are aligned;
Step 3: input the inertial and visual navigation information together with the performance evaluation results into the intelligent navigation expert system. The inference engine starts executing the rules in the knowledge base, reads the current state of each sensing source, generates intermediate facts, and saves them to the comprehensive database. The inference engine then matches the rules in the knowledge base against the real-time state of each sensing source, selects the currently best navigation fusion mode, activates and executes the corresponding rules, and outputs the basis of the current decision to the user. The inference engine searches the knowledge base according to the input navigation-source sensor information; whenever a rule's conditions are satisfied the corresponding action is activated, until the inference is complete and the optimal navigation decision information is obtained;
Step 4: combine the scene-matching results with the IMU inertial data by unscented Kalman filtering.
The beneficial effect of the present invention is that an intelligent decision-making stage is added before the navigation information is fused: a selection and switching strategy for the inertial/visual integrated navigation working modes allows the integrated navigation system to intelligently judge the state of each individual sensor and of the integrated navigation, and to make autonomous decisions according to those sensor states, guaranteeing overall navigation and positioning performance so that the integrated navigation achieves the highest accuracy.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a diagram of the basic structure of the expert system;
Fig. 2 is a diagram of the reasoning process of the navigation expert system;
Fig. 3 is a schematic flowchart of the steps of the present invention;
Fig. 4 is a flowchart of the intelligent decision-making.
DETAILED DESCRIPTION
The present invention is further described below with reference to the accompanying drawings and embodiments; the present invention includes, but is not limited to, the following embodiments.
Combining the characteristics of each navigation-source sensor, the invention proposes an intelligent integrated navigation processing method based on an expert system, exploring the use of a rule-based expert system to provide navigation-source decision support within the integrated navigation system, with the aim of combining navigation sources for high positioning and attitude-determination accuracy.
As shown in Fig. 1, the navigation expert system consists mainly of a comprehensive database, an inference engine, and a knowledge base. When the expert system receives input such as navigation information and anomaly detection results, the inference engine first starts executing the rules in the knowledge base, reads the current state of each source, generates intermediate facts, and saves them to the comprehensive database. The inference engine then matches the rules in the knowledge base against the real-time state of each source, selects the currently best navigation fusion mode, activates and executes the corresponding rules, and selects the navigation sources; the explanation module can explain and output to the user the basis for the currently selected fusion mode. Finally, if the navigation fusion mode is switched, a decision can be made on the performance trend of the fusion mode.
The most important components of a rule-based expert system are the knowledge base, the inference engine, and the comprehensive database.
Knowledge base: contains all the rules. The knowledge base here adopts frames as the knowledge representation and performs heuristic search based on production rules, so it contains a large body of navigation-domain expertise stored in the form of rules.
Inference engine: exercises overall control of execution. The inference engine is the control center of a rule-based expert system; it decides which rules' antecedents are satisfied and activates the corresponding consequents to perform the associated actions, so the reasoning process is essentially one of search and matching. The inference engine searches the knowledge base according to the input navigation-source data; whenever the conditions are satisfied the corresponding actions are activated, until the inference is complete and the optimal navigation decision information is obtained. The inference engine here uses forward chaining with the Rete pattern-matching algorithm: starting from the input navigation-source information, it performs heuristic search with the rules; if a rule's premises match, the rule is selected and its conclusion is added to the comprehensive database; if the problem is not yet fully solved, reasoning continues, and once it is solved the engine exits. The algorithm greatly improves the efficiency of reasoning and provides fast decision responses for integrated navigation processing. The reasoning process is shown in Fig. 2.
Comprehensive database: contains the data required for reasoning. The comprehensive database stores the intermediate states, facts, data, initial states, and goals produced during inference. It acts as working memory, holding facts supplied by the user, known facts, and facts derived by reasoning; its contents can change dynamically with the problem. During inference, the engine operates on the fact list according to rule execution, e.g. deleting, adding, or modifying facts.
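The match-select-act cycle described above can be sketched as a naive forward-chaining loop, with a plain set standing in for the comprehensive database. This is an illustrative toy, not the Rete algorithm the text names (Rete avoids re-matching every rule on every cycle by caching partial matches), and the rule and fact names are hypothetical.

```python
# Naive forward chaining: a rule fires when all its premises are present in
# working memory; its conclusion is added as a new fact. Repeat until no
# rule can fire (a fixed point is reached).

def forward_chain(rules, facts):
    """rules: list of (premises, conclusion) tuples; facts: iterable of strings."""
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for premises, conclusion in rules:
            # Antecedent satisfied and consequent not yet asserted -> activate
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                fired = True
    return facts

# Hypothetical navigation rules: sensor-state facts imply a fusion mode.
rules = [
    (("imu_ok", "vision_ok"), "mode_tight_fusion"),
    (("imu_ok", "vision_lost"), "mode_pure_inertial"),
]
result = forward_chain(rules, {"imu_ok", "vision_ok"})
```

With both sensors healthy, only the first rule fires and the combined mode is asserted; a real engine would additionally retract stale facts when sensor states change.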
Making the integrated navigation system intelligent mainly involves the selection and switching strategy for the inertial/visual working modes, so that the system can intelligently judge the state of individual sensors and of the integration, make autonomous decisions according to sensor state, and guarantee overall navigation and positioning performance. A knowledge base for the inertial and visual sensors is established, and the mode selection and switching strategy is developed through a comprehensive analysis of factors such as navigation-source health, current mission requirements, and the external interference environment.
The present invention proposes an intelligent integrated navigation method with the following specific steps:
Step 1: generate the IMU simulation data.
1.1) Generate the inertial measurement unit (IMU) simulation data from the gyroscope and accelerometer measurement models. From the attitude angular rates output by the vehicle in flight, the Euler angle theorem gives ω_nb^b, the angular velocity of the body frame relative to the geographic frame, resolved in the body frame; C_n^b is the attitude transformation matrix from the geographic frame to the body frame.
The ideal gyroscope output is ω_ib^b = ω_nb^b + ω_in^b, where ω_in^b = C_n^b ω_in^n is the projection onto the body frame of the angular velocity of the geographic frame relative to the inertial frame.
The ideal accelerometer output is f^b = C_n^b f^n, where f^n is the specific force in the geographic frame.
1.2) Random errors (white noise, a first-order Markov error, and a random constant error) are added to the ideal gyroscope output to form the simulated gyroscope output data.
1.3) A random error (a first-order Markov error) is added to the ideal accelerometer output to form the simulated accelerometer output data.
Step 2: acquire visual data. A camera collects and stores images, producing the visual navigation data.
Step 3: initialize the IMU and the visual sensor.
3.1) The visual data are processed by extracting features from the images and tracking the feature points; using structure-from-motion (SFM) techniques, the visual navigation parameters in the relative coordinate frame can be computed by bundle adjustment.
3.2) Because the IMU sampling rate is much higher than the camera frame rate, data fusion requires aligning the timestamps; pre-integration integrates the IMU data between two camera frames into a single quantity.
3.3) Using the pre-processed data, the visual computation results are registered with the IMU pre-integration results, their timestamps are aligned, and initialization is performed.
Step 4: construct the intelligent decision-making system.
The inertial/visual integrated navigation system combines inertial and visual navigation-source sensors, whose working states, real-time performance, and geometric distribution all differ. Each sensor must therefore be managed and reasoned about, and the currently best fusion mode used for integration, so as to improve the navigation and positioning accuracy and reliability of the system.
4.1) For navigation working-mode selection and switching, a comprehensive analysis of navigation-source health, current mission requirements, and the external interference environment produces judgments or decisions on navigation-source management, fusion mode, and information output mode, which serve as the performance evaluation results. The intelligent decision-making flowchart is shown in Fig. 4.
4.2) Input the inertial and visual navigation information and the performance evaluation results (anomaly detection results, etc.) into the intelligent navigation expert system.
4.3) The inference engine first starts executing the rules in the knowledge base, reads the current state of each sensing source, and generates intermediate facts, which are saved to the comprehensive database.
4.4) The inference engine then matches the rules in the knowledge base against the real-time state of each sensing source, selects the currently best navigation fusion mode, and activates and executes the corresponding rules; the explanation module can explain and output the basis for the current decision to the user. Finally, if the navigation fusion mode is switched, a decision can be made on the performance trend of the fusion mode.
4.5) The inference engine searches the knowledge base according to the input navigation-source sensor information; whenever the conditions are satisfied the corresponding actions are activated, until the inference is complete and the optimal navigation decision information is obtained. The reasoning process is essentially one of search and matching.
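As an illustration of steps 4.1)–4.5), a hypothetical decision table for fusion-mode selection might look as follows. The mode names and health flags are assumptions, not taken from the patent; a real knowledge base would encode many more conditions (mission requirements, interference environment, geometric distribution, etc.).

```python
# Hypothetical fusion-mode selection following the text's logic: when scene
# matching is valid, use the integrated mode; when it drops out, fall back
# to pure inertial (propagating the error covariance keeps output
# continuous); with no healthy source, flag a degraded state.

def select_mode(imu_ok: bool, vision_ok: bool, in_match_region: bool) -> str:
    if imu_ok and vision_ok and in_match_region:
        return "inertial/visual combined"
    if imu_ok:
        return "pure inertial"
    return "degraded"  # no healthy source: hold last output, raise a fault

mode = select_mode(imu_ok=True, vision_ok=True, in_match_region=False)
```

Outside a matching region the sketch falls back to pure inertial, mirroring the behavior described in the calibration embodiment below.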
Step 5: inertial/visual integration simulation. A loosely coupled mode is used, i.e. the scene-matching results and the IMU inertial data are combined by unscented Kalman filtering.
The embodiment of the present invention uses the IMU working principle to generate the required simulation data, then initializes the IMU and the visual sensor, performs intelligent decision-making on the integration mode, and finally carries out the inertial/visual integration simulation analysis.
The specific implementation steps are as follows:
Step 1: generate the IMU simulation data.
1.1) Generate the inertial measurement unit (IMU) simulation data from the gyroscope and accelerometer measurement models. From the attitude angular rates output by the missile carrier in flight, the Euler angle theorem gives ω_nb^b, the angular velocity of the body frame relative to the geographic frame, resolved in the body frame; C_n^b is the attitude transformation matrix from the geographic frame to the body frame.
The ideal gyroscope output is ω_ib^b = ω_nb^b + ω_in^b, where ω_in^b = C_n^b ω_in^n is the projection onto the body frame of the angular velocity of the geographic frame relative to the inertial frame.
The ideal accelerometer output is f^b = C_n^b f^n, where f^n is the specific force in the geographic frame.
1.2) Random errors (white noise, a first-order Markov error, and a random constant error) are added to the ideal gyroscope output to form the simulated gyroscope output data.
1.3) A random error (a first-order Markov error) is added to the ideal accelerometer output to form the simulated accelerometer output data.
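A sketch of the error models named in 1.2) and 1.3), under assumed noise magnitudes (the patent does not specify them): white noise, a first-order Gauss-Markov process, and a random constant bias, added to the ideal gyroscope output.

```python
import numpy as np

def first_order_markov(n, tau, sigma, dt, rng):
    """Discrete first-order Gauss-Markov sequence: x[k+1] = e^(-dt/tau) x[k] + w[k],
    scaled so the stationary standard deviation is sigma."""
    phi = np.exp(-dt / tau)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
    return x

def simulate_gyro(ideal, dt, rng):
    """Add the three error terms of step 1.2) to the ideal gyro output.
    Magnitudes (rad/s) are illustrative assumptions."""
    n = len(ideal)
    white = 1e-4 * rng.standard_normal(n)                      # white noise
    markov = first_order_markov(n, tau=300.0, sigma=5e-5, dt=dt, rng=rng)
    const = 1e-5 * rng.standard_normal()                       # random constant drift
    return ideal + white + markov + const

rng = np.random.default_rng(0)
out = simulate_gyro(np.zeros(1000), dt=0.005, rng=rng)  # 200 Hz, as in the example
```

The accelerometer channel of step 1.3) would reuse `first_order_markov` alone, without the white-noise and constant terms.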
Step 2: acquire visual data. A camera collects and stores images, producing the visual navigation data.
Step 3: initialize the IMU and the visual sensor.
3.1) The visual data are processed by extracting features from the images and tracking the feature points; using structure-from-motion (SFM) techniques, the visual navigation parameters in the relative coordinate frame can be computed by bundle adjustment.
3.2) Because the IMU sampling rate is much higher than the camera frame rate, data fusion requires aligning the timestamps; pre-integration integrates the IMU data between two camera frames into a single quantity. The relative changes in attitude, velocity, and position between two keyframes are defined so that they can be expressed purely from the IMU measurements, independently of the state being optimally estimated. During iterative optimization these pre-integrated quantities therefore need to be computed only once; when the state is updated, only the relative changes are needed to compute the state at the next keyframe. This greatly reduces the computational load and improves running speed on embedded systems.
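The bookkeeping behind pre-integration can be illustrated in one dimension, ignoring rotation and gravity: the high-rate IMU samples between two camera frames are accumulated into a single velocity/position increment attached to the camera timestamp. This is a deliberately simplified sketch, not the full attitude/velocity/position pre-integration used in practice.

```python
def preintegrate(accels, dt):
    """Accumulate IMU acceleration samples over one camera interval into a
    relative velocity increment dv and position increment dp (1-D toy)."""
    dv = 0.0
    dp = 0.0
    for a in accels:
        dp += dv * dt + 0.5 * a * dt * dt  # position increment uses velocity before update
        dv += a * dt
    return dv, dp

# 200 Hz IMU, 20 Hz camera -> 10 IMU samples per camera frame (assumed rates)
dv, dp = preintegrate([1.0] * 10, dt=0.005)
```

For a constant 1 m/s² over 0.05 s this reproduces the analytic values dv = a·t = 0.05 m/s and dp = ½·a·t² = 0.00125 m, and the increments depend only on the IMU samples, which is exactly why they can be computed once and reused during optimization.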
3.3) Using the pre-processed data, the visual computation results are registered with the IMU pre-integration results, their timestamps are aligned, and initialization is performed. During registration, the corresponding three-dimensional information, including the motion parameters of the imaging camera and the structure of the scene, can be recovered from the two-dimensional image sequence, yielding a basic state estimate.
Step 4: construct the intelligent decision-making system.
The inertial/visual integrated navigation system combines inertial and visual navigation-source sensors, whose working states, real-time performance, and geometric distribution all differ. Each sensor must therefore be managed and reasoned about, and the currently best fusion mode used for integration, so as to improve the navigation and positioning accuracy and reliability of the system.
4.1) Because the navigation and positioning characteristics of the inertial and visual sensors differ, the navigation sources must be abstracted and a description of each source established. For mode selection and switching, a comprehensive analysis of factors such as navigation-source health, current mission requirements, and the external interference environment produces judgments or decisions on navigation-source management, fusion mode, information output mode, and so on. The intelligent decision-making flowchart is shown in Fig. 4.
4.2) Input the inertial and visual navigation information and the performance evaluation results (anomaly detection results, etc.) into the intelligent navigation expert system.
4.3) The inference engine first starts executing the rules in the knowledge base, reads the current state of each sensing source, and generates intermediate facts, which are saved to the comprehensive database.
4.4) The inference engine then matches the rules in the knowledge base against the real-time state of each sensing source, selects the currently best navigation fusion mode, and activates and executes the corresponding rules; the explanation module can explain and output the basis for the current decision to the user. Finally, if the navigation fusion mode is switched, a decision can be made on the performance trend of the fusion mode.
4.5) The inference engine searches the knowledge base according to the input navigation-source sensor information; whenever the conditions are satisfied the corresponding actions are activated, until the inference is complete and the optimal navigation decision information is obtained. The reasoning process is essentially one of search and matching. The main function of the inference engine is to reason over and judge the real-time state of each navigation-source sensor according to the relevant facts and rules, and to select the best currently available fusion mode, thereby realizing the system's fusion-mode decision. The reasoning mechanism depends on knowledge: knowledge is stored in the knowledge base in a specific form, and that form should make it easy to analyze and process the working state, performance, and geometric distribution of the navigation-source sensors, enabling navigation-source management and decision reasoning.
Step 5: inertial/visual integration simulation. A loosely coupled mode is used, i.e. the scene-matching results and the IMU inertial data are combined by unscented Kalman filtering.
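A minimal sketch of the unscented Kalman update for the loosely coupled mode in Step 5, under strong simplifying assumptions: a two-state [position, velocity] model driven by IMU acceleration, with the scene-matching fix observing position only. Noise levels are illustrative, and the propagated sigma points are reused for the measurement update (a common simplification); the real filter would carry full attitude/velocity/position error states.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Symmetric sigma points and weights for state x with covariance P."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, accel, z_pos, dt, Q, R):
    # Predict: constant-acceleration motion model driven by the IMU sample
    F = lambda s: np.array([s[0] + s[1] * dt + 0.5 * accel * dt**2,
                            s[1] + accel * dt])
    pts, w = sigma_points(x, P)
    pred = np.array([F(p) for p in pts])
    x_pred = w @ pred
    P_pred = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, pred - x_pred))
    # Update with the scene-matching position measurement h(s) = s[0]
    zs = pred[:, 0]
    z_pred = w @ zs
    Pzz = R + sum(wi * (zi - z_pred) ** 2 for wi, zi in zip(w, zs))
    Pxz = sum(wi * d * (zi - z_pred) for wi, d, zi in zip(w, pred - x_pred, zs))
    K = Pxz / Pzz
    x_new = x_pred + K * (z_pos - z_pred)
    P_new = P_pred - np.outer(K, K) * Pzz
    return x_new, P_new

x = np.array([0.0, 300.0])          # 300 m/s, as in the example scenario
P = np.diag([25.0, 1.0])
Q = np.diag([0.1, 0.1])
x, P = ukf_step(x, P, accel=0.0, z_pos=299.0, dt=1.0, Q=Q, R=1.0)
```

One scene-matching fix per second (the 1 s update period of the embodiment) is enough to collapse the position variance from tens of square meters to near the measurement noise level.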
(1) Calibration embodiment
The UAV flight time is set to 2 hours, the flight speed to 300 m/s, the inertial navigation positioning accuracy to 1 n mile/h (CEP), the inertial navigation update rate to 200 Hz, and the positioning accuracy in scene-matchable regions to 1 m (CEP). Considering the heavy computational load of scene matching, the scene-matching position update period is 1 s. In regions where scene matching fails, the system works only in pure inertial mode; the system error matrix is then propagated recursively to keep the navigation information continuous. Once the scene-matching function recovers, the whole navigation system returns to the integrated working mode. With a scene-matching update period of 1 s, the integrated navigation positioning accuracy is better than 2.5 m.
In practical applications scene matching is not always available, so it must be determined how long scene matching can be unavailable while the integrated navigation positioning accuracy is still guaranteed; this provides a reference for planning the spacing of matching regions and for selecting ground scenes.
Subject to the integrated positioning requirement, the scene-matching update period is adjusted until the inertial/visual integrated positioning error exceeds 5 m; the interval at that point can be regarded as the maximum scene-matching time interval, i.e. the maximum distance between scene-matching regions.
Simulating under the above settings, when the update period is less than 10 s, i.e. a flight distance of 3 km, the integrated navigation positioning accuracy is better than 5 m (CEP); in other words, as long as a scene match is available within every 3 km of flight, the integrated positioning accuracy of the whole process is guaranteed.
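The 10 s figure can be checked with a back-of-envelope calculation: an inertial drift of 1 n mile/h corresponds to roughly 0.51 m/s of position-error growth (treated here as linear, a simplification of CEP drift), so a 5 m error budget is consumed in about 10 s, during which a 300 m/s vehicle covers about 3 km, consistent with the simulation.

```python
# Rough consistency check of the simulation result, using the scenario
# parameters from the embodiment (1 n mile = 1852 m).

DRIFT_M_PER_S = 1852.0 / 3600.0   # 1 n mile/h expressed in m/s (~0.514)
ERROR_BUDGET_M = 5.0              # integrated positioning requirement
SPEED_M_PER_S = 300.0             # UAV speed

max_gap_s = ERROR_BUDGET_M / DRIFT_M_PER_S   # longest tolerable matching outage
max_gap_m = max_gap_s * SPEED_M_PER_S        # corresponding flight distance
```

The estimate (about 9.7 s and about 2.9 km) agrees with the simulated 10 s / 3 km spacing of matching regions.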
The above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811579143.7A | 2018-12-24 | 2018-12-24 | A kind of inertia/Visual intelligent Combinated navigation method |
| Publication Number | Publication Date |
|---|---|
| CN109655058A | 2019-04-19 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811579143.7APendingCN109655058A (en) | 2018-12-24 | 2018-12-24 | A kind of inertia/Visual intelligent Combinated navigation method |
| Country | Link |
|---|---|
| CN (1) | CN109655058A (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111065043A (en)* | 2019-10-25 | 2020-04-24 | 重庆邮电大学 | System and method for fusion positioning of vehicles in tunnel based on vehicle-road communication |
| CN113031040A (en)* | 2021-03-01 | 2021-06-25 | 宁夏大学 | Positioning method and system for airport ground clothes vehicle |
| CN113587975A (en)* | 2020-04-30 | 2021-11-02 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for managing application environments |
| CN113949999A (en)* | 2021-09-09 | 2022-01-18 | 之江实验室 | Indoor positioning navigation equipment and method |
| CN116531690A (en)* | 2023-06-25 | 2023-08-04 | 中国人民解放军63863部队 | A kind of forest fire extinguishing bomb with throwing device and throwing control method |
| CN120558242A (en)* | 2025-07-31 | 2025-08-29 | 江苏智慧工场技术研究院有限公司 | Inertial navigation and vision fusion positioning method, device and equipment for humanoid robot |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160033280A1 (en)* | 2014-08-01 | 2016-02-04 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable earpiece for providing social and environmental awareness |
| US20160305784A1 (en)* | 2015-04-17 | 2016-10-20 | Regents Of The University Of Minnesota | Iterative kalman smoother for robust 3d localization for vision-aided inertial navigation |
| CN106446815A (en)* | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
| CN106679648A (en)* | 2016-12-08 | 2017-05-17 | 东南大学 | Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm |
| CN106708066A (en)* | 2015-12-20 | 2017-05-24 | 中国电子科技集团公司第二十研究所 | Autonomous landing method of unmanned aerial vehicle based on vision/inertial navigation |
| CN106840196A (en)* | 2016-12-20 | 2017-06-13 | 南京航空航天大学 | A kind of strap-down inertial computer testing system and implementation method |
| CN107014371A (en)* | 2017-04-14 | 2017-08-04 | 东南大学 | UAV integrated navigation method and apparatus based on the adaptive interval Kalman of extension |
| CN107796391A (en)* | 2017-10-27 | 2018-03-13 | 哈尔滨工程大学 | A kind of strapdown inertial navigation system/visual odometry Combinated navigation method |
| CN108375370A (en)* | 2018-07-02 | 2018-08-07 | 江苏中科院智能科学技术应用研究院 | A kind of complex navigation system towards intelligent patrol unmanned plane |
| CN108731670A (en)* | 2018-05-18 | 2018-11-02 | 南京航空航天大学 | Inertia/visual odometry combined navigation locating method based on measurement model optimization |
| Title |
|---|
| CHENG LIN: "Research on the Application of Expert Systems in Integrated-Navigation Aided Decision-Making", Science & Technology Vision (《科技视界》)* |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111065043A (en)* | 2019-10-25 | 2020-04-24 | 重庆邮电大学 | System and method for fusion positioning of vehicles in tunnel based on vehicle-road communication |
| CN113587975A (en)* | 2020-04-30 | 2021-11-02 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for managing application environments |
| CN113031040A (en)* | 2021-03-01 | 2021-06-25 | 宁夏大学 | Positioning method and system for airport ground clothes vehicle |
| CN113949999A (en)* | 2021-09-09 | 2022-01-18 | 之江实验室 | Indoor positioning navigation equipment and method |
| CN113949999B (en)* | 2021-09-09 | 2024-01-30 | 之江实验室 | Indoor positioning navigation equipment and method |
| CN116531690A (en)* | 2023-06-25 | 2023-08-04 | 中国人民解放军63863部队 | A kind of forest fire extinguishing bomb with throwing device and throwing control method |
| CN116531690B (en)* | 2023-06-25 | 2023-10-20 | 中国人民解放军63863部队 | A forest fire extinguishing bomb with a throwing device and a throwing control method |
| CN120558242A (en)* | 2025-07-31 | 2025-08-29 | 江苏智慧工场技术研究院有限公司 | Inertial navigation and vision fusion positioning method, device and equipment for humanoid robot |
| CN120558242B (en)* | 2025-07-31 | 2025-09-26 | 江苏智慧工场技术研究院有限公司 | Inertial navigation and vision fusion positioning method, device and equipment for humanoid robot |
| Publication | Title |
|---|---|
| CN109655058A (en) | A kind of inertia/Visual intelligent Combinated navigation method |
| Rao et al. | CTIN: Robust contextual transformer network for inertial navigation | |
| CN107888828A (en) | Space-location method and device, electronic equipment and storage medium | |
| CN110717927A (en) | Motion estimation method for indoor robot based on deep learning and visual-inertial fusion | |
| CN108627153A (en) | A kind of rigid motion tracing system and its working method based on inertial sensor | |
| CN115639823B (en) | Method and system for controlling sensing and movement of robot under rugged undulating terrain | |
| CN109461208A (en) | Three-dimensional map processing method, device, medium and calculating equipment | |
| CN105953796A (en) | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone | |
| CN113221726A (en) | Hand posture estimation method and system based on visual and inertial information fusion | |
| CN107063242A (en) | Have the positioning navigation device and robot of virtual wall function | |
| CN107941167B (en) | Space scanning system based on unmanned aerial vehicle carrier and structured light scanning technology and working method thereof | |
| CN118999591B (en) | Aviation track navigation method and device based on geomagnetic signals and electronic equipment | |
| Guan et al. | A novel feature points tracking algorithm in terms of IMU-aided information fusion | |
| CN117782063A (en) | Multi-sensor fusion positioning method for variable sliding window based on graph optimization | |
| CN114061611A (en) | Target object positioning method, apparatus, storage medium and computer program product | |
| CN117421384A (en) | Visual inertia SLAM system sliding window optimization method based on common view projection matching | |
| CN113449265A (en) | Waist-borne course angle calculation method based on stacked LSTM | |
| CN115690343A (en) | Robot laser radar scanning and mapping method based on visual following | |
| Li et al. | Inspection robot GPS outages localization based on error Kalman filter and deep learning | |
| CN119322530A (en) | Unmanned aerial vehicle control method and system with enhanced precision maintenance and generalization capability | |
| Zeinali et al. | Imunet: Efficient regression architecture for imu navigation and positioning | |
| CN114581616A (en) | Visual inertia SLAM system based on multitask feature extraction network | |
| CN117073697B (en) | Autonomous hierarchical exploration map building method, device and system for ground mobile robot | |
| Zhang et al. | Recent advances in robot visual SLAM | |
| CN114967943B (en) | Method and device for determining 6DOF posture based on 3D gesture recognition |
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2019-04-19 |