CN114485649B - Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method - Google Patents

Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method

Info

Publication number
CN114485649B
Authority
CN
China
Prior art keywords
inertial
visual
imu
unmanned aerial
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210122642.3A
Other languages
Chinese (zh)
Other versions
CN114485649A (en)
Inventor
尚克军
李至
扈光锋
庄广琛
裴新凯
李茜茜
段昊雨
明丽
焦浩
刘崇亮
王海军
王大元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Automation Control Equipment Institute BACEI
Original Assignee
Beijing Automation Control Equipment Institute BACEI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Automation Control Equipment Institute BACEI
Priority to CN202210122642.3A
Publication of CN114485649A
Application granted
Publication of CN114485649B
Status: Active
Anticipated expiration


Abstract

Translated from Chinese

The invention provides an inertial, visual and altitude information fusion navigation method for unmanned aerial vehicles (UAVs). The method comprises: obtaining a scale estimate from the UAV's real-time altitude information; updating the visual measurement information with the scale estimate; collecting visual images with a visual sensor and performing feature extraction and matching on them to obtain the initial velocity of the inertial sensor; constructing inertial residuals; obtaining the IMU inertial parameters with a two-pass inertial sensor initialization method based on the inertial residuals; and performing joint inertial-visual optimization based on the IMU inertial parameters and the visual images to complete pose estimation and position the UAV, thereby completing inertial, visual and altitude information fusion navigation. The technical solution of the invention addresses problems of the prior art such as low pose-estimation accuracy, tracking loss caused by inertial sensor divergence during flight, and high computational complexity.

Description

Inertial, Visual and Altitude Information Fusion Navigation Method for UAVs

Technical Field

The invention relates to the field of computer vision technology, and in particular to an inertial, visual and altitude information fusion navigation method for unmanned aerial vehicles.

Background Art

Computer vision (CV) simulates biological vision using computers and related equipment. Its main task is to recover three-dimensional information about a scene by processing captured images or video, in order to identify, track or measure targets.

UAV navigation guides an aircraft through its mission at a given speed and heading; one of its most critical technologies is real-time tracking and positioning of the UAV. During flight, high-precision six-degree-of-freedom pose information is obtained in real time and converted into the UAV's spatial longitude and latitude coordinates, which are fed back to the flight control system, enabling autonomous navigation and autonomous landing.

In recent years, researchers have typically used simultaneous localization and mapping (SLAM) to obtain the platform's pose in real time while building a map of the spatial geometric structure of the real scene. However, real-time positioning with SLAM still suffers from the following problems: monocular/stereo methods track poorly under fast motion and in weakly textured environments, and may even lose tracking; inertial-sensor (IMU) based methods are strongly affected by noise, accumulate large errors, and are prone to navigation drift; differential-GPS methods can usually obtain accurate spatial positions, but in most UAV applications or combat scenarios the platform may operate under satellite-denied conditions where GPS information cannot be obtained in real time.

In general, navigating and positioning a UAV with a single sensor cannot meet practical application requirements. Inertial/visual fusion navigation is the most widely used approach, but existing SLAM methods deliver low pose-trajectory accuracy in UAV scenarios, lose tracking easily, and struggle to achieve real-time positioning and navigation at low computational cost.

Summary of the Invention

The present invention aims to solve at least one of the technical problems existing in the prior art.

According to one aspect of the present invention, an inertial, visual and altitude information fusion navigation method for UAVs is provided. The method comprises: obtaining a scale estimate from the UAV's real-time altitude information and updating the visual measurement information with the scale estimate; collecting visual images with a visual sensor and performing feature extraction and matching to obtain the initial velocity of the inertial sensor; constructing inertial residuals; obtaining the IMU inertial parameters with a two-pass inertial sensor initialization method based on the inertial residuals; and performing joint inertial-visual optimization based on the IMU inertial parameters and visual images to complete pose estimation and position the UAV, thereby completing inertial, visual and altitude information fusion navigation.

Further, obtaining the scale estimate from the UAV's real-time altitude information specifically comprises: during visual-sensor tracking, performing feature matching by extracting ORB features from the images and obtaining the three-dimensional map-point depth of the i-th keyframe from two-frame matching; the navigation and positioning processing system obtaining the altitude of the i-th keyframe from the UAV's real-time altitude information; and obtaining the scale estimate as s = h/d, where s is the scale estimate, h is the altitude of the i-th keyframe, and d is the three-dimensional map-point depth of the i-th keyframe, i = 1, 2, …, n, with n an integer.
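The per-keyframe relation s = h/d can be sketched as follows. Aggregating the per-keyframe ratios with a median is an assumption added here for robustness against outlier keyframes; the patent itself only specifies the per-keyframe ratio:

```python
import statistics

def estimate_scale(heights, depths):
    """Scale estimate s = h / d from per-keyframe UAV altitudes h_i and
    the triangulated 3D map-point depths d_i of the same keyframes."""
    ratios = [h / d for h, d in zip(heights, depths) if d > 0]
    if not ratios:
        raise ValueError("no keyframe with positive depth")
    # median aggregation over keyframes (illustrative assumption)
    return statistics.median(ratios)

# toy data with a consistent metric scale of 2.0
s = estimate_scale([10.0, 12.2, 9.9], [5.0, 6.1, 4.95])
```

The estimated scale is then used to rescale the visually derived positions, velocities and map points, as described below.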

Further, performing feature extraction and matching on the visual images to obtain the initial velocity of the inertial sensor specifically comprises: performing feature extraction and matching on the visual images to obtain the camera pose of the i-th keyframe; transforming the camera pose of the i-th keyframe into the IMU body coordinate system; and obtaining the initial velocity of the inertial sensor in the body coordinate system as

v_i = (p_i − p_{i−1}) / δt,

where v_i is the initial velocity of the inertial sensor in the body coordinate system at time i, p_i and p_{i−1} are the translations at times i and i−1 respectively, and δt is the motion time between them.
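The finite-difference velocity initialisation v_i = (p_i − p_{i−1})/δt can be sketched minimally as follows, assuming the keyframe translations are already expressed in the IMU body frame:

```python
def initial_velocity(p_prev, p_curr, dt):
    """Finite-difference initial velocity v_i = (p_i - p_{i-1}) / dt from
    two consecutive keyframe translations expressed in the IMU body frame."""
    if dt <= 0:
        raise ValueError("dt must be positive")
    return [(a - b) / dt for a, b in zip(p_curr, p_prev)]

# 0.5 s between keyframes, UAV moved by (1, 2, 3) metres
v_i = initial_velocity([0.0, 0.0, 0.0], [1.0, 2.0, 3.0], 0.5)
```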

Further, constructing the inertial residuals specifically comprises: computing the IMU motion measurement state from the IMU motion process, obtaining the pre-integration increments of the IMU motion measurement state, and constructing the inertial residuals from the IMU motion measurement state.

Further, the inertial residuals are established as

r_ΔRij = Log( ΔR̃_ij^T · R_wbi^T · R_wbj )
r_Δvij = R_wbi^T · (v_j − v_i − g·Δt_ij) − Δṽ_ij
r_Δpij = R_wbi^T · (p_j − p_i − v_i·Δt_ij − ½·g·Δt_ij²) − Δp̃_ij

with the pre-integrated measurements

ΔR̃_ij = ∏_{k=i..j−1} Exp( (ω̃_k − b_i^g)·Δt )
Δṽ_ij = Σ_{k=i..j−1} ΔR_bibk·(ã_k − b_i^a)·Δt
Δp̃_ij = Σ_{k=i..j−1} [ Δv_ik·Δt + ½·ΔR_bibk·(ã_k − b_i^a)·Δt² ],

where r_ΔRij, r_Δvij and r_Δpij are the rotation, velocity and translation residuals from time i to time j, ω̃ is the angular velocity measured by the sensor, b_i^g is the gyroscope zero position at time i, Δt is the time step between successive IMU measurements, R_wbi denotes the rotation from the body coordinate system to the world coordinate system at time i and R_wbj the same rotation at time j, v_i and v_j are the initial velocities at times i and j, g is the gravitational acceleration, Δt_ij is the time from time i to time j, ΔR_bibk is the pre-integrated rotation increment of time k relative to time i, ã is the measured acceleration, b_i^a is the accelerometer zero position at time i, p_i and p_j are the translations at times i and j, and Δv_ik is the pre-integrated velocity increment of time k relative to time i.

Further, obtaining the IMU inertial parameters with the two-pass inertial sensor initialization method based on the inertial residuals specifically comprises: constructing a graph-optimization problem from the scale estimate, the updated visual measurement information and the inertial residuals, and performing the first IMU initialization to obtain the inertial sensor zero positions; updating the current pose information according to the gravity direction and updating the inertial residuals with the inertial sensor zero positions; constructing a graph-optimization problem from the inertial sensor zero positions and the updated inertial residuals, and performing the second IMU initialization to obtain the IMU inertial parameters; and updating the current pose information according to the gravity direction and scale, and updating the inertial residuals with the inertial sensor zero positions among the IMU inertial parameters.

Further, in the first IMU initialization, the scale estimate s and the velocities v_0:k in the body coordinate system between keyframe 0 and keyframe k are held fixed, and the graph optimization

y_k* = argmin_{y_k} ( ‖r_p‖² + Σ_{i=1..k} ‖r_ΔI_{i−1,i}‖² )

is solved to obtain the gravity rotation R_wg from the standard gravity frame to the world coordinate system, the gyroscope zero position b^g and the accelerometer zero position b^a. Here y_k = {s, R_wg, b^g, b^a, v_0:k} are the IMU parameters of the k-th keyframe actually obtained by the first IMU initialization, comprising the scale estimate s, the gravity rotation R_wg, the gyroscope zero position b_i^g at time i, the accelerometer zero position b_i^a at time i, and the velocities v_0:k in the body coordinate system between keyframe 0 and keyframe k; r_p is the prior residual, which tends to 0; and r_ΔI_{i−1,i} = [r_ΔR_{i−1,i}, r_Δv_{i−1,i}, r_Δp_{i−1,i}] is the inertial residual from time i−1 to time i.

Further, in the second IMU initialization, the graph optimization

ȳ_k* = argmin_{ȳ_k} ( ‖r_p‖² + Σ_{i=1..k} ‖r̄_ΔI_{i−1,i}‖² )

is solved to obtain the IMU inertial parameters, where ȳ_k are the IMU parameters of the k-th keyframe actually obtained by the second IMU initialization, r̄_ΔI_{i−1,i} is the new inertial residual from time i−1 to time i that incorporates the scale information, s̄ is the scale factor to be solved, and r̄_ΔR, r̄_Δv and r̄_Δp are the new rotation, velocity and translation residuals, obtained from the original residuals by scaling the visually derived velocities and translations with s̄.

Further, performing joint inertial-visual optimization based on the IMU inertial parameters and the visual images to complete pose estimation specifically comprises: constructing the inertial error from the IMU inertial parameters; constructing the visual reprojection error from the ORB feature-matching results of the visual images; and constructing a joint visual-inertial optimization function from the inertial error and the visual reprojection error, then optimizing the poses to obtain the globally optimal camera pose sequence and scene map points.

Further, the inertial, visual and altitude information fusion navigation method for UAVs also comprises: after UAV positioning is completed, judging the current UAV tracking state; if tracking is lost, the navigation system enters a relocalization stage and judges whether the image frame recognized by the visual sensor matches the current local map; if it matches, UAV positioning continues using the coordinates of the current local map and the UAV scale information; if it does not match, the map is rebuilt and the method returns to obtaining the scale estimate from the UAV's real-time altitude information until UAV positioning is completed.
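The relocalization logic above can be sketched as a small decision routine; the function and return-value names are illustrative, not from the patent:

```python
def next_action(tracking_ok, frame_matches_local_map):
    """Decision step after a positioning cycle:
    - tracking healthy           -> keep positioning
    - lost but frame matches map -> relocalize with the local map's
                                    coordinates and scale information
    - lost and no match          -> rebuild the map and restart from the
                                    altitude-based scale estimation"""
    if tracking_ok:
        return "continue"
    if frame_matches_local_map:
        return "relocalize_in_map"
    return "rebuild_map"
```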

By applying the technical solution of the present invention, an inertial, visual and altitude information fusion navigation method for UAVs is provided. By introducing the UAV's real-time altitude information and combining it with visual images, the method obtains the IMU inertial parameters with a two-pass inertial sensor initialization and completes UAV pose estimation and positioning. The invention can recover comparatively accurate scale information in aerial scenes, solves the IMU zero positions with high accuracy, and achieves high accuracy, real-time performance and robustness in UAV pose and trajectory estimation during tracking and positioning. Compared with the prior art, the technical solution of the present invention solves technical problems such as low pose-estimation accuracy, tracking loss caused by inertial sensor divergence during flight, and high computational complexity.

Brief Description of the Drawings

The accompanying drawings are included to provide a further understanding of the embodiments of the invention and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Figure 1 shows a schematic flowchart of the inertial, visual and altitude information fusion navigation method for UAVs provided according to a specific embodiment of the present invention;

Figure 2 shows a schematic flowchart of the two-pass inertial sensor initialization method provided according to a specific embodiment of the present invention.

Detailed Description

It should be noted that, as long as there is no conflict, the embodiments of this application and the features within them may be combined with each other. The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The following description of at least one exemplary embodiment is merely illustrative and in no way limits the invention, its application or its uses. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

It should be noted that the terms used herein serve only to describe specific embodiments and are not intended to limit the exemplary embodiments of this application. As used herein, singular forms are also intended to include plural forms unless the context clearly indicates otherwise. It should further be understood that when the terms "comprises" and/or "includes" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.

Unless otherwise specifically stated, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the invention. It should also be understood that, for convenience of description, the dimensions of the parts shown in the drawings are not drawn to actual scale. Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail but, where appropriate, should be considered part of the specification. In all examples shown and discussed herein, any specific value is to be interpreted as merely illustrative, not limiting, so other examples of the exemplary embodiments may have different values. Note that similar reference numerals and letters denote similar items in the following figures; once an item is defined in one figure, it need not be discussed further in subsequent figures.

As shown in Figure 1, a specific embodiment of the present invention provides an inertial, visual and altitude information fusion navigation method for UAVs, comprising: obtaining a scale estimate from the UAV's real-time altitude information; updating the visual measurement information with the scale estimate; collecting visual images with a visual sensor and performing feature extraction and matching to obtain the initial velocity of the inertial sensor; constructing inertial residuals; obtaining the IMU inertial parameters with a two-pass inertial sensor initialization method based on the inertial residuals; and performing joint inertial-visual optimization based on the IMU inertial parameters and visual images to complete pose estimation and position the UAV, thereby completing inertial, visual and altitude information fusion navigation.

With this configuration, an inertial, visual and altitude information fusion navigation method for UAVs is provided. By introducing the UAV's real-time altitude information and combining it with visual images, the method obtains the IMU inertial parameters with a two-pass inertial sensor initialization and completes UAV pose estimation and positioning. The invention can recover comparatively accurate scale information in aerial scenes, solves the IMU zero positions with high accuracy, and achieves high accuracy, real-time performance and robustness in UAV pose and trajectory estimation during tracking and positioning. Compared with the prior art, the technical solution of the present invention solves technical problems such as low pose-estimation accuracy, tracking loss caused by inertial sensor divergence during flight, and high computational complexity.

Further, in the present invention, in order to realize inertial, visual and altitude information fusion navigation for UAVs, the scale estimate is first obtained from the UAV's real-time altitude information.

In the present invention, obtaining the scale estimate from the UAV's real-time altitude information specifically comprises: during visual-sensor tracking, obtaining the three-dimensional map-point depth of a keyframe by triangulation, and solving for the scale estimate from the UAV's real-time altitude information and the keyframe's three-dimensional map-point depth.

As a specific embodiment of the present invention, during visual-sensor tracking, feature matching is performed by extracting ORB (Oriented FAST and Rotated BRIEF) features from the images, and the three-dimensional map-point depth of the i-th keyframe is obtained from two-frame matching. The navigation and positioning processing system obtains the altitude of the i-th keyframe from the UAV's real-time altitude information. The scale estimate is obtained from the altitude of the i-th keyframe and the three-dimensional map-point depth, i = 1, 2, …, n, with n an integer.

In this embodiment, the scale estimate is obtained as s = h/d, where s is the scale estimate, h is the altitude of the i-th keyframe, and d is the three-dimensional map-point depth of the i-th keyframe.

Further, in the present invention, after the scale estimate is obtained, the visual measurement information is updated according to it. As a specific embodiment of the present invention, updating the visual measurement information according to the scale estimate includes updating the position and velocity solved by visual positioning and the position coordinates of the three-dimensional map points.

Further, in the present invention, after the visual measurement information is updated, the visual sensor collects visual images, and feature extraction and matching are performed on them to obtain the initial velocity of the inertial sensor.

In the present invention, this specifically comprises: the visual sensor collecting visual images and performing feature extraction and matching on them to obtain the camera pose of the i-th keyframe; transforming the camera pose of the i-th keyframe into the IMU body coordinate system; and obtaining the initial velocity of the inertial sensor in the body coordinate system from the camera pose of the i-th keyframe in the IMU body coordinate system and the motion time.

As a specific embodiment of the present invention, the camera pose of the i-th keyframe is transformed into the IMU body coordinate system according to T_bi = T_bc·T_i, where T_i is the camera pose of the i-th keyframe, T_bi is the pose of the i-th keyframe in the IMU body coordinate system, comprising the rotation R_i and the translation p_i, and T_bc is the transformation matrix from the camera coordinate system to the body coordinate system.
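The transform T_bi = T_bc·T_i can be sketched with 4×4 homogeneous matrices as plain nested lists; a real system would use an SE(3) library, and the identity-extrinsics example below is purely illustrative:

```python
def mat4_mul(A, B):
    """Product of two 4x4 homogeneous transforms stored as nested lists."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def camera_to_body(T_bc, T_i):
    """T_bi = T_bc * T_i: keyframe camera pose re-expressed in the IMU body frame."""
    return mat4_mul(T_bc, T_i)

# illustrative: identity camera-to-body extrinsics leave the pose unchanged
T_bc = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]]
T_i = [[1.0, 0.0, 0.0, 0.5],
       [0.0, 1.0, 0.0, 1.0],
       [0.0, 0.0, 1.0, 2.0],
       [0.0, 0.0, 0.0, 1.0]]
T_bi = camera_to_body(T_bc, T_i)
```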

Combined with the motion time, the initial velocity of the inertial sensor in the body coordinate system can be obtained as v_i = (p_i − p_{i−1}) / δt, where v_i is the initial velocity of the inertial sensor in the body coordinate system at time i, p_i and p_{i−1} are the translations at times i and i−1 respectively, and δt is the motion time.

Further, after the initial inertial velocity is obtained, the inertial residuals are constructed.

In the present invention, constructing the inertial residuals comprises: computing the IMU motion measurement state from the IMU motion process, obtaining the pre-integration increments of the IMU motion measurement state, and constructing the inertial residuals from the IMU motion measurement state.

As a specific embodiment of the present invention, the IMU motion measurement state comprises rotation, velocity and translation. For purely inertial motion from time i to time j, the rotation, velocity and translation satisfy

R_wbj = R_wbi · ∏_{k=i..j−1} Exp( (ω̃_k − b_k^g)·Δt )
v_j = v_i + g·Δt_ij + Σ_{k=i..j−1} R_wbk·(ã_k − b_k^a)·Δt
p_j = p_i + Σ_{k=i..j−1} [ v_k·Δt + ½·g·Δt² + ½·R_wbk·(ã_k − b_k^a)·Δt² ],

where R_wbi and R_wbj are the rotations from the body coordinate system to the world coordinate system at times i and j, ω̃ is the angular velocity measured by the sensor, ã is the measured acceleration, g is the gravitational acceleration, b_k^g and b_k^a are the gyroscope and accelerometer zero positions at time k, v_i, v_j and v_k are the velocities at times i, j and k, Δt is the time step between successive measurements, Δt_ij is the time from time i to time j, R_wbk is the rotation from the body coordinate system to the world coordinate system at time k, and p_i and p_j are the translations at times i and j.

The pre-integrated quantities of the IMU motion measurement state depend only on the IMU measurements, so by accumulating the IMU integration from time i to time j the pre-integration increments of the IMU motion measurement state can be obtained. In this embodiment, they are

ΔR̃_ij = ∏_{k=i..j−1} Exp( (ω̃_k − b_i^g)·Δt )
Δṽ_ij = Σ_{k=i..j−1} ΔR_bibk·(ã_k − b_i^a)·Δt
Δp̃_ij = Σ_{k=i..j−1} [ Δv_ik·Δt + ½·ΔR_bibk·(ã_k − b_i^a)·Δt² ],

where ΔR̃_ij, Δṽ_ij and Δp̃_ij are the pre-integration increments of rotation, velocity and translation of time j relative to time i, b_i^g is the gyroscope zero position at time i, b_i^a is the accelerometer zero position at time i, ΔR_bibk is the pre-integrated rotation increment of time k relative to time i, and Δv_ik is the pre-integrated velocity increment of time k relative to time i.
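The velocity and translation increments can be illustrated with a deliberately simplified one-dimensional sketch that drops the rotation terms (ΔR treated as identity); this simplification is an assumption for clarity, not the full SO(3) pre-integration:

```python
def preintegrate_1d(accels, bias_a, dt):
    """1-D pre-integration of the velocity/translation increments:
        dv_ij = sum_k (a_k - b) * dt
        dp_ij = sum_k [ dv_ik * dt + 0.5 * (a_k - b) * dt^2 ]
    Rotation is ignored (DeltaR treated as identity) for clarity."""
    dv, dp = 0.0, 0.0
    for a in accels:
        dp += dv * dt + 0.5 * (a - bias_a) * dt * dt  # uses dv_ik so far
        dv += (a - bias_a) * dt
    return dv, dp

# constant bias-corrected acceleration of 1 m/s^2 over 1 s sampled at 10 Hz:
# the increments match the continuous-time v = a*t and p = 0.5*a*t^2
dv, dp = preintegrate_1d([1.1] * 10, 0.1, 0.1)
```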

The state quantities are then constrained to establish the inertial residuals. In this embodiment, they are

r_ΔRij = Log( ΔR̃_ij^T · R_wbi^T · R_wbj )
r_Δvij = R_wbi^T · (v_j − v_i − g·Δt_ij) − Δṽ_ij
r_Δpij = R_wbi^T · (p_j − p_i − v_i·Δt_ij − ½·g·Δt_ij²) − Δp̃_ij,

where r_ΔRij, r_Δvij and r_Δpij are the rotation, velocity and translation residuals from time i to time j.
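Continuing the one-dimensional simplification (rotation dropped), the velocity and translation residuals compare the state change between keyframes against the pre-integrated IMU increments; this is a schematic sketch, not the full SO(3) formulation:

```python
def inertial_residuals_1d(v_i, v_j, p_i, p_j, g, dt_ij, dv_meas, dp_meas):
    """1-D velocity/translation residuals:
        r_v = (v_j - v_i - g*dt_ij) - dv_meas
        r_p = (p_j - p_i - v_i*dt_ij - 0.5*g*dt_ij^2) - dp_meas
    Both vanish when the states agree with the pre-integrated increments."""
    r_v = (v_j - v_i - g * dt_ij) - dv_meas
    r_p = (p_j - p_i - v_i * dt_ij - 0.5 * g * dt_ij ** 2) - dp_meas
    return r_v, r_p

# consistent example (g set to 0 for simplicity): states match the
# pre-integrated dv = 1.0 m/s and dp = 0.5 m, so both residuals vanish
r_v, r_p = inertial_residuals_1d(0.0, 1.0, 0.0, 0.5, 0.0, 1.0, 1.0, 0.5)
```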

Further, in the present invention, after the inertial residuals are established, the IMU inertial parameters are obtained by a two-pass inertial sensor initialization method based on the inertial residuals.

In the present invention, as shown in Figure 2, obtaining the IMU inertial parameters by the two-pass inertial sensor initialization method specifically comprises: constructing a graph optimization problem from the scale estimate, the updated visual measurement information and the inertial residuals, and performing the first IMU initialization to obtain the inertial sensor biases; updating the current pose information according to the gravity direction and updating the inertial residuals according to the sensor biases; constructing a graph optimization problem from the sensor biases and the updated inertial residuals, and performing the second IMU initialization to obtain the IMU inertial parameters; and updating the current pose information according to the gravity direction and the scale, and updating the inertial residuals according to the sensor biases in the IMU inertial parameters.

As a specific embodiment of the present invention, for the first IMU initialization, considering the pure-inertial motion problem, the IMU parameters involved are $y_k=\{s,\,R_{wg},\,b,\,v_{0:k}\}$, where $y_k$ denotes the IMU parameters of the k-th keyframe in the first IMU initialization; $R_{wg}$ is the gravity rotation from the standard gravity frame to the world coordinate system; $v_{0:k}$ are the body-frame velocities between keyframe 0 and the k-th keyframe, obtained from the initial inertial velocity $v_i$ in the body frame at time i; and b is the bias. In this embodiment, the gravity vector in the world coordinate system is obtained from $R_{wg}$ as $g=R_{wg}(0,0,G)^T$.

Considering only inertial sensor initialization, denote the sensor measurements as $I_{0:k}$. The graph optimization problem is posed as a maximum a posteriori estimate, $p(y_k\mid I_{0:k})\propto p(I_{0:k}\mid y_k)\,p(y_k)$, where $p(y_k)$ is the prior, $p(y_k\mid I_{0:k})$ is the probability of the IMU parameters $y_k$ given the measurements $I_{0:k}$, and $p(I_{0:k}\mid y_k)$ is the likelihood of the measurements given the parameters.

Solving the graph optimization problem and exploiting the independence of the measurements of each frame, the likelihood can be factorized as

$$p(I_{0:k}\mid y_k)=\prod_{i=1}^{k}p\big(I_{i-1,i}\mid s,\,R_{wg},\,b,\,v_{i-1},\,v_i\big),$$

where $p(I_{i-1,i}\mid s,R_{wg},b,v_{i-1},v_i)$ is the probability of observing the sensor measurement $I_{i-1,i}$ between keyframe i-1 and keyframe i, given the scale s, the gravity rotation $R_{wg}$, the bias b, and the velocities $v_{i-1}$ and $v_i$.

To solve for the maximum of the posterior probability, the IMU parameters of the k-th keyframe can be expressed as

$$y_k^*=\arg\max_{y_k}\,p(y_k\mid I_{0:k})=\arg\max_{y_k}\,p(I_{0:k}\mid y_k)\,p(y_k),$$

where $y_k^*$ denotes the IMU parameters of the k-th keyframe actually obtained in the first IMU initialization.

Taking the negative logarithm of the maximum a posteriori estimate, and assuming that both the IMU pre-integration and the prior follow Gaussian distributions, gives

$$y_k^*=\arg\min_{y_k}\Big(\|r_p\|_{\Sigma_p}^2+\sum_{i=1}^{k}\|r_{I_{i-1,i}}\|_{\Sigma_{I_{i-1,i}}}^2\Big),$$

where $r_p$ is the prior residual, which tends to 0 and is neglected, and $r_{I_{i-1,i}}=\big[r_{\Delta R_{i-1,i}},\,r_{\Delta v_{i-1,i}},\,r_{\Delta p_{i-1,i}}\big]$ is the inertial residual from time i-1 to time i.
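The minimized objective is simply a sum of squared Mahalanobis norms; a minimal sketch (the residuals and covariances passed in here are illustrative placeholders, not values from the method):

```python
import numpy as np

def map_cost(prior_res, prior_cov, inertial_res, inertial_covs):
    """Negative log of the Gaussian MAP objective, up to a constant:
    ||r_p||^2_{Sigma_p} + sum_i ||r_{I_{i-1,i}}||^2_{Sigma_{I_{i-1,i}}}."""
    def mahal2(r, S):
        # Squared Mahalanobis norm r^T S^{-1} r, via a linear solve.
        return float(r @ np.linalg.solve(S, r))
    return mahal2(prior_res, prior_cov) + sum(
        mahal2(r, S) for r, S in zip(inertial_res, inertial_covs))
```

With identity covariances this reduces to an ordinary sum of squared residual norms, which makes the link between the probabilistic derivation and a least-squares solver explicit.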

With the above derivation, the scale estimate s and the body-frame velocities $v_{0:k}$ between keyframe 0 and the k-th keyframe are fixed, and the graph optimization

$$\{R_{wg}^*,\,b^{g*},\,b^{a*}\}=\arg\min_{R_{wg},\,b^g,\,b^a}\Big(\|r_p\|_{\Sigma_p}^2+\sum_{i=1}^{k}\|r_{I_{i-1,i}}\|_{\Sigma_{I_{i-1,i}}}^2\Big)$$

is solved to obtain the gravity rotation $R_{wg}$ from the standard gravity frame to the world coordinate system, the gyroscope bias $b^g$, and the accelerometer bias $b^a$.

In addition, in the present invention, after the first IMU initialization obtains the inertial sensor biases, the current pose information is updated according to the gravity direction and the inertial residuals are updated according to the biases. As a specific embodiment of the present invention, the pre-integration of the inertial sensor is recomputed with the updated biases.

As a specific embodiment of the present invention, for the second IMU initialization the pure-inertial motion problem is again considered; the IMU parameters involved are $\bar y_k=\{\bar s,\,R_{wg},\,b,\,v_{0:k}\}$, where $\bar y_k$ denotes the IMU parameters of the k-th keyframe in the second IMU initialization and $\bar s$ is the scale factor to be solved (the overbar distinguishes second-pass quantities from those of the first pass).

Similarly to the first IMU initialization, one can derive

$$\bar y_k^*=\arg\min_{\bar y_k}\Big(\|r_p\|_{\Sigma_p}^2+\sum_{i=1}^{k}\|\bar r_{I_{i-1,i}}\|_{\Sigma_{I_{i-1,i}}}^2\Big),$$

where $\bar y_k^*$ denotes the IMU parameters of the k-th keyframe actually obtained in the second IMU initialization and $\bar r_{I_{i-1,i}}$ is the new inertial residual from time i-1 to time i that contains the scale information. Since the scale must be optimized simultaneously, the new inertial residuals are obtained by scaling the vision-derived velocities and translations:

$$\bar r_{\Delta R_{ij}}=\mathrm{Log}\big(\Delta R_{b_ib_j}^T\,R_{wb_i}^T R_{wb_j}\big),\quad \bar r_{\Delta v_{ij}}=R_{wb_i}^T\big(\bar s\,v_j-\bar s\,v_i-g\,\Delta t_{ij}\big)-\Delta v_{ij},\quad \bar r_{\Delta p_{ij}}=R_{wb_i}^T\big(\bar s\,p_j-\bar s\,p_i-\bar s\,v_i\,\Delta t_{ij}-\tfrac12 g\,\Delta t_{ij}^2\big)-\Delta p_{ij},$$

where $\bar r_{\Delta R_{ij}}$, $\bar r_{\Delta v_{ij}}$ and $\bar r_{\Delta p_{ij}}$ are the new rotation, velocity and translation residuals.

In addition, in the present invention, after the second IMU initialization obtains the IMU inertial parameters, the current pose information is updated according to the gravity direction and the scale, and the inertial residuals are updated according to the inertial sensor biases in the IMU inertial parameters. As a specific embodiment of the present invention, the pre-integration of the inertial sensor is recomputed with these biases.

Further, in the present invention, after the IMU inertial parameters are obtained by the two-pass inertial sensor initialization method, joint inertial-visual optimization is performed based on the IMU inertial parameters and the visual images to complete the pose estimation, and the UAV is positioned to complete the fused inertial, visual and altitude navigation of the UAV.

In the present invention, the joint inertial-visual optimization for pose estimation based on the IMU inertial parameters and the visual images specifically comprises: constructing the inertial error from the IMU inertial parameters; constructing the visual reprojection error from the ORB feature matching results of the visual images; and constructing a visual-inertial joint optimization function from the inertial error and the visual reprojection error, optimizing the pose to obtain the globally optimal camera pose sequence and scene map points.

As a specific embodiment of the present invention, the inertial error can be constructed from the gravity rotation, velocity and biases among the obtained IMU inertial parameters. The visual-inertial joint optimization function can be constructed following the method in ORB-SLAM3. The nonlinear optimization framework g2o can be used, iterating with the Levenberg-Marquardt method until the error function value falls below a set threshold, yielding the globally optimal camera pose sequence and scene map points.
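The Levenberg-Marquardt iteration that g2o performs can be sketched in miniature as a generic damped Gauss-Newton loop (this is not the g2o or ORB-SLAM3 code itself; `residual` and `jacobian` are user-supplied callables):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=100, lam=1e-3, tol=1e-12):
    """Damped Gauss-Newton: solve (J^T J + lam*I) dx = -J^T r, accept the
    step only if the cost decreases, otherwise raise the damping lam."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5   # step accepted: trust the model more
        else:
            lam *= 10.0                  # step rejected: damp harder
        if np.linalg.norm(dx) < tol:
            break
    return x
```

On a linear least-squares problem the loop converges to the exact solution; in the joint optimization the residual vector stacks the inertial and visual reprojection errors.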

Further, in the present invention, the inertial, visual and altitude information fusion navigation method for UAVs also comprises: after the UAV positioning is completed, judging the current tracking and positioning state of the UAV. If tracking is lost, the navigation system enters a relocalization stage and judges whether the image frame recognized by the visual sensor matches the current local map. If it matches, UAV positioning continues based on the coordinates of the current local map and the UAV scale information; if it does not match, the map is rebuilt and the method returns to obtaining the scale estimate from the real-time UAV altitude information until the positioning is completed.

In the relocalization stage, when the image frame recognized by the visual sensor does not match the current local map, a new map is constructed, set as the current active map, and a scale update mechanism is established. As a specific embodiment of the present invention, the scale update mechanism specifically comprises: the visual sensor collects visual measurements for a preset duration, the scale factor is re-solved according to the preceding steps, and the UAV pose estimation and positioning are completed with the new scale information. In this embodiment, the preset duration can be set according to the needs of the actual visual imagery, for example to 10 s.
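This chunk does not spell out the estimator used when the scale factor is re-solved; one plausible least-squares form, comparing the true altitude readings with the corresponding map-estimated depths (function and variable names are hypothetical), is:

```python
def scale_from_height(true_heights, map_depths):
    """Closed-form least-squares scale s minimizing sum((s * d_map - h_true)^2)
    over paired altitude/map-depth samples collected during the preset window."""
    num = sum(h * d for h, d in zip(true_heights, map_depths))
    den = sum(d * d for d in map_depths)
    return num / den
```

Averaging over a window of samples (the 10 s mentioned above) rather than a single pair makes the estimate robust to noise in individual depth measurements.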

The inertial, visual and altitude information fusion navigation method for UAVs of the present invention can recover relatively accurate scale information in aerial scenes, and the IMU biases are solved with high accuracy. During tracking and positioning, the pose and trajectory estimation of the UAV is highly accurate, real-time and robust. Addressing the difficulty of estimating scale in real high-altitude UAV application scenarios, the invention introduces altitude information on the basis of the ORB-SLAM algorithm, uses visual measurement to obtain three-dimensional map point coordinates, and combines the true altitude with the map-estimated depth to effectively solve the scale factor of the scene and recover its scale information, improving the overall accuracy of trajectory estimation in high-altitude tracking and positioning. Addressing the large inertial sensor biases that occur in real high-altitude application scenarios, the invention proposes a new inertial sensor initialization method: two graph optimizations effectively obtain the sensor biases and refine the scene scale information, improving the overall accuracy and robustness of six-degree-of-freedom pose estimation in high-altitude tracking and positioning.

According to another aspect of the present invention, a storage device is provided, which stores a plurality of programs for implementing the inertial, visual and altitude information fusion navigation method for UAVs as described above.

As a specific embodiment of the present invention, the storage device may be a random access memory (RAM), a memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form known in the technical field.

According to another aspect of the present invention, a processing device is provided, comprising a processor and the storage device described above, the processor being configured to load and execute the plurality of programs in the storage device to implement the inertial, visual and altitude information fusion navigation method for UAVs as described above.

To sum up, the present invention provides an inertial, visual and altitude information fusion navigation method for UAVs. The method introduces real-time UAV altitude information, combines it with visual images, and uses a two-pass inertial sensor initialization method to obtain the IMU inertial parameters, completing UAV pose estimation and positioning. The invention can recover relatively accurate scale information in aerial scenes, solves the IMU biases with high accuracy, and achieves high accuracy, real-time performance and robustness in pose and trajectory estimation during tracking and positioning. Compared with the prior art, the technical solution of the present invention solves the technical problems of low pose estimation accuracy in existing navigation methods, tracking loss caused by inertial sensor divergence during aerial flight, and heavy computational load.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within its protection scope.

Claims (6)

1. The unmanned aerial vehicle-oriented inertial, visual and height information fusion navigation method is characterized by comprising the following steps of: acquiring a scale estimated value according to the real-time altitude information of the unmanned aerial vehicle; updating visual measurement information according to the scale estimation value; the visual sensor acquires a visual image, and performs feature extraction and matching according to the visual image to obtain a speed initial value of the inertial sensor; constructing an inertial residual error; acquiring inertial parameters of the IMU by adopting a twice inertial sensor initialization method according to the inertial residual error; performing inertial vision joint optimization according to the inertial parameters of the IMU and the visual images to complete pose estimation, and performing unmanned aerial vehicle positioning to complete inertial, visual and altitude information fusion navigation of the unmanned aerial vehicle;
the acquiring the inertial parameters of the IMU by adopting the twice inertial sensor initialization method according to the inertial residuals specifically comprises: constructing a graph optimization problem according to the scale estimate, the updated visual measurement information and the inertial residuals, and carrying out a first IMU initialization to obtain the inertial sensor biases; updating the current pose information according to the gravity direction, and updating the inertial residuals according to the inertial sensor biases; constructing a graph optimization problem according to the inertial sensor biases and the updated inertial residuals, and carrying out a second IMU initialization to obtain the IMU inertial parameters; and updating the current pose information according to the gravity direction and the scale, and updating the inertial residuals according to the inertial sensor biases in the IMU inertial parameters;
in the first IMU initialization, the scale estimate s and the body-frame velocities $v_{0:k}$ between keyframe 0 and the k-th keyframe are fixed, and the graph optimization

$$y_k^*=\arg\min_{y_k}\Big(\|r_p\|_{\Sigma_p}^2+\sum_{i=1}^{k}\|r_{I_{i-1,i}}\|_{\Sigma_{I_{i-1,i}}}^2\Big)$$

is solved to obtain the gravity rotation $R_{wg}$ from the standard gravity frame to the world coordinate system, the gyroscope bias $b^g$ and the accelerometer bias $b^a$, wherein $y_k^*$ denotes the IMU parameters of the k-th keyframe actually obtained in the first IMU initialization, the IMU parameters comprising the scale estimate s, the gravity rotation $R_{wg}$, the gyroscope bias $b_i^g$ and accelerometer bias $b_i^a$ at time i, and the body-frame velocities $v_{0:k}$; $r_p$ is the prior residual, tending to 0; $r_{I_{i-1,i}}$ is the inertial residual from time i-1 to time i, composed of the rotation residual $r_{\Delta R_{ij}}$, the velocity residual $r_{\Delta v_{ij}}$ and the translation residual $r_{\Delta p_{ij}}$ from time i to time j:

$$r_{\Delta R_{ij}}=\mathrm{Log}\big(\Delta R_{b_ib_j}^T\,R_{wb_i}^T R_{wb_j}\big),\quad r_{\Delta v_{ij}}=R_{wb_i}^T\big(v_j-v_i-g\,\Delta t_{ij}\big)-\Delta v_{ij},\quad r_{\Delta p_{ij}}=R_{wb_i}^T\big(p_j-p_i-v_i\,\Delta t_{ij}-\tfrac12 g\,\Delta t_{ij}^2\big)-\Delta p_{ij};$$

$\bar s$ is the scale factor to be solved in the second IMU initialization, and the new rotation, velocity and translation residuals containing the scale information are

$$\bar r_{\Delta R_{ij}}=\mathrm{Log}\big(\Delta R_{b_ib_j}^T\,R_{wb_i}^T R_{wb_j}\big),\quad \bar r_{\Delta v_{ij}}=R_{wb_i}^T\big(\bar s\,v_j-\bar s\,v_i-g\,\Delta t_{ij}\big)-\Delta v_{ij},\quad \bar r_{\Delta p_{ij}}=R_{wb_i}^T\big(\bar s\,p_j-\bar s\,p_i-\bar s\,v_i\,\Delta t_{ij}-\tfrac12 g\,\Delta t_{ij}^2\big)-\Delta p_{ij},$$

wherein $\tilde\omega$ is the angular velocity measured by the sensor, $b_i^g$ is the gyroscope bias at time i, $\Delta t$ is the IMU sampling interval, $R_{wb_i}$ and $R_{wb_j}$ are the rotations from the body coordinate system to the world coordinate system at times i and j, $v_i$ and $v_j$ are the initial velocities at times i and j, g is the gravitational acceleration, $\Delta t_{ij}$ is the time from time i to time j, $\Delta R_{b_ib_k}$ is the pre-integrated rotation increment at time k relative to time i, $\tilde a$ is the measured acceleration, $b_i^a$ is the accelerometer bias at time i, $p_i$ and $p_j$ are the translations at times i and j, and $\Delta v_{ik}$ is the pre-integrated velocity increment at time k relative to time i.
2. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method of claim 1, wherein obtaining the initial velocity of the inertial sensor by feature extraction and matching according to the visual image specifically comprises: extracting and matching features according to the visual image to obtain the camera pose of the i-th keyframe; converting the camera pose of the i-th keyframe into the body coordinate system of the IMU; and obtaining the initial velocity of the inertial sensor in the inertial body coordinate system according to $v_i=(p_i-p_{i-1})/\delta t$, wherein $v_i$ is the initial velocity of the inertial sensor in the body coordinate system at time i, $p_i$ and $p_{i-1}$ are the translations at times i and i-1, and $\delta t$ is the motion time.
6. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method of any one of claims 1 to 5, further comprising: after the unmanned aerial vehicle positioning is finished, judging the current unmanned aerial vehicle tracking and positioning state; if the current tracking and positioning state is lost, the navigation system enters a repositioning stage and judges whether an image frame identified by the visual sensor matches the current local map; if so, unmanned aerial vehicle positioning continues according to the coordinates of the current local map and the unmanned aerial vehicle scale information; if not, the map is reconstructed, and the method returns to acquiring the scale estimate according to the real-time altitude information of the unmanned aerial vehicle until the positioning is completed.
CN202210122642.3A | Priority date 2022-02-09 | Filing date 2022-02-09 | Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method | Active | Granted as CN114485649B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210122642.3A | 2022-02-09 | 2022-02-09 | Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method

Publications (2)

Publication Number | Publication Date
CN114485649A (en) | 2022-05-13
CN114485649B (en) | 2023-09-12

Family

ID=81478209

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210122642.3A (Active; granted as CN114485649B) | Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method | 2022-02-09 | 2022-02-09

Country Status (1)

Country | Link
CN (1) | CN114485649B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114910069B (en)* | 2022-06-01 | 2025-01-10 | 深圳市边界智控科技有限公司 | A fusion positioning initialization system and method for unmanned aerial vehicle
CN116295342A (en)* | 2023-03-15 | 2023-06-23 | 中国民航大学 | A Multi-Sensing State Estimator for Aircraft Surveying
CN120252746B (en)* | 2025-06-06 | 2025-09-23 | 中国科学院国家空间科学中心 | Remote long-endurance integrated navigation method and system combining visual inertia joint optimization and image matching

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107869989A (en)* | 2017-11-06 | 2018-04-03 | 东北大学 | A positioning method and system based on visual inertial navigation information fusion
US10032276B1 (en)* | 2016-08-29 | 2018-07-24 | PerceptIn, Inc. | Visual-inertial positional awareness for autonomous and non-autonomous device
CN110388917A (en)* | 2018-04-23 | 2019-10-29 | 北京京东尚科信息技术有限公司 | Aircraft monocular vision scale estimation method and device, aircraft guidance system and aircraft
CN112240768A (en)* | 2020-09-10 | 2021-01-19 | 西安电子科技大学 | Visual-inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
CN112596071A (en)* | 2020-11-02 | 2021-04-02 | 中国兵器工业计算机应用技术研究所 | Unmanned aerial vehicle autonomous positioning method and device and unmanned aerial vehicle
CN113029134A (en)* | 2021-03-02 | 2021-06-25 | 浙江商汤科技开发有限公司 | Visual inertial system initialization method and device, electronic equipment and storage medium
CN113223064A (en)* | 2020-01-21 | 2021-08-06 | 北京初速度科技有限公司 | Method and device for estimating scale of visual inertial odometer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An autonomous landing navigation algorithm for UAVs based on inertial/visual information fusion (一种基于惯性/视觉信息融合的无人机自主着陆导航算法); Liu Chang et al.; Navigation Positioning and Timing (导航定位与授时), Vol. 3, No. 6, pp. 6-11 *

Also Published As

Publication number | Publication date
CN114485649A | 2022-05-13

Similar Documents

Publication | Title
CN112347840B (en) | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN114485649B (en) | Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method
CN110068335B (en) | A method and system for real-time positioning of UAV swarms in GPS-denied environment
CN106708066B (en) | Autonomous landing method of UAV based on vision/inertial navigation
CN107747941B (en) | Binocular vision positioning method, device and system
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium
CN111288989B (en) | Visual positioning method for small unmanned aerial vehicle
CN111156998A (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN110617814A (en) | Monocular vision and inertial sensor integrated remote distance measuring system and method
CN106780729A (en) | Batch-processing three-dimensional reconstruction method for unmanned aerial vehicle sequential images
CN110260861B (en) | Pose determination method and device, and odometer
CN110749308B (en) | SLAM-oriented outdoor localization method using consumer-grade GPS and 2.5D building models
CN107504969A (en) | Quadrotor indoor navigation method based on the combination of vision and inertia
CN115406447B (en) | Autonomous positioning method of quad-rotor unmanned aerial vehicle based on visual inertia in rejection environment
CN116989772B (en) | An air-ground multi-modal multi-agent collaborative positioning and mapping method
CN111083633B (en) | Mobile terminal positioning system, its establishment method, and mobile terminal positioning method
CN115560760A (en) | Unmanned aerial vehicle-oriented vision/laser ranging high-altitude navigation method
CN117421384A (en) | Visual inertia SLAM system sliding window optimization method based on common view projection matching
WO2023087681A1 (en) | Positioning initialization method and apparatus, computer-readable storage medium and computer program product
CN114529585A (en) | Mobile equipment autonomous positioning method based on depth vision and inertial measurement
Xian et al. | Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN110598370B (en) | Robust attitude estimation for multi-rotor UAV based on fusion of SIP and EKF
Kehoe et al. | State estimation using optical flow from parallax-weighted feature tracking
CN115482282A (en) | Dynamic SLAM method with multi-target tracking capability in autonomous driving scenarios

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
