CN114485649A - Unmanned aerial vehicle-oriented inertial, visual and height information fusion navigation method - Google Patents

Unmanned aerial vehicle-oriented inertial, visual and height information fusion navigation method

Info

Publication number
CN114485649A
CN114485649A
Authority
CN
China
Prior art keywords
inertial
time
visual
imu
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210122642.3A
Other languages
Chinese (zh)
Other versions
CN114485649B (en)
Inventor
尚克军
李至
扈光锋
庄广琛
裴新凯
李茜茜
段昊雨
明丽
焦浩
刘崇亮
王海军
王大元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Automation Control Equipment Institute BACEI
Original Assignee
Beijing Automation Control Equipment Institute BACEI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Automation Control Equipment Institute (BACEI)
Priority to CN202210122642.3A
Publication of CN114485649A
Application granted
Publication of CN114485649B
Status: Active
Anticipated expiration


Abstract

The invention provides an inertial, visual and altitude information fusion navigation method for unmanned aerial vehicles (UAVs), comprising the following steps: obtaining a scale estimate from the real-time altitude of the UAV; updating the visual measurements with the scale estimate; collecting visual images with a visual sensor and performing feature extraction and matching on them to obtain an initial velocity for the inertial sensor; constructing an inertial residual; obtaining the IMU inertial parameters from the inertial residual with a two-stage inertial-sensor initialization method; and performing joint inertial-visual optimization on the IMU inertial parameters and the visual images to complete pose estimation and position the UAV, thereby completing the UAV's inertial, visual and altitude fusion navigation. The technical scheme of the invention addresses the technical problems of prior-art navigation methods: low pose-estimation accuracy, tracking loss caused by inertial-sensor divergence during flight, and high computational complexity.

Description

Translated from Chinese
Inertial, Visual and Altitude Information Fusion Navigation Method for UAVs

Technical Field

The invention relates to the technical field of computer vision, and in particular to an inertial, visual and altitude information fusion navigation method for unmanned aerial vehicles (UAVs).

Background

Computer vision (CV) uses computers and related equipment to simulate biological vision. Its main task is to process captured images or video to recover the three-dimensional structure of the corresponding scene, so as to identify, track or measure targets.

UAV navigation comprises the technologies and methods that guide an aircraft through its mission at a given speed and heading; one of its most critical components is tracking and positioning the UAV in real time. During flight, high-accuracy six-degree-of-freedom (6-DOF) pose information is obtained in real time, converted into the UAV's spatial latitude and longitude coordinates, and fed back to the flight control system, enabling autonomous navigation and autonomous landing.

In recent years, researchers have commonly used simultaneous localization and mapping (SLAM) to obtain the vehicle's own pose in real time in a real scene while building a map of the scene's spatial geometry. Real-time positioning with SLAM, however, still suffers from the following problems: monocular/stereo methods track poorly, or even lose tracking, under fast motion and in weakly textured environments; inertial-sensor (IMU) methods are strongly affected by noise, accumulate large errors, and are prone to navigation drift; and differential-GPS methods can usually provide accurate positions, but in most UAV applications and combat scenarios satellite signals may be denied, so GPS information cannot be obtained in real time.

In general, navigating and positioning a UAV with a single sensor cannot meet practical application requirements. Inertial/visual fusion navigation is the most widely used approach, but existing SLAM methods yield low pose-trajectory accuracy in UAV scenarios, lose tracking easily, and struggle to deliver real-time positioning and navigation at low computational cost.

Summary of the Invention

The present invention aims to solve at least one of the technical problems existing in the prior art.

According to one aspect of the present invention, a UAV-oriented inertial, visual and altitude information fusion navigation method is provided, comprising: obtaining a scale estimate from the real-time altitude of the UAV and updating the visual measurements with it; collecting visual images with a visual sensor and performing feature extraction and matching to obtain an initial velocity for the inertial sensor; constructing an inertial residual; obtaining the IMU inertial parameters from the inertial residual with a two-stage inertial-sensor initialization method; and performing joint inertial-visual optimization on the IMU inertial parameters and the visual images to complete pose estimation and position the UAV, thereby completing the UAV's inertial, visual and altitude fusion navigation.

Further, obtaining the scale estimate from the real-time altitude of the UAV specifically comprises: during visual-sensor tracking, extracting ORB features from the images for feature matching and obtaining the 3D map-point depth of the i-th keyframe from two-frame matching; the navigation and positioning system obtaining the height of the i-th keyframe from the UAV's real-time altitude; and obtaining the scale estimate as s = h/d, where s is the scale estimate, h is the height of the i-th keyframe, d is the 3D map-point depth of the i-th keyframe, and i = 1, 2, ..., n, with n an integer.
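As an illustration of this step, a minimal sketch of the per-keyframe scale estimate s = h/d and the subsequent rescaling of the visual measurements (all function and variable names are hypothetical, not from the patent; averaging over keyframes is an added assumption for robustness):

```python
import numpy as np

def estimate_scale(heights, depths):
    """Per-keyframe scale s_i = h_i / d_i, averaged over keyframes.

    heights: altimeter height h of each keyframe (metres)
    depths:  triangulated 3D map-point depth d of each keyframe
             (in the arbitrary monocular unit)
    """
    heights = np.asarray(heights, dtype=float)
    depths = np.asarray(depths, dtype=float)
    s = heights / depths          # s = h / d for every keyframe
    return float(np.mean(s))      # average for robustness (assumption)

def rescale_measurements(positions, velocities, map_points, s):
    """Update the visual measurements (positions, velocities, 3D map
    points) with the metric scale estimate."""
    return s * positions, s * velocities, s * map_points
```

A larger set of keyframes simply extends the two input arrays; the rescaling is applied once the scale estimate stabilises.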

Further, performing feature extraction and matching on the visual images to obtain the initial velocity of the inertial sensor specifically comprises: performing feature extraction and matching on the visual images to obtain the camera pose of the i-th keyframe; transforming the camera pose of the i-th keyframe into the IMU body frame; and obtaining the initial velocity of the inertial sensor in the body frame as

v_i = (p_i − p_{i−1}) / δt

where v_i is the initial velocity of the inertial sensor in the body frame at time i, p_i and p_{i−1} are the translations at times i and i−1, and δt is the motion time between them.

Further, constructing the inertial residual specifically comprises: computing the IMU motion-measurement state from the IMU motion process, obtaining the pre-integrated increments of that state, and building the inertial residual from it.

Further, the inertial residual is built as

r_{ΔR_ij} = Log( ( ∏_{k=i}^{j−1} Exp( (ω̃_k − b_i^g) Δt ) )^T · R_wbi^T · R_wbj )
r_{Δv_ij} = R_wbi^T ( v_j − v_i − g Δt_ij ) − Σ_{k=i}^{j−1} ΔR_bibk ( ã_k − b_i^a ) Δt
r_{Δp_ij} = R_wbi^T ( p_j − p_i − v_i Δt_ij − ½ g Δt_ij² ) − Σ_{k=i}^{j−1} [ Δv_ik Δt + ½ ΔR_bibk ( ã_k − b_i^a ) Δt² ]

where r_{ΔR_ij}, r_{Δv_ij} and r_{Δp_ij} are the rotation, velocity and translation residuals from time i to time j; ω̃ is the angular velocity measured by the sensor; b_i^g is the zero bias of the gyroscope at time i; Δt is the time from time i to time k; R_wbi and R_wbj are the rotations from the body frame to the world frame at times i and j; v_i and v_j are the initial velocities at times i and j; g is the gravitational acceleration; Δt_ij is the time from time i to time j; ΔR_bibk is the pre-integrated rotation increment from time i to time k; ã is the measured acceleration; b_i^a is the zero bias of the accelerometer at time i; p_i and p_j are the translations at times i and j; and Δv_ik is the pre-integrated velocity increment from time i to time k.

Further, obtaining the IMU inertial parameters from the inertial residual with the two-stage inertial-sensor initialization method specifically comprises: building a graph-optimization problem from the scale estimate, the updated visual measurements and the inertial residual, and performing the first IMU initialization to obtain the inertial-sensor zero biases; updating the current pose with the gravity direction and updating the inertial residual with the zero biases; building a graph-optimization problem from the zero biases and the updated inertial residual, and performing the second IMU initialization to obtain the IMU inertial parameters; and updating the current pose with the gravity direction and scale, and updating the inertial residual with the inertial-sensor zero biases among the IMU inertial parameters.

Further, in the first IMU initialization, the scale estimate s and the body-frame velocities v_{0:k} of keyframes 0 through k are held fixed, and the graph optimization

X_k* = argmin_{X_k} ( ||r_p||² + Σ_{i=1}^{k} ||r_{I_{i−1,i}}||² )

is solved to obtain the gravity rotation R_wg from the standard gravity frame to the world frame, the gyroscope zero bias b^g and the accelerometer zero bias b^a, where X_k is the set of IMU parameters actually obtained by the first IMU initialization for the k-th keyframe, comprising the scale estimate s, the gravity rotation R_wg, the gyroscope zero bias b_i^g and accelerometer zero bias b_i^a at time i, and the body-frame velocities v_{0:k} of keyframes 0 through k; r_p is the prior residual, which tends to 0; and r_{I_{i−1,i}} is the inertial residual from time i−1 to time i.
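The first-stage optimization jointly estimates the gravity direction and the zero biases. As a hedged, much-reduced illustration, the sketch below solves only the rotation-residual part for the gyroscope zero bias with `scipy.optimize.least_squares`; the restriction to the gyroscope bias, the fixed sample interval, and all names are assumptions, not the patent's actual solver:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def preint_rotation(gyro, dt, bias):
    """Pre-integrate gyro samples into a relative rotation, bias removed."""
    R = np.eye(3)
    for w in gyro:
        R = R @ Rotation.from_rotvec((w - bias) * dt).as_matrix()
    return R

def estimate_gyro_bias(segments, rel_rotations, dt):
    """segments[k]: raw gyro samples between keyframes k and k+1;
    rel_rotations[k]: visually observed relative rotation R_i^T R_j
    for that pair. Minimises the rotation residuals over the bias."""
    def residuals(bias):
        r = []
        for gyro, R_rel in zip(segments, rel_rotations):
            dR = preint_rotation(gyro, dt, bias)
            # rotation residual Log(dR^T * R_rel) as a rotation vector
            r.extend(Rotation.from_matrix(dR.T @ R_rel).as_rotvec())
        return np.asarray(r)
    return least_squares(residuals, np.zeros(3)).x
```

In the full initialization the same least-squares machinery would also carry the gravity rotation, the accelerometer bias and the prior residual as variables.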

Further, in the second IMU initialization, the graph optimization

X_k* = argmin_{X_k} ( ||r_p||² + Σ_{i=1}^{k} ||r'_{I_{i−1,i}}||² )

is solved to obtain the IMU inertial parameters, where X_k is the set of IMU parameters actually obtained by the second IMU initialization for the k-th keyframe, and r'_{I_{i−1,i}} is the new inertial residual from time i−1 to time i incorporating the scale information, built from the scale factor s to be solved and composed of a new rotation residual, a new velocity residual and a new translation residual.

Further, performing joint inertial-visual optimization on the IMU inertial parameters and the visual images to complete pose estimation specifically comprises: constructing the inertial error from the IMU inertial parameters; constructing the visual reprojection error from the ORB feature-matching results of the visual images; and constructing a joint visual-inertial optimization function from the inertial error and the visual reprojection error, and optimizing the poses to obtain the globally optimal camera pose sequence and scene map points.

Further, the UAV-oriented inertial, visual and altitude information fusion navigation method also comprises: after the UAV has been positioned, judging the current UAV tracking state; if tracking has been lost, the navigation system enters a relocalization stage and judges whether the image frame recognized by the visual sensor matches the current local map; if it matches, UAV positioning continues with the coordinates of the current local map and the UAV scale information; if it does not match, the map is rebuilt and the method returns to obtaining the scale estimate from the UAV's real-time altitude until UAV positioning is completed.
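The relocalization logic above can be sketched as a small decision routine; the map/matching interface and all names here are hypothetical, not from the patent:

```python
def relocalize(frame, local_map, scale, rebuild_map, estimate_scale_from_height):
    """Relocalization decision flow after tracking loss.

    frame:       current image frame from the visual sensor
    local_map:   current local map, exposing match(frame) -> pose or None
    scale:       current UAV scale estimate
    rebuild_map: callable that builds a fresh map from the frame
    estimate_scale_from_height: callable re-estimating scale from altitude
    """
    pose = local_map.match(frame)
    if pose is not None:
        # Matched: keep positioning in the current map with the known scale.
        return pose, local_map, scale
    # No match: rebuild the map and re-estimate scale from real-time height.
    new_map = rebuild_map(frame)
    new_scale = estimate_scale_from_height(frame)
    return new_map.match(frame), new_map, new_scale
```

The two return paths correspond directly to the "matches / does not match" branches of the method.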

The technical scheme of the present invention provides a UAV-oriented inertial, visual and altitude information fusion navigation method that introduces the UAV's real-time altitude, combines it with visual images, and obtains the IMU inertial parameters with a two-stage inertial-sensor initialization, completing UAV pose estimation and positioning. The invention recovers relatively accurate scale information in aerial scenes, solves the IMU zero biases with high accuracy, and achieves high accuracy, real-time performance and robustness in UAV pose and trajectory estimation during tracking and positioning. Compared with the prior art, it solves the technical problems of prior-art navigation methods: low pose-estimation accuracy, tracking loss caused by inertial-sensor divergence during flight, and high computational complexity.

Brief Description of the Drawings

The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain its principles. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 shows a schematic flowchart of the UAV-oriented inertial, visual and altitude information fusion navigation method provided according to a specific embodiment of the present invention;

FIG. 2 shows a schematic flowchart of the two-stage inertial-sensor initialization method provided according to a specific embodiment of the present invention.

Detailed Description

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in them may be combined with each other. The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The following description of at least one exemplary embodiment is merely illustrative and in no way limits the invention, its application or its uses. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should also be noted that the terminology used here serves only to describe specific embodiments and is not intended to limit the exemplary embodiments of the present application. As used here, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.

Unless specifically stated otherwise, the relative arrangement of the components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the invention. It should also be understood that, for convenience of description, the dimensions of the parts shown in the drawings are not drawn to scale. Techniques, methods and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate they should be regarded as part of the specification. In all examples shown and discussed here, any specific value is to be interpreted as merely illustrative, not limiting; other examples of the exemplary embodiments may therefore have different values. Note that similar reference numerals and letters denote similar items in the figures below, so once an item is defined in one figure it need not be discussed further in subsequent figures.

As shown in FIG. 1, a specific embodiment of the present invention provides a UAV-oriented inertial, visual and altitude information fusion navigation method comprising: obtaining a scale estimate from the real-time altitude of the UAV; updating the visual measurements with the scale estimate; collecting visual images with a visual sensor and performing feature extraction and matching to obtain an initial velocity for the inertial sensor; constructing an inertial residual; obtaining the IMU inertial parameters from the inertial residual with a two-stage inertial-sensor initialization method; and performing joint inertial-visual optimization on the IMU inertial parameters and the visual images to complete pose estimation and position the UAV, thereby completing the UAV's inertial, visual and altitude fusion navigation.

This configuration provides a UAV-oriented inertial, visual and altitude fusion navigation method that introduces the UAV's real-time altitude, combines it with visual images, and obtains the IMU inertial parameters with a two-stage inertial-sensor initialization, completing UAV pose estimation and positioning. The invention recovers relatively accurate scale information in aerial scenes, solves the IMU zero biases with high accuracy, and achieves high accuracy, real-time performance and robustness in UAV pose and trajectory estimation during tracking and positioning. Compared with the prior art, it solves the technical problems of low pose-estimation accuracy, tracking loss caused by inertial-sensor divergence during flight, and high computational complexity.

Further, in the present invention, to realize the UAV-oriented inertial, visual and altitude fusion navigation, the scale estimate is first obtained from the UAV's real-time altitude.

In the present invention, obtaining the scale estimate from the real-time altitude specifically comprises: during visual-sensor tracking, obtaining the 3D map-point depth of a keyframe by triangulation, and solving for the scale estimate from the UAV's real-time altitude and that depth.

As a specific embodiment of the present invention, during visual-sensor tracking, ORB (Oriented FAST and Rotated BRIEF) features are extracted from the images for feature matching, and the 3D map-point depth of the i-th keyframe is obtained from two-frame matching. The navigation and positioning system obtains the height of the i-th keyframe from the UAV's real-time altitude. The scale estimate is then obtained from the height and the 3D map-point depth of the i-th keyframe, i = 1, 2, ..., n, with n an integer.

In this embodiment, the scale estimate is obtained as s = h/d, where s is the scale estimate, h is the height of the i-th keyframe, and d is the 3D map-point depth of the i-th keyframe.

Further, in the present invention, after the scale estimate is obtained, the visual measurements are updated with it. As a specific embodiment, updating the visual measurements with the scale estimate includes updating the position and velocity solved by visual positioning and the 3D map-point coordinates.

Further, in the present invention, after the visual measurements are updated, the visual sensor collects visual images, and feature extraction and matching are performed on them to obtain the initial velocity of the inertial sensor.

In the present invention, this specifically comprises: the visual sensor collecting visual images; performing feature extraction and matching on them to obtain the camera pose of the i-th keyframe; transforming that camera pose into the IMU body frame; and obtaining the initial velocity of the inertial sensor in the body frame from the camera pose of the i-th keyframe in the body frame and the motion time.

As a specific embodiment of the present invention, the camera pose of the i-th keyframe is transformed into the IMU body frame as T_bi = T_bc T_i, where T_i is the camera pose of the i-th keyframe, T_bi is the pose of the i-th keyframe in the IMU body frame, comprising the rotation R_i and the translation p_i, and T_bc is the transformation matrix from the camera frame to the body frame.

Combined with the motion time, the initial velocity of the inertial sensor in the body frame is obtained as

v_i = (p_i − p_{i−1}) / δt

where v_i is the initial velocity of the inertial sensor in the body frame at time i, p_i and p_{i−1} are the translations at times i and i−1, and δt is the motion time between them.
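A minimal sketch of the two relations above, T_bi = T_bc T_i and v_i = (p_i − p_{i−1})/δt, with 4×4 homogeneous pose matrices (function names are illustrative, not from the patent):

```python
import numpy as np

def camera_to_body(T_i, T_bc):
    """T_bi = T_bc @ T_i: move the i-th keyframe camera pose (4x4
    homogeneous matrix) into the IMU body frame."""
    return T_bc @ T_i

def initial_velocity(p_i, p_prev, delta_t):
    """v_i = (p_i - p_{i-1}) / delta_t: finite-difference velocity
    seed for the inertial sensor from consecutive translations."""
    return (np.asarray(p_i, dtype=float) - np.asarray(p_prev, dtype=float)) / delta_t
```

The translation p_i used for the velocity seed is the last column of the body-frame pose T_bi.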

Further, after the initial inertial velocity is obtained, the inertial residual is constructed.

In the present invention, constructing the inertial residual comprises: computing the IMU motion-measurement state from the IMU motion process, obtaining the pre-integrated increments of that state, and building the inertial residual from it.

As a specific embodiment of the present invention, the IMU motion-measurement state comprises rotation, velocity and translation. For purely inertial motion from time i to time j, the rotation, velocity and translation satisfy

R_wbj = R_wbi · ∏_{k=i}^{j−1} Exp( (ω̃_k − b_k^g) Δt )
v_j = v_i + g Δt_ij + Σ_{k=i}^{j−1} R_wbk ( ã_k − b_k^a ) Δt
p_j = p_i + Σ_{k=i}^{j−1} [ v_k Δt + ½ g Δt² + ½ R_wbk ( ã_k − b_k^a ) Δt² ]

where R_wbi and R_wbj are the rotations from the body frame to the world frame at times i and j, ω̃ is the angular velocity measured by the sensor, ã is the measured acceleration, g is the gravitational acceleration, b_k^g and b_k^a are the zero biases of the gyroscope and the accelerometer at time k, v_i, v_j and v_k are the initial velocities at times i, j and k, Δt is the time from time i to time k, Δt_ij is the time from time i to time j, R_wbk is the rotation from the body frame to the world frame at time k, p_i is the translation at time i, and p_j is the translation at time j.

The preintegrated quantities of the IMU motion measurement state depend only on the IMU measurements; the preintegration increments from time i to time j are therefore obtained by accumulating the IMU integration over that interval. In this embodiment, the increments can be obtained according to

$$\Delta R_{b_ib_j} = \prod_{k=i}^{j-1}\mathrm{Exp}\big((\tilde\omega_k - b^g_i)\,\Delta t\big)$$

$$\Delta v_{ij} = \sum_{k=i}^{j-1}\Delta R_{b_ib_k}(\tilde a_k - b^a_i)\,\Delta t$$

$$\Delta p_{ij} = \sum_{k=i}^{j-1}\Big[\Delta v_{ik}\,\Delta t + \tfrac{1}{2}\Delta R_{b_ib_k}(\tilde a_k - b^a_i)\,\Delta t^2\Big]$$

where $\Delta R_{b_ib_j}$ is the preintegrated rotation increment at time j compared with time i, $\Delta v_{ij}$ the preintegrated velocity increment, $\Delta p_{ij}$ the preintegrated translation increment, $b^g_i$ the zero position of the gyroscope at time i, $b^a_i$ the zero position of the accelerometer at time i, $\Delta R_{b_ib_k}$ the preintegrated rotation increment at time k compared with time i, and $\Delta v_{ik}$ the preintegrated velocity increment at time k compared with time i.
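For illustration, the three preintegration sums above can be accumulated sample by sample (a hedged numpy sketch: function names and the fixed-bias assumption are ours, and a production implementation would also propagate covariances and bias Jacobians):

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, acc, bg, ba, dt):
    """Accumulate the preintegration increments dR_ij, dv_ij, dp_ij from raw
    IMU samples between keyframes i and j, holding the biases bg, ba fixed
    at their time-i values (as in the equations above)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, acc):
        a_corr = dR @ (a - ba)                 # bias-corrected, rotated accel
        dp += dv * dt + 0.5 * a_corr * dt**2   # translation increment first
        dv += a_corr * dt                      # then velocity increment
        dR = dR @ exp_so3((w - bg) * dt)       # then rotation increment
    return dR, dv, dp

# Usage: 10 samples at 100 Hz, constant 1 m/s^2 specific force along x, no rotation
gyro = np.zeros((10, 3))
acc = np.tile([1.0, 0.0, 0.0], (10, 1))
dR, dv, dp = preintegrate(gyro, acc, np.zeros(3), np.zeros(3), dt=0.01)
# dv = [0.1, 0, 0] (= a*t) and dp = [0.005, 0, 0] (= a*t^2/2) over t = 0.1 s
```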

The state quantities are constrained to establish the inertial residual. In this embodiment, the inertial residual can be established according to

$$r_{\Delta R_{ij}} = \mathrm{Log}\big(\Delta R_{b_ib_j}^{T}\,R_{wb_i}^{T}\,R_{wb_j}\big)$$

$$r_{\Delta v_{ij}} = R_{wb_i}^{T}\big(v_j - v_i - g\,\Delta t_{ij}\big) - \Delta v_{ij}$$

$$r_{\Delta p_{ij}} = R_{wb_i}^{T}\big(p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2}g\,\Delta t_{ij}^{2}\big) - \Delta p_{ij}$$

where $r_{\Delta R_{ij}}$ is the rotation residual from time i to time j, $r_{\Delta v_{ij}}$ the velocity residual from time i to time j, and $r_{\Delta p_{ij}}$ the translation residual from time i to time j.
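A sketch of evaluating the three residuals above, with rotations as 3×3 numpy arrays (the names are illustrative; `log_so3` is the standard SO(3) logarithm):

```python
import numpy as np

def log_so3(R):
    """Inverse of the exponential map: rotation matrix -> rotation vector."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-9:
        return np.zeros(3)
    return theta / (2.0 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def inertial_residual(Ri, Rj, vi, vj, pi, pj, dR, dv, dp, g, dt_ij):
    """Rotation, velocity and translation residuals between keyframes i and j,
    following the three expressions above."""
    r_rot = log_so3(dR.T @ Ri.T @ Rj)
    r_vel = Ri.T @ (vj - vi - g * dt_ij) - dv
    r_pos = Ri.T @ (pj - pi - vi * dt_ij - 0.5 * g * dt_ij**2) - dp
    return r_rot, r_vel, r_pos

# Usage: states consistent with free fall under gravity give zero residuals
g = np.array([0.0, 0.0, -9.81])
dt = 0.1
r_rot, r_vel, r_pos = inertial_residual(
    Ri=np.eye(3), Rj=np.eye(3),
    vi=np.zeros(3), vj=g * dt,
    pi=np.zeros(3), pj=0.5 * g * dt**2,
    dR=np.eye(3), dv=np.zeros(3), dp=np.zeros(3),
    g=g, dt_ij=dt)
```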

Further, in the present invention, after the inertial residual is established, a two-pass inertial sensor initialization method is used to obtain the IMU inertial parameters from the inertial residual.

In the present invention, as shown in FIG. 2, obtaining the IMU inertial parameters by the two-pass inertial sensor initialization method specifically comprises: constructing a graph optimization problem from the scale estimate, the updated visual measurement information and the inertial residual, and performing the first IMU initialization to obtain the inertial sensor zero positions; updating the current pose information according to the gravity direction, and updating the inertial residual according to the inertial sensor zero positions; constructing a graph optimization problem from the inertial sensor zero positions and the updated inertial residual, and performing the second IMU initialization to obtain the IMU inertial parameters; and updating the current pose information according to the gravity direction and scale, and updating the inertial residual according to the inertial sensor zero positions in the IMU inertial parameters.

As a specific embodiment of the present invention, the first IMU initialization considers the motion of a pure inertial sensor. The IMU parameters involved are $y_k = \{s, R_{wg}, b, v_{0:k}\}$, where $y_k$ are the IMU parameters of the k-th key frame in the first IMU initialization; $R_{wg}$ is the gravity rotation from the standard gravity frame to the world coordinate system; $v_{0:k}$ are the velocities in the body coordinate system between key frames 0 and k, obtainable from the initial velocities $v_i$ of the inertial sensor in the body coordinate system at each time i; and $b = (b^g, b^a)$ collects the zero positions of the gyroscope and the accelerometer. In this embodiment, the gravity vector in the world coordinate system is obtained from $R_{wg}$ as $g = R_{wg}(0,0,G)^{T}$, with G the magnitude of the gravitational acceleration.

Considering only the inertial-sensor initialization and denoting the sensor measurements by $I_{0:k}$, the graph optimization problem is posed as a maximum a posteriori estimate $p(y_k \mid I_{0:k}) \propto p(I_{0:k} \mid y_k)\,p(y_k)$, where $p(y_k)$ is the prior information, $p(y_k \mid I_{0:k})$ is the probability of the IMU parameters $y_k$ given the sensor measurements $I_{0:k}$, and $p(I_{0:k} \mid y_k)$ is the probability of the sensor measurements $I_{0:k}$ given the IMU parameters $y_k$.

To solve the graph optimization problem, the independence of the per-frame measurements is exploited, so that the posterior can be transformed into

$$p(y_k \mid I_{0:k}) \propto p(y_k)\prod_{i=1}^{k} p\big(I_{i-1,i} \mid s, R_{wg}, b, v_{i-1}, v_i\big)$$

where $p(I_{i-1,i} \mid s, R_{wg}, b, v_{i-1}, v_i)$ is the probability of observing the sensor measurement $I_{i-1,i}$ between key frames i-1 and i given the scale s, the gravity rotation $R_{wg}$, the bias b and the velocities $v_{i-1}$ and $v_i$.

To solve for the maximum a posteriori estimate, the IMU parameters of the k-th key frame can be expressed as

$$y_k^{*} = \arg\max_{y_k}\, p(y_k \mid I_{0:k}) = \arg\max_{y_k}\, p(y_k)\prod_{i=1}^{k} p\big(I_{i-1,i} \mid y_k\big)$$

where $y_k^{*}$ are the IMU parameters of the k-th key frame actually obtained by the first IMU initialization.

Taking the negative logarithm of the maximum likelihood estimate, and assuming that both the IMU preintegration and the prior distribution are Gaussian, we obtain

$$y_k^{*} = \arg\min_{y_k}\Big(\|r_p\|^2 + \sum_{i=1}^{k}\big\|r_{I_{i-1,i}}\big\|^{2}_{\Sigma_{I_{i-1,i}}}\Big)$$

where $r_p$ is the prior residual, which tends to 0 and can be neglected, and $r_{I_{i-1,i}} = \big[r_{\Delta R_{i-1,i}},\, r_{\Delta v_{i-1,i}},\, r_{\Delta p_{i-1,i}}\big]$ is the inertial residual from time i-1 to time i, with $\Sigma_{I_{i-1,i}}$ its covariance.

After the above derivation, the scale estimate s and the velocities $v_{0:k}$ in the body coordinate system between key frames 0 and k are fixed, and the graph optimization

$$\{R_{wg}^{*},\, b^{g*},\, b^{a*}\} = \arg\min_{R_{wg},\, b^g,\, b^a}\sum_{i=1}^{k}\big\|r_{I_{i-1,i}}\big\|^{2}_{\Sigma_{I_{i-1,i}}}$$

is solved to obtain the gravity rotation $R_{wg}$ from the standard gravity frame to the world coordinate system, the gyroscope zero position $b^g$ and the accelerometer zero position $b^a$.
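To make the first-pass optimization concrete, here is a deliberately simplified, one-dimensional illustration: with scale and velocities fixed, scanning candidate gyroscope biases and minimising the squared rotation residual recovers the bias that generated the synthetic measurements. All values, names and the scan-based solver are invented for the sketch; the patent's actual solver is a graph optimization over $R_{wg}$ and the full 6-D bias.

```python
import numpy as np

# Synthetic data: constant true rate 0.5 rad/s about z, biased measurements
true_bias = 0.02            # rad/s, assumed for the synthetic data
dt, n = 0.01, 200           # 200 samples at 100 Hz
omega_meas = 0.5 + true_bias

def rotation_cost(bg):
    """Squared rotation residual for a candidate z-axis gyro bias:
    integrated angle error after bias correction over the whole window."""
    r = (0.5 - (omega_meas - bg)) * dt * n
    return r * r

# Scan candidate biases and pick the minimiser of the summed cost
candidates = np.linspace(-0.1, 0.1, 2001)
best = candidates[np.argmin([rotation_cost(b) for b in candidates])]
# best recovers the bias that zeroes the rotation residual (0.02 rad/s)
```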

In addition, in the present invention, after the first IMU initialization obtains the inertial sensor zero positions, the current pose information is updated according to the gravity direction, and the inertial residual is updated according to the inertial sensor zero positions. As a specific embodiment of the present invention, the preintegration of the inertial sensor is recomputed with the updated zero positions.

As a specific embodiment of the present invention, the second IMU initialization again considers the motion of a pure inertial sensor. The IMU parameters involved are $\bar y_k = \{\bar s, R_{wg}, b, v_{0:k}\}$, where $\bar y_k$ are the IMU parameters of the k-th key frame in the second IMU initialization and $\bar s$ is the scale factor to be solved.

Similarly to the first IMU initialization, one can derive

$$\bar y_k^{*} = \arg\min_{\bar y_k}\Big(\|r_p\|^2 + \sum_{i=1}^{k}\big\|\bar r_{I_{i-1,i}}\big\|^{2}_{\Sigma_{I_{i-1,i}}}\Big)$$

where $\bar y_k^{*}$ are the IMU parameters of the k-th key frame actually obtained by the second IMU initialization, and $\bar r_{I_{i-1,i}} = \big[\bar r_{\Delta R_{i-1,i}},\, \bar r_{\Delta v_{i-1,i}},\, \bar r_{\Delta p_{i-1,i}}\big]$ is the new inertial residual from time i-1 to time i, which contains the scale information. Since the scale information must be optimized simultaneously, the new inertial residual is obtained according to

$$\bar r_{\Delta R_{ij}} = \mathrm{Log}\big(\Delta R_{b_ib_j}^{T}\,R_{wb_i}^{T}\,R_{wb_j}\big)$$

$$\bar r_{\Delta v_{ij}} = R_{wb_i}^{T}\big(\bar s\,v_j - \bar s\,v_i - g\,\Delta t_{ij}\big) - \Delta v_{ij}$$

$$\bar r_{\Delta p_{ij}} = R_{wb_i}^{T}\big(\bar s\,p_j - \bar s\,p_i - \bar s\,v_i\,\Delta t_{ij} - \tfrac{1}{2}g\,\Delta t_{ij}^{2}\big) - \Delta p_{ij}$$

where $\bar r_{\Delta R_{ij}}$ is the new rotation residual, $\bar r_{\Delta v_{ij}}$ the new velocity residual, and $\bar r_{\Delta p_{ij}}$ the new translation residual.

In addition, in the present invention, after the second IMU initialization obtains the IMU inertial parameters, the current pose information is updated according to the gravity direction and scale, and the inertial residual is updated according to the inertial sensor zero positions in the IMU inertial parameters. As a specific embodiment of the present invention, the preintegration of the inertial sensor is recomputed with these zero positions.

Further, in the present invention, after the IMU inertial parameters are obtained by the two-pass inertial sensor initialization method, joint inertial-visual optimization is performed from the IMU inertial parameters and the visual images to complete pose estimation, and the unmanned aerial vehicle is positioned to complete the inertial, visual and altitude information fusion navigation of the unmanned aerial vehicle.

In the present invention, the joint inertial-visual optimization for pose estimation specifically comprises: constructing an inertial error from the IMU inertial parameters; constructing a visual reprojection error from the ORB feature matching result of the visual images; and constructing a joint visual-inertial optimization function from the inertial error and the visual reprojection error, optimizing the pose to obtain the globally optimal camera pose sequence and the scene map points.

As a specific embodiment of the present invention, the inertial error can be constructed from the gravity rotation, velocity and zero positions in the obtained IMU inertial parameters. The joint visual-inertial optimization function can be constructed by referring to the method in ORB-SLAM3. The nonlinear optimization framework g2o can be used, iterating with the Levenberg-Marquardt method until the error function value falls below a set threshold, yielding the globally optimal camera pose sequence and the scene map points.
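The visual reprojection error entering the joint optimization can be sketched for a single pinhole observation (an assumption-laden sketch: the intrinsic matrix K, the pose parameterisation R_cw, t_cw and the function name are illustrative; ORB-SLAM3 additionally weights the residual and applies a robust kernel):

```python
import numpy as np

def reprojection_error(K, R_cw, t_cw, X_w, uv_obs):
    """Pinhole reprojection residual for one map point: project the world
    point X_w into the camera (pose R_cw, t_cw) and compare with the
    observed pixel uv_obs. K is the 3x3 intrinsic matrix."""
    X_c = R_cw @ X_w + t_cw      # world -> camera frame
    uvw = K @ X_c                # homogeneous pixel coordinates
    uv = uvw[:2] / uvw[2]        # perspective division
    return uv - uv_obs

# Usage: identity pose, point on the optical axis projects to the principal point
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
err = reprojection_error(K, np.eye(3), np.zeros(3),
                         X_w=np.array([0.0, 0.0, 2.0]),
                         uv_obs=np.array([320.0, 240.0]))
# err is the 2-D pixel residual; zero here because the observation is exact
```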

Further, in the present invention, the unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method further comprises: after the unmanned aerial vehicle positioning is completed, judging the current tracking and positioning state of the unmanned aerial vehicle; if tracking is lost, the navigation system enters a relocalization stage and judges whether the image frame recognized by the visual sensor matches the current local map; if they match, positioning continues according to the coordinates of the current local map and the scale information of the unmanned aerial vehicle; if they do not match, the map is rebuilt, and the method returns to obtaining the scale estimate from the real-time altitude information of the unmanned aerial vehicle until positioning is completed.

In the relocalization stage, when the image frame recognized by the visual sensor does not match the current local map, a new map is constructed and set as the current active map, and a scale update mechanism is established. As a specific embodiment of the present invention, the scale update mechanism specifically comprises: the visual sensor collects visual measurements for a preset duration, the scale factor is re-solved according to the preceding steps, and the unmanned aerial vehicle pose estimation and positioning are completed with the new scale information. In this embodiment, the preset duration can be set according to the needs of the actual visual images, for example 10 s.
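The re-solved scale factor can be illustrated as follows. The per-keyframe ratio s = h/d (true altitude over map-estimated depth) follows the patent; aggregating the ratios over the collected window with a median is our illustrative robust choice, not prescribed by the text:

```python
import numpy as np

def estimate_scale(heights, depths):
    """Scale factor s = h / d per keyframe over the collected window;
    the median ratio is used here as a simple outlier-robust aggregate
    (an assumption of this sketch)."""
    ratios = np.asarray(heights, dtype=float) / np.asarray(depths, dtype=float)
    return float(np.median(ratios))

# Usage: true scale 2.0, with one corrupted depth estimate in the window
s = estimate_scale(heights=[10.0, 12.0, 9.0], depths=[5.0, 6.0, 1.0])
# ratios are [2.0, 2.0, 9.0]; the median rejects the outlier, so s = 2.0
```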

The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method of the present invention recovers accurate scale information in aerial scenes, and the IMU zero positions are solved with high precision. During tracking and positioning, the pose and trajectory estimates of the unmanned aerial vehicle are accurate, real-time and robust. Addressing the difficulty of estimating scale in real high-altitude UAV scenarios, the present invention introduces altitude information on the basis of the ORB-SLAM algorithm: visual measurements provide the three-dimensional map point coordinates, and combining the true altitude value with the map-estimated depth effectively solves the scale factor of the scene and recovers its scale information, improving the overall accuracy of motion trajectory estimation in high-altitude tracking and positioning. Addressing the large inertial sensor zero positions that arise in real high-altitude scenarios, the present invention proposes a new inertial sensor initialization method: two graph optimizations effectively recover the zero positions of the inertial sensor and refine the scene scale information, improving the overall accuracy and robustness of six-degree-of-freedom pose estimation in high-altitude tracking and positioning.

According to another aspect of the present invention, a storage device is provided, which stores a plurality of programs for implementing the unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method described above.

As a specific embodiment of the present invention, the storage device may be a random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form known in the art.

According to another aspect of the present invention, a processing device is provided, comprising a processor and the storage device described above, the processor being configured to load and execute the plurality of programs in the storage device to implement the unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method described above.

To sum up, the present invention provides an unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method. By introducing the real-time altitude information of the unmanned aerial vehicle, combining it with visual images, and obtaining the IMU inertial parameters through a two-pass inertial sensor initialization, the method completes UAV pose estimation and positioning. The invention recovers accurate scale information in aerial scenes, solves the IMU zero positions with high precision, and achieves accurate, real-time and robust pose and trajectory estimation during tracking and positioning. Compared with the prior art, the technical solution of the present invention solves the technical problems of low pose estimation accuracy, tracking loss caused by inertial sensor divergence during aerial flight, and high computational complexity.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its protection scope.

Claims (10)

1. An unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method, characterized by comprising the following steps: obtaining a scale estimate according to the real-time altitude information of the unmanned aerial vehicle; updating the visual measurement information according to the scale estimate; collecting visual images with a visual sensor, and performing feature extraction and matching according to the visual images to obtain the initial velocity of the inertial sensor; constructing an inertial residual; obtaining IMU inertial parameters by a two-pass inertial sensor initialization method according to the inertial residual; and performing joint inertial-visual optimization according to the IMU inertial parameters and the visual images to complete pose estimation, and positioning the unmanned aerial vehicle to complete the inertial, visual and altitude information fusion navigation of the unmanned aerial vehicle.
2. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 1, wherein obtaining the scale estimate according to the real-time altitude information of the unmanned aerial vehicle specifically comprises: during tracking by the visual sensor, performing feature matching by extracting ORB features of the images, and acquiring the three-dimensional map point depth of the ith key frame from two-frame matching; the navigation and positioning processing system acquiring the altitude of the ith key frame from the real-time altitude information of the unmanned aerial vehicle; and obtaining the scale estimate according to s = h/d, wherein s is the scale estimate, h is the altitude of the ith key frame, d is the three-dimensional map point depth of the ith key frame, i = 1, 2, …, n, and n is an integer.
3. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 2, wherein performing feature extraction and matching according to the visual images to obtain the initial velocity of the inertial sensor specifically comprises: performing feature extraction and matching according to the visual images to acquire the camera pose of the ith key frame; converting the camera pose of the ith key frame into the body coordinate system of the IMU; and acquiring the initial velocity of the inertial sensor in the body coordinate system according to

$$v_i = \frac{p_i - p_{i-1}}{\delta t}$$

wherein $v_i$ is the initial velocity of the inertial sensor in the body coordinate system at time i, $p_i$ and $p_{i-1}$ are the translations at times i and i-1, respectively, and $\delta t$ is the motion time.
4. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to any one of claims 1 to 3, wherein constructing the inertial residual specifically comprises: and calculating an IMU motion measurement state according to the IMU motion process, acquiring a pre-integral increment of the IMU motion measurement state according to the IMU motion measurement state, and constructing an inertia residual error according to the IMU motion measurement state.
5. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 4, wherein the inertial residual is established according to

$$r_{\Delta R_{ij}} = \mathrm{Log}\big(\Delta R_{b_ib_j}^{T}\,R_{wb_i}^{T}\,R_{wb_j}\big)$$

$$r_{\Delta v_{ij}} = R_{wb_i}^{T}\big(v_j - v_i - g\,\Delta t_{ij}\big) - \Delta v_{ij}$$

$$r_{\Delta p_{ij}} = R_{wb_i}^{T}\big(p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2}g\,\Delta t_{ij}^{2}\big) - \Delta p_{ij}$$

with the preintegration increments

$$\Delta R_{b_ib_j} = \prod_{k=i}^{j-1}\mathrm{Exp}\big((\tilde\omega_k - b^g_i)\,\Delta t\big),\quad \Delta v_{ij} = \sum_{k=i}^{j-1}\Delta R_{b_ib_k}(\tilde a_k - b^a_i)\,\Delta t,\quad \Delta p_{ij} = \sum_{k=i}^{j-1}\Big[\Delta v_{ik}\,\Delta t + \tfrac{1}{2}\Delta R_{b_ib_k}(\tilde a_k - b^a_i)\,\Delta t^2\Big]$$

wherein $r_{\Delta R_{ij}}$ is the rotation residual from time i to time j, $r_{\Delta v_{ij}}$ the velocity residual from time i to time j, $r_{\Delta p_{ij}}$ the translation residual from time i to time j, $\tilde\omega_k$ the angular velocity measured by the sensor, $b^g_i$ the zero position of the gyroscope at time i, $\Delta t$ the sampling interval between consecutive IMU measurements, $R_{wb_i}$ the rotation from the body coordinate system to the world coordinate system at time i, $R_{wb_j}$ the rotation from the body coordinate system to the world coordinate system at time j, $v_i$ and $v_j$ the initial velocities at times i and j, respectively, g the gravitational acceleration, $\Delta t_{ij}$ the time from time i to time j, $\Delta R_{b_ib_k}$ the preintegrated rotation increment at time k compared with time i, $\tilde a_k$ the measured acceleration, $b^a_i$ the zero position of the accelerometer at time i, $p_i$ the translation at time i, $p_j$ the translation at time j, and $\Delta v_{ik}$ the preintegrated velocity increment at time k compared with time i.
6. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 1, wherein obtaining the IMU inertial parameters by the two-pass inertial sensor initialization method according to the inertial residual specifically comprises: constructing a graph optimization problem according to the scale estimate, the updated visual measurement information and the inertial residual, and performing the first IMU initialization to obtain the zero positions of the inertial sensor; updating the current pose information according to the gravity direction, and updating the inertial residual according to the zero positions of the inertial sensor; constructing a graph optimization problem according to the zero positions of the inertial sensor and the updated inertial residual, and performing the second IMU initialization to obtain the IMU inertial parameters; and updating the current pose information according to the gravity direction and the scale, and updating the inertial residual according to the zero positions of the inertial sensor in the IMU inertial parameters.
7. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 6, wherein, in the first IMU initialization, the scale estimate s and the velocities $v_{0:k}$ in the body coordinate system between key frames 0 and k are fixed, and the graph optimization

$$y_k^{*} = \arg\min_{y_k}\Big(\|r_p\|^2 + \sum_{i=1}^{k}\big\|r_{I_{i-1,i}}\big\|^{2}_{\Sigma_{I_{i-1,i}}}\Big)$$

is performed to obtain the gravity rotation $R_{wg}$ from the standard gravity frame to the world coordinate system, the gyroscope zero position $b^g_i$ and the accelerometer zero position $b^a_i$, wherein $y_k^{*}$ are the IMU parameters of the k-th key frame actually obtained by the first IMU initialization, the IMU parameters comprising the scale estimate s, the gravity rotation $R_{wg}$ from the standard gravity frame to the world coordinate system, the gyroscope zero position $b^g_i$ at time i, the accelerometer zero position $b^a_i$ at time i, and the velocities $v_{0:k}$ in the body coordinate system between key frames 0 and k; $r_p$ is the prior residual, which tends to 0; and $r_{I_{i-1,i}} = \big[r_{\Delta R_{i-1,i}},\, r_{\Delta v_{i-1,i}},\, r_{\Delta p_{i-1,i}}\big]$ is the inertial residual from time i-1 to time i.
8. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 7, wherein, in the second IMU initialization, the graph optimization

$$\bar y_k^{*} = \arg\min_{\bar y_k}\Big(\|r_p\|^2 + \sum_{i=1}^{k}\big\|\bar r_{I_{i-1,i}}\big\|^{2}_{\Sigma_{I_{i-1,i}}}\Big)$$

is solved to obtain the inertial parameters of the IMU, wherein $\bar y_k^{*}$ are the IMU parameters of the k-th key frame actually obtained by the second IMU initialization, and $\bar r_{I_{i-1,i}}$ is the new inertial residual containing scale information from time i-1 to time i, given by

$$\bar r_{\Delta R_{ij}} = \mathrm{Log}\big(\Delta R_{b_ib_j}^{T}\,R_{wb_i}^{T}\,R_{wb_j}\big)$$

$$\bar r_{\Delta v_{ij}} = R_{wb_i}^{T}\big(\bar s\,v_j - \bar s\,v_i - g\,\Delta t_{ij}\big) - \Delta v_{ij}$$

$$\bar r_{\Delta p_{ij}} = R_{wb_i}^{T}\big(\bar s\,p_j - \bar s\,p_i - \bar s\,v_i\,\Delta t_{ij} - \tfrac{1}{2}g\,\Delta t_{ij}^{2}\big) - \Delta p_{ij}$$

wherein $\bar s$ is the scale factor to be solved, $\bar r_{\Delta R_{ij}}$ is the new rotation residual, $\bar r_{\Delta v_{ij}}$ the new velocity residual, and $\bar r_{\Delta p_{ij}}$ the new translation residual.
9. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to claim 1, wherein performing inertial and visual joint optimization based on IMU inertial parameters and visual images to complete pose estimation specifically comprises: constructing an inertial error according to the IMU inertial parameters; constructing a visual reprojection error according to the ORB feature matching result of the visual image; and constructing a visual inertia joint optimization function according to the inertial error and the visual reprojection error to perform pose optimization so as to acquire a global optimal camera pose sequence and scene map points.
10. The unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method according to any one of claims 1 to 9, wherein the unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method further comprises: after the unmanned aerial vehicle positioning is finished, judging the current unmanned aerial vehicle tracking and positioning state, if the current unmanned aerial vehicle tracking and positioning state is lost, enabling the navigation system to enter a repositioning stage, judging whether an image frame identified by a visual sensor is matched with a current local map, and if the image frame is matched with the current local map, continuing to perform unmanned aerial vehicle positioning according to the coordinates of the current local map and the scale information of the unmanned aerial vehicle; and if not, reconstructing the map, and returning to obtain the scale estimation value according to the real-time height information of the unmanned aerial vehicle until positioning of the unmanned aerial vehicle is completed.
CN202210122642.3A — filed 2022-02-09 — Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method — Active — granted as CN114485649B (en)

Priority Applications (1)

CN202210122642.3A — priority date 2022-02-09, filing date 2022-02-09 — Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method

Publications (2)

CN114485649A — published 2022-05-13
CN114485649B — published 2023-09-12

Family

ID: 81478209
Family Applications (1): CN202210122642.3A — Active — priority/filing date 2022-02-09 — Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method
Country Status (1): CN — CN114485649B (en) — Granted

Cited By (3)

* Cited by examiner, † Cited by third party

CN114910069A* — priority 2022-06-01, published 2022-08-16 — 深圳市边界智控科技有限公司 — Fusion positioning initialization system and method for unmanned aerial vehicle (granted as CN114910069B, 2025-01-10)
CN116295342A* — priority 2023-03-15, published 2023-06-23 — 中国民航大学 — A multi-sensing state estimator for aircraft surveying
CN120252746A* — priority 2025-06-06, published 2025-07-04 — 中国科学院国家空间科学中心 — A long-range and long-duration integrated navigation method and system combining visual-inertial joint optimization and image matching

Citations (7)

* Cited by examiner, † Cited by third party

US10032276B1* — priority 2016-08-29, published 2018-07-24 — PerceptIn, Inc. — Visual-inertial positional awareness for autonomous and non-autonomous device
CN107869989A* — priority 2017-11-06, published 2018-04-03 — 东北大学 — A positioning method and system based on visual inertial navigation information fusion
CN110388917A* — priority 2018-04-23, published 2019-10-29 — 北京京东尚科信息技术有限公司 — Aircraft monocular vision scale estimation method and device, aircraft guidance system and aircraft
CN112240768A* — priority 2020-09-10, published 2021-01-19 — 西安电子科技大学 — Visual-inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
CN112596071A* — priority 2020-11-02, published 2021-04-02 — 中国兵器工业计算机应用技术研究所 — Unmanned aerial vehicle autonomous positioning method and device, and unmanned aerial vehicle
CN113029134A* — priority 2021-03-02, published 2021-06-25 — 浙江商汤科技开发有限公司 — Visual inertial system initialization method and device, electronic equipment and storage medium
CN113223064A* — priority 2020-01-21, published 2021-08-06 — 北京初速度科技有限公司 — Method and device for estimating scale of visual inertial odometer

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Chang et al.: "An autonomous landing navigation algorithm for UAVs based on inertial/visual information fusion", Navigation Positioning and Timing, vol. 3, no. 6, pp. 6-11*
Shang Kejun et al.: "Inertial/visual pose ambiguity correction algorithm for UAV landing on a moving platform", Journal of Chinese Inertial Technology, vol. 28, no. 4, pp. 462-468*

Also Published As

Publication number · Publication date
CN114485649B (en) · 2023-09-12

Similar Documents

Publication · Title
CN112347840B (en) · Vision sensor and laser radar integrated unmanned aerial vehicle positioning and mapping device and method
CN106708066B (en) · Autonomous landing method of UAV based on vision/inertial navigation
CN114485649B (en) · Unmanned aerial vehicle-oriented inertial, visual and altitude information fusion navigation method
Panahandeh et al. · Vision-aided inertial navigation based on ground plane feature detection
CN106679648B (en) · Visual inertia combination SLAM method based on genetic algorithm
CN111288989B (en) · Visual positioning method for small unmanned aerial vehicle
CN111156998A (en) · Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN110260861B (en) · Pose determination method and device, and odometer
CN110617814A (en) · Monocular vision and inertial sensor integrated remote distance measuring system and method
CN107504969A (en) · Four-rotor indoor navigation method based on vision and inertia combination
CN108981693A (en) · Fast joint VIO initialization method based on monocular camera
CN110749308B (en) · SLAM-oriented outdoor localization method using consumer-grade GPS and 2.5D building models
Steiner et al. · A vision-aided inertial navigation system for agile high-speed flight in unmapped environments
CN116989772B (en) · An air-ground multi-modal multi-agent collaborative positioning and mapping method
CN114387341B (en) · Method for calculating the six-degree-of-freedom position and attitude of an aircraft using a single aerial observation image
CN111083633B (en) · Mobile terminal positioning system, its establishment method, and mobile terminal positioning method
Rhudy et al. · Unmanned aerial vehicle navigation using wide-field optical flow and inertial sensors
Kehoe et al. · State estimation using optical flow from parallax-weighted feature tracking
CN110598370B (en) · Robust attitude estimation for multi-rotor UAV based on fusion of SIP and EKF
CN105389819A (en) · Robust semi-calibrated down-looking image epipolar rectification method and system
Hinzmann et al. · Robust map generation for fixed-wing UAVs with low-cost highly-oblique monocular cameras
Cao et al. · Research on application of computer vision assist technology in high-precision UAV navigation and positioning
Schill et al. · Estimating ego-motion in panoramic image sequences with inertial measurements
Song et al. · Advancements in translation accuracy for stereo visual-inertial initialization
CN107449419A (en) · Fully parameterized vision measurement method for continuous kinematic parameters of a body target

Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
