CN101109640A - Vision-based autonomous landing navigation system for unmanned aircraft

Vision-based autonomous landing navigation system for unmanned aircraft

Info

Publication number
CN101109640A
CN101109640A, CNA2006100888367A, CN200610088836A
Authority
CN
China
Prior art keywords
runway
camera
airborne
uav
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2006100888367A
Other languages
Chinese (zh)
Inventor
陈宗基
陈磊
周锐
李卫琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CNA2006100888367A
Publication of CN101109640A
Legal status: Pending (current)

Abstract

A vision-based autonomous landing navigation system for unmanned aircraft, composed of software algorithms and hardware devices. The software algorithms comprise computer vision algorithms and information fusion and state estimation algorithms. The hardware devices comprise runway features, an airborne sensor subsystem, and an information fusion subsystem. The airborne sensor subsystem, consisting of an airborne camera system, an airborne inertial navigation system, an altimeter system, and a magnetic compass, measures the true state of the UAV; the airborne camera system tracks and analyzes the runway features to obtain measured values of the runway feature points, which are passed to the information fusion subsystem. From its estimate of the aircraft state at the previous instant and the airborne sensor subsystem's measurements at the current instant, the information fusion subsystem derives estimates of the runway feature points through the runway model and the camera system model, compares them with the measured values, fuses the other measurement information, and, through the computation of the data processing module, obtains high-precision navigation information.

Description

Vision-based autonomous landing navigation system for unmanned aircraft

(1) Technical Field:

The present invention is a vision-based autonomous landing navigation system for unmanned aircraft. The vision-based autonomous landing navigation scheme for Unmanned Aerial Vehicles (hereinafter: UAV) is mainly applied during the UAV landing process, providing the UAV landing control system with navigation information such as the UAV's position, attitude, and velocity relative to the runway, so that the UAV can be controlled to complete the landing autonomously. The invention can also be used during the landing of manned aircraft to provide the pilot with auxiliary navigation information. It belongs to the field of landing navigation systems for unmanned aircraft.

(2) Background Technology:

The safe recovery (landing) of unmanned aircraft is one of the key technologies in unmanned aircraft development. Common recovery methods include parachute recovery, mid-air recovery, landing-gear runway landing, arresting-net recovery, air-cushion landing, and vertical landing. Parachute and arresting-net recovery are commonly used for small unmanned aircraft, but these methods are clearly unsuitable for medium and large unmanned aircraft. In research on recovery methods for medium and large unmanned aircraft, landing-gear runway landing has gradually become the main direction of development.

The landing-gear runway landing of a fixed-wing UAV (hereinafter simply: landing) is a complex flight phase that requires precise control of the aircraft's attitude and trajectory at low airspeed and low altitude.

At present, the airport navigation systems widely used for manned aircraft are mainly the Instrument Landing System (ILS) and the Microwave Landing System (MLS). Depending on its mission requirements, a UAV may often have to take off and land at small airfields with no fixed runway or only a short runway. Because of the guidance-accuracy limitations and system complexity of ILS and MLS, they can hardly meet the requirements of UAV landing.

With the continuous development and improvement of GPS/Differential GPS (DGPS) technology, the use of GPS/DGPS as approach and landing equipment to replace the ILS has been endorsed by the International Civil Aviation Organization (ICAO) as the development goal of airport navigation systems for future manned aircraft. Various schemes for augmenting GPS for precision aircraft landing have been proposed and tested at home and abroad, and landing navigation schemes based on GPS technology already have many applications in UAV landing navigation.

The navigation methods mentioned above (ILS, MLS, and GPS/DGPS) all depend on signals provided by equipment external to the aircraft, so landing based on these technologies can only be semi-autonomous (automatic).

Since semi-autonomous landing requires cooperating external signals to obtain the navigation parameters, it depends heavily on the outside world. To land a UAV safely when all external navigation signals fail, the problem of autonomous navigation during UAV landing must be studied.

The present invention mainly addresses autonomous navigation during UAV landing, that is, landing navigation accomplished with the UAV relying only on its own onboard equipment, largely or entirely independent of ground-based assistance, and without any optical or electrical interaction with the outside world. To complete the approach and landing autonomously, the UAV must be capable of autonomous navigation: it must be able to obtain the necessary aircraft states fairly accurately, such as the position of the aircraft relative to the airfield and the aircraft's attitude and velocity. Relying only on conventional onboard sensors (such as the inertial navigation system and altimeters) is unrealistic, because the drift of the inertial navigation system cannot be effectively corrected and calibrated after long flights, so other sensor devices must be introduced. Owing to its economy, passiveness, and rich information content, computer vision has become an indispensable information source for the autonomous landing of unmanned aircraft. Static references can easily be measured by computer vision to compensate for the drift of the onboard inertial elements; conversely, when vision fails because of high-frequency disturbances or loss of reference tracking, the inertial elements can accurately measure the disturbance and provide accurate navigation information over a short period, guiding the vision system back to correct operation.

With improvements in the performance and reliability of computer vision algorithms, advances in nonlinear estimation and recognition techniques, the development of computer hardware, and mature real-time algorithm implementations, computer vision technology is being applied ever more widely, and vision-based sensors have gradually become general-purpose sensors for measuring motion, position, and structure in the field of automatic control. A visual navigation system grounded in computer vision theory offers a large field of view, non-contact operation, high speed, rich information, and moderate accuracy, making it especially suitable for estimating the position and attitude of an unmanned aircraft relative to the landing target.

Existing visual navigation schemes for unmanned aircraft landing found in the journal literature generally use the vision system alone to estimate the aircraft's motion state from geometric relationships, but such schemes cannot yet satisfy the accuracy, reliability, and real-time requirements of unmanned aircraft landing. A few schemes have considered fusing visual information with sensors such as the inertial navigation system, but they assume too much about the landing environment, so their practicality is limited. The computer vision system should therefore serve as an aid to the other navigation sensors: fusing visual information with sensors such as the inertial navigation system yields a more accurate estimate of the aircraft's motion state, while feeding the other sensors' measurements back into the visual navigation system both reduces the computational load of the vision system and improves its reliability. The few schemes that have considered similar fusion use sensor combinations that are not well chosen and make too many assumptions about the landing environment, so their practicality remains limited.

(3) Contents of the Invention:

The purpose of the vision-based autonomous landing navigation system for unmanned aircraft of the present invention is as follows: without relying on external information, the system uses only passive sensors such as the airborne inertial navigation system, the altimeter system, the magnetic compass, and the airborne vision system, together with images of cooperative ground markings, to provide the UAV with navigation information relative to the runway during landing. It can serve as an alternative to semi-autonomous navigation methods such as Global Positioning System (GPS) navigation, improving the aircraft's survivability, and can also serve as a source of auxiliary navigation information during the landing of manned aircraft.

The vision-based autonomous landing navigation system for unmanned aircraft of the present invention consists of two parts: software algorithms and hardware devices.

The software algorithms comprise computer vision algorithms and information fusion and state estimation algorithms.

The hardware devices comprise runway features 1 arranged on the runway plane, an airborne sensor subsystem 2 for measuring the UAV state, and an information fusion subsystem 3 for processing the sensor measurements.

The airborne sensor subsystem 2 comprises an airborne camera system 24, an airborne inertial navigation system 21, an altimeter system 22, and a magnetic compass 23. The airborne inertial navigation system 21, altimeter system 22, magnetic compass 23, and other sensors measure the true state of the UAV; the airborne camera system 24 tracks and analyzes the runway features 1 to obtain measured values of the runway feature points 11, and these measurements are passed to the information fusion subsystem 3.

From its estimate of the aircraft state at the previous instant and the airborne sensor subsystem 2's measurements of the aircraft state at the current instant, the information fusion subsystem 3 obtains estimates of the runway feature points 11 through the runway model 31 and the camera system model 32, compares them with the measured values of the runway feature points 11, fuses the other measurement information, and, through the computation of the data processing module 33, finally obtains high-precision navigation information.

The software algorithms first require coordinate systems to be defined. To use the feature information arranged on the runway to estimate the position and attitude of the UAV relative to the runway during navigation, the features on the ground must be linked to the features captured by the camera and a functional relationship established between them; coordinate systems are therefore needed.

The coordinate systems mainly comprise the inertial coordinate system {E: o-xyz}, the body coordinate system {B: o_b-x_b y_b z_b}, the camera coordinate system {C: o_c-x_c y_c z_c}, and the view-plane coordinate system {I: (u, v)}.

The camera coordinate system {C} is obtained from the body coordinate system {B} by a simple translation followed by a horizontal rotation ψ_c and then a pitch rotation θ_c (ψ_c and θ_c are the rotation angles, relative to the body coordinate system {B}, of the two-degree-of-freedom pan-tilt on which the camera is mounted). For ease of analysis, the rotation center of the pan-tilt is assumed to coincide with the center of the camera's optical axis. The view-plane coordinate system {I} is defined consistently with the camera coordinate system {C}.

According to the geometric relationships in the system, the following symbols are defined: R_EB denotes the rotation transformation from the inertial coordinate system {E} to the body coordinate system {B}; R_BC is the rotation transformation from the body coordinate system {B} to the camera coordinate system {C}; P_B denotes the position of the UAV in the inertial coordinate system {E}; P_C^B denotes the position of the camera in {B}. The transformation R_EB is usually expressed by the Euler angles (φ, θ, ψ) from {E} to {B}; R_BC can be expressed by the azimuth and pitch angles (ψ_C, θ_C) of the camera pan-tilt, as shown in equations (1) and (2):

$$R_{EB}=\begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ -\cos\phi\sin\psi+\sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi+\sin\phi\sin\theta\sin\psi & \sin\phi\cos\theta \\ \sin\phi\sin\psi+\cos\phi\sin\theta\cos\psi & -\sin\phi\cos\psi+\cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix} \quad (1)$$

$$R_{BC}=\begin{bmatrix} \cos\theta_c\cos\psi_c & \cos\theta_c\sin\psi_c & -\sin\theta_c \\ -\sin\psi_c & \cos\psi_c & 0 \\ \sin\theta_c\cos\psi_c & \sin\theta_c\sin\psi_c & \cos\theta_c \end{bmatrix} \quad (2)$$

Let the coordinates of the i-th feature point in the inertial coordinate system {E} be P_i = (x_i, y_i, z_i)^T; its coordinates in the camera coordinate system {C} are then expressed as:

$$\vec{P}_i^{\,C}=R_{BC}\left(R_{EB}(\vec{P}_i-\vec{P}_B)-\vec{P}_C^{\,B}\right)=R_{BC}R_{EB}(\vec{P}_i-\vec{P}_B)-R_{BC}\vec{P}_C^{\,B} \quad (3)$$

RBC·REB=r1r2r3r4r5r6r7r8r9,-RBC·P→CB=(kx,ky,kz)T,将式(3)写成分量形式为:make R BC &Center Dot; R EB = r 1 r 2 r 3 r 4 r 5 r 6 r 7 r 8 r 9 , - R BC &Center Dot; P &Right Arrow; C B = ( k x , k the y , k z ) T , write formula (3) into components as:

$$\begin{bmatrix} x_i^C \\ y_i^C \\ z_i^C \end{bmatrix}=\begin{bmatrix} r_1(x_i-x)+r_2(y_i-y)+r_3(z_i-z)+k_x \\ r_4(x_i-x)+r_5(y_i-y)+r_6(z_i-z)+k_y \\ r_7(x_i-x)+r_8(y_i-y)+r_9(z_i-z)+k_z \end{bmatrix} \quad (4)$$

A perspective projection based on the pinhole camera model is used between the camera coordinate system {C} and the view-plane coordinate system {I}. Thus, with camera focal length f, the coordinates of the i-th feature point in the view-plane coordinate system {I} are:

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix}=f\cdot\begin{bmatrix} y_i^C/x_i^C \\ z_i^C/x_i^C \end{bmatrix} \quad (5)$$

Since the positions of the feature points relative to the runway are known, equations (4) and (5) relate the view-plane coordinates [u_i, v_i]^T and the inertial coordinates [x_i, y_i, z_i]^T to the position and attitude of the UAV relative to the runway.
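As an illustration of equations (1) through (5), the following is a minimal NumPy sketch of the projection chain; the function names and argument layout are our own, and angles are assumed to be in radians under the frame conventions defined above.

```python
import numpy as np

def R_EB(phi, theta, psi):
    """Rotation from inertial frame {E} to body frame {B}, per eq. (1)."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cpsi,                    cth * spsi,                    -sth],
        [-cphi * spsi + sphi * sth * cpsi, cphi * cpsi + sphi * sth * spsi, sphi * cth],
        [sphi * spsi + cphi * sth * cpsi, -sphi * cpsi + cphi * sth * spsi, cphi * cth],
    ])

def R_BC(theta_c, psi_c):
    """Rotation from body frame {B} to camera frame {C}, per eq. (2)."""
    cth, sth = np.cos(theta_c), np.sin(theta_c)
    cpsi, spsi = np.cos(psi_c), np.sin(psi_c)
    return np.array([
        [cth * cpsi, cth * spsi, -sth],
        [-spsi,      cpsi,        0.0],
        [sth * cpsi, sth * spsi,  cth],
    ])

def project_point(p_i, p_B, p_CB, euler, gimbal, f):
    """Eqs. (3)-(5): map feature point p_i (in {E}) to image coords [u, v]."""
    Reb = R_EB(*euler)                        # euler = (phi, theta, psi)
    Rbc = R_BC(*gimbal)                       # gimbal = (theta_c, psi_c)
    p_c = Rbc @ (Reb @ (p_i - p_B) - p_CB)    # eq. (3)
    x_c, y_c, z_c = p_c                       # components of eq. (4)
    return f * np.array([y_c / x_c, z_c / x_c])  # eq. (5); optical axis is x_c

# Illustrative call (all numbers hypothetical): feature point A viewed from
# 400 m short of the threshold at 50 m altitude, assuming a z-down frame.
uv = project_point(np.array([30.0, -15.0, 0.0]), np.array([-400.0, 0.0, -50.0]),
                   np.zeros(3), (0.0, 0.1, 0.0), (0.05, 0.0), 0.008)
```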

For the autonomous landing integrated navigation system, it is assumed that before the UAV enters the vision-based autonomous landing phase, the onboard navigation system has generally been calibrated several times in flight; but because the UAV is then flying high and fast, the calibration accuracy is not sufficient to guide the UAV through the landing maneuver. It is assumed here that before entering the vision-based autonomous landing phase:

(1) the aircraft has been successfully guided to the designated area;

(2) the onboard navigation system has been calibrated in flight, with altitude error under ±30 m, horizontal position error under ±200 m, velocity error under ±1 m/s, and Euler angle measurement error under ±1 degree.

Regarding the autonomous landing guidance process: after the UAV enters the vision-based autonomous landing phase, the airborne camera system predicts the positions of the feature points from the navigation system's measurements and steers the camera pan-tilt so that the camera searches a bounded region for the feature image laid out in advance on the runway. Once the feature pattern has been acquired, the camera system enters tracking mode: it controls the pan-tilt to track the feature image, performs the corresponding image processing, and extracts the information of the feature points in the feature image. This is fused with the airborne inertial navigation system's information, together with the measurements of onboard sensors such as the altimeter system and magnetic compass, to obtain high-precision UAV navigation information.

Each runway feature point 11 of the runway features 1 that can be imaged on the camera view plane yields, through equations (4) and (5), two equations parameterized by the UAV position and attitude. Because these equations are nonlinear, only three feature points would yield multiple solution sets for position and attitude; a unique solution requires at least four feature points in general position. In the autonomous landing navigation system, seven feature points are used as runway features in order to reduce the influence of noise on the system and improve its estimation accuracy.

Because algorithms for extracting the vertices of polygonal regions are mature and robust, the vertices of triangles and quadrilaterals have high curvature and are therefore easy to extract, and the runway surface is dark, the runway features 1 are laid out by painting a combined triangle-and-quadrilateral pattern in a high gray value (white) near the runway threshold. Using a combination of two different shapes also helps the UAV determine the landing direction. Since the angle between the aircraft's landing path and the runway is small, the pattern is strongly compressed along the runway direction when imaged, so the polygonal regions are designed to be longer along the runway direction. Placing the markings before the aircraft's touchdown point prevents braking during rollout from leaving excessive tire marks on them, which would degrade recognition for other UAVs, and it also yields as large an image as possible during the glide.

The airborne camera system 24 in the airborne sensor subsystem 2 consists of a two-degree-of-freedom pan-tilt 241, a camera 242, and an image processing module 243. The pan-tilt 241 is fixed to the underside of the UAV and, under the pan-tilt controller, can pitch and rotate continuously in azimuth relative to the UAV, enabling the system to search for and lock onto the target. The camera 242 is an integrated PAL color camera whose zoom can be commanded so that the runway feature pattern images in the camera 242 at a size favorable to the vision algorithms. The image processing module 243 captures the scene filmed by the camera 242 and processes the captured images to obtain the coordinates at which the runway feature points 11 are imaged on the view plane; the computer vision techniques used for image processing in module 243 are the key to the airborne camera system 24.

The airborne inertial navigation system 21 in the airborne sensor subsystem 2 uses an Inertial Measurement Unit (IMU) composed of three orthogonally mounted accelerometers and three orthogonal gyroscopes rigidly fixed to the airframe. The IMU directly measures the three body-axis accelerations and three angular rates of the aircraft, from which the information fusion subsystem 3 can compute the aircraft's velocity, position, and attitude. The IMU is generally considered to have an excellent high update rate (above 100 Hz) with real-time output.

The altimeter system 22 in the airborne sensor subsystem 2 combines a barometric altimeter 221 and a radar altimeter 222. Exploiting the low-frequency characteristics of the barometric altimeter 221 and the high-frequency characteristics of the radar altimeter 222, complementary filtering (computed in the data processing module 33) eliminates the steady-state error of the barometric altimeter 221 and the large measurement noise of the radar altimeter 222, yielding a high-precision altitude estimate. The combination of the two altimeters gives the system a relatively low sampling frequency, but its output is real-time; the altimeter system 22 samples at 30 Hz.
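The text does not spell out the complementary filter equations, so the following is only a plausible first-order sketch: the radar altitude is low-passed (suppressing its large measurement noise), the barometric altitude is high-passed (removing its steady-state bias), and the two parts are summed. The time constant `tau` is a hypothetical tuning parameter.

```python
class ComplementaryAltitude:
    """First-order complementary filter: low-pass the radar altimeter and
    high-pass the barometric altimeter, then sum the two components."""

    def __init__(self, tau, dt):
        self.alpha = tau / (tau + dt)   # crossover set by time constant tau
        self.h_lp = None                # low-passed radar altitude
        self.h_hp = 0.0                 # high-passed barometric altitude
        self.baro_prev = None

    def update(self, h_baro, h_radar):
        if self.h_lp is None:           # initialise on the first sample
            self.h_lp, self.baro_prev = h_radar, h_baro
        self.h_lp = self.alpha * self.h_lp + (1.0 - self.alpha) * h_radar
        self.h_hp = self.alpha * (self.h_hp + h_baro - self.baro_prev)
        self.baro_prev = h_baro
        return self.h_lp + self.h_hp    # fused altitude estimate
```

For instance, `ComplementaryAltitude(tau=1.0, dt=1.0/30.0)` would match the 30 Hz sampling rate mentioned above; the choice of `tau` would have to be tuned against the actual sensor noise.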

Using information fusion technology, the information fusion subsystem 3 can fully fuse the spatially and temporally redundant or complementary information of the sensors in the airborne sensor subsystem 2, letting each compensate for the others' weaknesses, so that the UAV's navigation information is given accurately, the uncertainty of the sensor measurements is reduced, and the system's reliability is improved. Since the information fusion of the airborne sensor subsystem 2 during UAV landing is mainly detection-level fusion, and the Kalman filter is well suited to fusing dynamic low-level redundant data in real time, this scheme adopts a fusion algorithm based on the Kalman filter.

As for the computer vision algorithms: during autonomous landing, the airborne camera system 24 controls the two-degree-of-freedom pan-tilt 241 and the focal length of the camera 242 according to the estimated UAV state, so that the camera can search for, capture, and track the runway feature pattern, and the image processing module 243 runs a series of computer vision algorithms to obtain the feature point coordinates used for UAV position and attitude estimation. The vision processing pipeline has three stages: image preprocessing, region segmentation and labeling, and feature extraction and target recognition.

1) Image Preprocessing

In general, because of various limitations and random disturbances, the raw images obtained by the camera 242 cannot be analyzed directly by the vision system; they must first undergo preprocessing such as gray-level correction and noise filtering.

Given the task characteristics of the UAV airborne camera system 24, and since grayscale processing algorithms are relatively mature, fast, and computationally cheap, the captured color image is first converted to a grayscale image by the formula:

Y = 0.299×R + 0.596×G + 0.211×B    (6)

where R, G, and B denote the red, green, and blue color components of the image, respectively.

Because the captured images contain noise, they must be filtered. The noise present in image acquisition can generally be approximated as white noise. Median filtering is a nonlinear smoothing method with good low-pass characteristics that also preserves as much edge detail as possible; its main idea is to replace the current image point with the median of the brightness in its neighborhood. Since the neighborhood median is unaffected by individual noise spikes, median filtering removes impulse noise quite well while preserving edge detail. This scheme applies a 7×7 median filter to the grayscale image.
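A minimal sketch of this preprocessing stage, applying equation (6) and then the 7×7 median filter. The green and blue coefficients are kept exactly as printed in equation (6); note that the common ITU-R BT.601 luma weights are 0.587 and 0.114, so the values here follow the text rather than that standard.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(rgb):
    """Grayscale conversion per eq. (6), then 7x7 median filtering."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.299 * r + 0.596 * g + 0.211 * b   # eq. (6), coefficients as printed
    return median_filter(gray, size=7)         # 7x7 median filter
```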

2) Region Segmentation and Labeling

The task of this stage is to obtain, from the preprocessed image, the candidate regions needed for feature recognition; it involves two processes: thresholding segmentation and region growing and labeling.

The filtered image is thresholded to convert the grayscale image into a binary image, which effectively reduces the computation of subsequent processing. The thresholding algorithm first selects a gray threshold within the image's gray-value range, compares each pixel's gray value with it, and divides the pixels into two classes accordingly: pixels brighter than the threshold form one class, the seed regions for the region-growing stage, and pixels darker than the threshold form the other. Since landings occur at different times of day and the brightness of the captured image varies with illumination, a fixed threshold cannot be used. Because the purpose of thresholding is only to obtain seeds for region growing, and the runway features have relatively high gray values in the image, the threshold can be set relatively high. Experimental comparison of different threshold-selection methods showed that setting the threshold at a fixed proportion between the image's maximum and minimum gray values, though simple, gives the best compromise between robustness and segmentation quality.

From the seed regions obtained by thresholding, region growing produces the regions to be processed, which include the complete runway feature pattern. Region growing aggregates pixels or subregions into larger regions according to a predefined growth criterion: starting from a set of "seed" regions, neighboring pixels with properties similar to a seed are attached to it. Here the growth criterion is: a neighboring candidate pixel is considered similar to the seed region if the difference between its gray value and the region's mean gray value is below a preselected fixed value. Labeling assigns a unique number (integer) to each non-adjacent region once growing is complete.
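A compact sketch of this stage under the stated criteria; the threshold ratio and the similarity tolerance `delta` are illustrative values, not specified in the text.

```python
import numpy as np
from collections import deque

def segment_and_label(gray, ratio=0.8, delta=20.0):
    """Threshold at a fixed proportion between min and max gray to get seeds,
    grow each seed while neighbours stay within `delta` of the region mean,
    and label each grown region with a unique integer."""
    lo, hi = float(gray.min()), float(gray.max())
    seeds = gray >= lo + ratio * (hi - lo)      # fixed-proportion threshold
    labels = np.zeros(gray.shape, dtype=int)
    H, W = gray.shape
    next_label = 0
    for sy, sx in zip(*np.nonzero(seeds)):
        if labels[sy, sx]:
            continue                            # already absorbed by a region
        next_label += 1
        labels[sy, sx] = next_label
        total, count = float(gray[sy, sx]), 1   # running region mean
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < H and 0 <= nx < W and not labels[ny, nx]
                        and abs(float(gray[ny, nx]) - total / count) < delta):
                    labels[ny, nx] = next_label
                    total += float(gray[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return labels, next_label
```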

3) Feature Extraction and Target Recognition

The task of this stage is to extract features usable for recognition from the multiple non-adjacent candidate regions, describe each region abstractly, identify through analysis of these features the two regions corresponding to the runway markings, and finally obtain from these two regions the coordinates of the seven required feature points in the view-plane coordinate system {I}, which are used to estimate the aircraft's position and attitude.

Since the airborne camera system 24 can, before vision processing, predict from the other sensors' information where the runway feature points will image on the view plane, and considering the characteristics of the runway markings, the following region descriptors are chosen as features:

a. Region area: the total number of pixels the region contains.

b. Elongation: the aspect ratio of the minimum-area rectangle enclosing the region. The minimum bounding rectangle is found by rotating the rectangle in discrete steps until the minimum area is reached.

c. Orientation: the direction of the long side of the minimum bounding rectangle. Since the runway markings image as elongated regions on the view plane, orientation is a reasonable feature.

d. Compactness: the ratio of the squared length of the region's outer boundary to the region's area.

e. Region position: the coordinates on the view plane of the center of symmetry of the region's minimum bounding rectangle.

f. Projections: the region's shape is analyzed via its projections along the long and short sides of the minimum bounding rectangle.

g. Relative position of the two regions: the distance and relative direction between them.

Once the region features have been obtained, they are compared with the feature values derived from the predicted runway feature point coordinates, and the two runway marking regions are finally determined.
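A sketch of computing a subset of descriptors a. through g. for one labeled region; the one-degree rotation step and the use of the pixel centroid as the region position are simplifications of our own.

```python
import numpy as np

def region_descriptors(labels, k):
    """Area, minimum-area bounding rectangle by discrete-step rotation,
    elongation, orientation, and centre for region k. Compactness and the
    projection profiles (descriptors d. and f.) are omitted for brevity."""
    ys, xs = np.nonzero(labels == k)
    pts = np.column_stack([xs, ys]).astype(float)
    area = len(pts)                                 # a. region area
    best = None
    for ang in np.deg2rad(np.arange(0.0, 90.0, 1.0)):   # discrete rotation steps
        c, s = np.cos(ang), np.sin(ang)
        rot = pts @ np.array([[c, -s], [s, c]])
        w, h = rot.max(axis=0) - rot.min(axis=0)
        if best is None or w * h < best[0]:
            best = (w * h, max(w, h) / max(min(w, h), 1.0), ang)
    _, elongation, orientation = best               # b. and c.
    centre = pts.mean(axis=0)                       # e. (centroid approximation)
    return {"area": area, "elongation": elongation,
            "orientation": orientation, "centre": centre}
```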

Because the angle between the UAV and the runway is small during landing, the runway feature regions are strongly compressed along the runway direction when imaged, making the three vertices placed on the runway centerline (points B, D, and F) hard to detect accurately. The feature point identification strategy of the present invention is therefore: first detect the boundaries and corners of the two runway feature regions using the Smallest Univalue Segment Assimilating Nucleus (SUSAN) principle, and select in each region the two corners of greatest curvature, corresponding to points A, C, E, and G. Since the intersection H of the lines through E, C and through G, A, and the intersection I of the lines through E, A and through G, C, both lie on the runway centerline, and points B, D, and F also lie on this line, points B, D, and F can be determined as the intersections of the line through H and I with the boundaries of the two runway regions.
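This centerline construction lends itself to homogeneous image coordinates, where lines through point pairs and their intersections are cross products; a small sketch (assuming the lines in each pair are not parallel, so the intersections are finite):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points p and q."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines (assumed non-parallel)."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

def runway_centerline(A, C, E, G):
    """H = (EC x GA) and I = (EA x GC) both lie on the centerline; the line
    through H and I is then intersected with the region boundaries to
    recover points B, D and F."""
    H = intersect(line_through(E, C), line_through(G, A))
    I = intersect(line_through(E, A), line_through(G, C))
    return H, I, line_through(H, I)
```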

Regarding the real-time requirements of the vision algorithms: vision processing is a fairly complex task that takes time, and achieving a high frame rate (above 30 frames/s, say) is very difficult without very high-performance image processing hardware; the measurement output of the airborne camera system 24 is therefore generally low-frequency and somewhat delayed. Moreover, the image processing time varies with changes in the environment or in the processing task. During a search operation, the vision algorithms run over the whole image plane; once the system has captured and is tracking the target, the imaged position of the feature markings can be predicted accurately, and image processing is confined to a small predicted window. Processing times therefore differ considerably between the two modes.

Tests on our existing prototype system show that while searching for the target, image processing runs at about 8 frames/s with a delay of about 150 ms (including frame-grabber capture time); while tracking the target, it runs at about 18 frames/s with a delay of about 75 ms (including frame-grabber capture time).

Regarding the information fusion and state estimation algorithms: as the number of onboard sensors grows, multi-sensor information fusion has become an important means of state estimation. The Kalman filter, an effective method for position estimation in information fusion, uses a recursive form with small data storage and can handle not only stationary random processes but also multi-dimensional and non-stationary ones. Kalman filtering has become the main technical means of multi-sensor information fusion. This section discusses how to fuse the visual information with the inertial navigation and altimeter information to construct a multi-sensor, multi-rate extended Kalman filter suited to UAV autonomous landing.

Regarding the construction of the system measurement and dynamic equations: the most important quantities in the state estimation are the feature point measurements obtained by the airborne camera system 24 through computer vision. This scheme takes the measured values of the seven feature points in the view-plane coordinate system {I} as part of the system measurement vector. Because of image capture, quantization, and processing, the measured feature point values are assumed to differ from the true values by mutually independent white noise [η_ui, η_vi]^T, so that:

$$\begin{bmatrix} z_{2i-1} \\ z_{2i} \end{bmatrix}=\begin{bmatrix} \tilde{u}_i \\ \tilde{v}_i \end{bmatrix}=\begin{bmatrix} u_i \\ v_i \end{bmatrix}+\begin{bmatrix} \eta_{ui} \\ \eta_{vi} \end{bmatrix},\quad i=1,2,\dots,7. \quad (7)$$

In addition to these 14 measurements, the system also has the flight altitude measured by the altimeter system 22 and the yaw angle measured by the magnetic compass 23, expressed as:

$$\begin{bmatrix} z_{15} \\ z_{16} \end{bmatrix}=\begin{bmatrix} \tilde{H} \\ \tilde{\psi} \end{bmatrix}=\begin{bmatrix} H \\ \psi \end{bmatrix}+\begin{bmatrix} \eta_H \\ \eta_\psi \end{bmatrix} \quad (8)$$

The image feature point information of equation (7) and the other sensor information of equation (8) are related to the UAV's position and attitude in the inertial coordinate system through the relation h(X(k)); at time k this is:

$$Z(k)=h(X(k))+\zeta_z(k) \quad (9)$$

where Z(k) = [z_1, z_2, …, z_16]^T; X(k) is the system state vector, comprising the UAV's position (x, y, z) and velocity (V_x, V_y, V_z) in the inertial coordinate system and its attitude angles (φ, θ, ψ); and ζ_z(k) is a white noise vector with diagonal covariance matrix R(k). The feature point measurements are obtained directly by the airborne camera system 24 through the image processing module 243, while the estimates of the feature point measurements are computed from the system state X(k) by applying the camera system model 32 to the runway model 31, as shown in Figure 2, using equations (1)-(5).
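A sketch of the predicted measurement function h(X) of equation (9), reusing the `project_point` sketch above and a list of the seven feature point coordinates in {E} (given in the embodiment below). The state ordering, the camera parameters `p_CB`, `gimbal`, `f`, and the sign convention for altitude (taken as -z, assuming a z-down inertial frame) are our assumptions.

```python
import numpy as np

def h_measure(X, runway_pts, p_CB, gimbal, f):
    """Predicted Z(k) of eq. (9): the 14 image coordinates of the 7 runway
    feature points (eq. (7)) stacked with altitude and yaw (eq. (8)).
    Assumed state layout: X = [x, y, z, Vx, Vy, Vz, phi, theta, psi]."""
    p_B, euler = X[0:3], X[6:9]
    z = np.concatenate([project_point(p, p_B, p_CB, euler, gimbal, f)
                        for p in runway_pts])   # z1..z14: [u1, v1, ..., u7, v7]
    return np.append(z, [-X[2], X[8]])          # z15 = altitude H, z16 = yaw psi
```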

For state estimation, we take the UAV's position (x, y, z) and velocity (V_x, V_y, V_z) in the inertial coordinate system and its attitude angles (φ, θ, ψ) as the system state variables X(k); the UAV's kinematic relations give the system dynamic equation:

$$X(k+1)=\Phi(X(k))+\zeta_x(k) \quad (10)$$

where Φ(·) is the nonlinear state equation of the system and ζ_x(k) is a white noise vector with diagonal covariance matrix Q(k).

Regarding the multi-rate extended Kalman filter: with the measurement equation (9) and the dynamic equation (10), an extended Kalman filter (EKF) can be used to build a dynamic estimator for the UAV, estimating its position and attitude in space. The EKF method must assume that at every state update and measurement update, each sensor has a measurement at that instant. We note, however, that the information from the sensor systems in the airborne sensor subsystem 2 has different update frequencies and different time-delay characteristics; in particular, the update rate and delay of the airborne camera system 24 are not fixed. The extended Kalman filter must therefore be adapted to the characteristics of the information, producing a multi-rate extended Kalman filter (MEKF) that can genuinely handle the information of the UAV's multi-sensor system.

From the system dynamic equation (10), the system state variables can be supplied by the inertial navigation system. The airborne inertial navigation system 21 has a relatively high output frequency, assumed here to be 1/T_1, while the output frequency of the measurement equation (9) is determined by the update frequencies of the airborne camera system 24, the altimeter system 22, and the magnetic compass 23; the airborne camera system 24 has the lowest output frequency of the three and the largest delay. We may assume that all three sensors share the camera system's output frequency 1/T_V and its output delay T_d. For simplicity, let T_V = pT_1 and T_d = qT_1, where p and q are variable positive integers.

The MEKF's handling of multi-rate measurements and measurement delay rests mainly on the following two mechanisms:

1) Event-driven operation: when no visual measurement data are available, the filter only propagates the state prediction and error covariance; when a measurement occurs, the measured data are used to correct the filter's predicted state and error covariance, as in equation (11).

2) Data smoothing based on delay correction: because the camera system has delay T_d, the measurement obtained at time kT_1 corresponds to the state X(k-q). The stored state values X(i), i = k-q, …, k, are therefore used to first correct the earlier state and error covariance, after which the filter dynamic equation is applied to smooth the data and update the current state estimate and covariance, that is, to obtain Φ_d(·) in equation (11).

By processing each measurement of the airborne sensor subsystem 2 with the MEKF, the navigation system not only has the same high bandwidth as the inertial navigation system but, thanks to the vision system, can also correct the inertial navigation system's drift, yielding high-rate, high-precision navigation information.
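A skeleton of the two MEKF mechanisms in NumPy: prediction at every INS step, and a delayed update applied at the past state the measurement refers to, followed by forward re-propagation. The interfaces (caller-supplied callables Phi, F, h, Hj for the dynamics, its Jacobian, the measurement model, and its Jacobian) are our own simplification; equation (11) itself is not reproduced in this text.

```python
import numpy as np

class MEKF:
    """Multi-rate EKF skeleton: event-driven updates plus delay correction."""

    def __init__(self, Phi, F, h, Hj, Q, R, x0, P0):
        self.Phi, self.F, self.h, self.Hj = Phi, F, h, Hj
        self.Q, self.R = Q, R
        self.hist = [(x0, P0)]              # stored (X(i), P(i)) history

    def predict(self):
        """Every T1 step: propagate state and error covariance only."""
        x, P = self.hist[-1]
        Fk = self.F(x)
        self.hist.append((self.Phi(x), Fk @ P @ Fk.T + self.Q))

    def update(self, z, q):
        """Vision measurement z, delayed by q steps (Td = q*T1):
        correct the past state X(k-q), then re-propagate to the present."""
        x, P = self.hist[-1 - q]            # state the measurement refers to
        Hk = self.Hj(x)
        K = P @ Hk.T @ np.linalg.inv(Hk @ P @ Hk.T + self.R)
        x = x + K @ (z - self.h(x))
        P = (np.eye(len(x)) - K @ Hk) @ P
        self.hist[-1 - q] = (x, P)
        for i in range(len(self.hist) - q, len(self.hist)):  # smooth forward
            xp, Pp = self.hist[i - 1]
            Fk = self.F(xp)
            self.hist[i] = (self.Phi(xp), Fk @ Pp @ Fk.T + self.Q)
```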

The advantage of the vision-based autonomous landing navigation system for unmanned aircraft of the present invention is that, without relying on external information, it uses only passive sensors such as the airborne inertial navigation system, the altimeter system, the magnetic compass, and the airborne vision system, together with images of cooperative ground markings, to provide the UAV with navigation information relative to the runway during landing, improving the aircraft's survivability; it can also serve as a source of auxiliary navigation information during the landing of manned aircraft.

(4) Description of Drawings:

Figure 1: Schematic of the coordinate system definitions of the vision-based integrated navigation scheme.

Figure 2: Structure of the vision-based UAV autonomous landing integrated navigation system.

Figure 3(a): Color image captured by the system.

Figure 3(b): Image after preprocessing.

Figure 3(c): Image after region segmentation.

Figure 3(d): Feature extraction and target recognition (enlarged).

The reference numerals in the figures are as follows:

1 runway features; 11 runway feature points (A-G)

2 airborne sensor subsystem; 21 airborne inertial navigation system; 22 altimeter system; 221 barometric altimeter; 222 radar altimeter; 23 magnetic compass; 24 airborne camera system; 241 two-degree-of-freedom pan-tilt; 242 camera; 243 image processing module

3 information fusion subsystem; 31 runway model; 32 camera system model; 33 data processing module

(5) Specific Embodiments:

The vision-based autonomous landing navigation system for unmanned aircraft of the present invention consists of two parts: software algorithms and hardware devices.

The software algorithms comprise computer vision algorithms and information fusion and state estimation algorithms.

Referring to Figure 2, the hardware devices comprise runway features 1 arranged on the runway plane, an airborne sensor subsystem 2 for measuring the UAV state, and an information fusion subsystem 3 for processing the sensor measurements.

The airborne sensor subsystem 2 comprises an airborne camera system 24, an airborne inertial navigation system 21, an altimeter system 22, and a magnetic compass 23. The airborne inertial navigation system 21, altimeter system 22, magnetic compass 23, and other sensors measure the true state of the UAV; the airborne camera system 24 tracks and analyzes the runway features 1 to obtain measured values of the runway feature points 11, and these measurements are passed to the information fusion subsystem 3.

From its estimate of the aircraft state at the previous instant and the airborne sensor subsystem 2's measurements of the aircraft state at the current instant, the information fusion subsystem 3 obtains estimates of the runway feature points 11 through the runway model 31 and the camera system model 32, compares them with the measured values of the runway feature points 11, fuses the other measurement information, and, through the computation of the data processing module 33, finally obtains high-precision navigation information.

To use the feature information arranged on the runway to estimate the position and attitude of the UAV relative to the runway during navigation, the features on the ground must be linked to the features captured by the camera and a functional relationship established between them; coordinate systems must therefore be defined, mainly comprising the inertial coordinate system {E}, the body coordinate system {B}, the camera coordinate system {C}, and the view-plane coordinate system {I}.

The definitions of the coordinate systems are shown in Figure 1: the inertial coordinate system {E: o-xyz}, the body coordinate system {B: o_b-x_b y_b z_b}, the camera coordinate system {C: o_c-x_c y_c z_c}, and the view-plane coordinate system {I: (u, v)}. The camera coordinate system {C} is obtained from the body coordinate system {B} by a simple translation followed by a horizontal rotation ψ_c and then a pitch rotation θ_c (ψ_c and θ_c are the rotation angles of the camera pan-tilt relative to the body coordinate system). For ease of analysis, the rotation center of the pan-tilt is assumed to coincide with the center of the camera's optical axis. The view-plane coordinate system {I} is defined consistently with the camera coordinate system {C}.

According to the geometric relationships in the system, the following symbols are defined: R_EB denotes the rotation transformation from {E} to {B}; R_BC is the rotation transformation from the body coordinate system {B} to the camera coordinate system {C}; P_B denotes the position of the UAV in the inertial coordinate system {E}; P_C^B denotes the position of the camera in the body coordinate system {B}. The transformation R_EB is usually expressed by the Euler angles (φ, θ, ψ) from {E} to {B}; R_BC can be expressed by the azimuth and pitch angles (ψ_C, θ_C) of the camera pan-tilt, as shown in equations (1) and (2).

$$R_{EB}=\begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ -\cos\phi\sin\psi+\sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi+\sin\phi\sin\theta\sin\psi & \sin\phi\cos\theta \\ \sin\phi\sin\psi+\cos\phi\sin\theta\cos\psi & -\sin\phi\cos\psi+\cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{bmatrix} \quad (1)$$

$$R_{BC}=\begin{bmatrix} \cos\theta_c\cos\psi_c & \cos\theta_c\sin\psi_c & -\sin\theta_c \\ -\sin\psi_c & \cos\psi_c & 0 \\ \sin\theta_c\cos\psi_c & \sin\theta_c\sin\psi_c & \cos\theta_c \end{bmatrix} \quad (2)$$

Let the coordinates of the i-th feature point in the inertial coordinate system {E} be P_i = (x_i, y_i, z_i)^T; its coordinates in the camera coordinate system {C} are then expressed as:

$$\vec{P}_i^{\,C}=R_{BC}\left(R_{EB}(\vec{P}_i-\vec{P}_B)-\vec{P}_C^{\,B}\right)=R_{BC}R_{EB}(\vec{P}_i-\vec{P}_B)-R_{BC}\vec{P}_C^{\,B} \quad (3)$$

RBC·REB=r1r2r3r4r5r6r7r8r9,-RBC·P→CB=(kx,ky,kz)T,将式(3)写成分量形式为:make R BC · R EB = r 1 r 2 r 3 r 4 r 5 r 6 r 7 r 8 r 9 , - R BC · P &Right Arrow; C B = ( k x , k the y , k z ) T , write formula (3) into components as:

$$\begin{bmatrix} x_i^C \\ y_i^C \\ z_i^C \end{bmatrix}=\begin{bmatrix} r_1(x_i-x)+r_2(y_i-y)+r_3(z_i-z)+k_x \\ r_4(x_i-x)+r_5(y_i-y)+r_6(z_i-z)+k_y \\ r_7(x_i-x)+r_8(y_i-y)+r_9(z_i-z)+k_z \end{bmatrix} \quad (4)$$

A perspective projection based on the pinhole camera model is used between the camera coordinate system {C} and the view-plane coordinate system {I}. Thus, with camera focal length f, the coordinates of the i-th feature point in the view-plane coordinate system {I} are:

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix}=f\cdot\begin{bmatrix} y_i^C/x_i^C \\ z_i^C/x_i^C \end{bmatrix} \quad (5)$$

Since the positions of the feature points relative to the runway are known, equations (4) and (5) relate the view-plane coordinates [u_i, v_i]^T and the inertial coordinates [x_i, y_i, z_i]^T to the position and attitude of the UAV relative to the runway.

Before the UAV enters the vision-based autonomous landing phase, the onboard navigation system has generally been calibrated several times in flight; but because the UAV is then flying high and fast, the calibration accuracy is not sufficient to guide the UAV through the landing maneuver. It is assumed here that before entering the vision-based autonomous landing phase:

(1) the aircraft has been successfully guided to the designated area;

(2) the airborne navigation system has been calibrated in the air, with an altitude error of less than ±30 m, a horizontal position error of less than ±200 m, a velocity error of less than ±1 m/s, and Euler angle measurement errors of less than ±1 degree.

After the UAV enters the vision-based autonomous landing process, the airborne camera system predicts the positions of the feature points from the navigation system's measurements and steers the camera gimbal so that the camera searches a given region for the feature image painted in advance on the airport runway. Once the feature pattern has been acquired, the camera system switches to tracking mode: it controls the gimbal to track the feature image, applies the corresponding image processing, and extracts the information of the feature points in the image. This is fused with the airborne inertial navigation information and with the measurements of other airborne sensors, such as the altimeter system and the magnetic compass, to obtain high-precision UAV navigation information.

Regarding the selection of the runway features 1: every runway feature point 11 that can be imaged on the camera view plane yields, through formulas (4) and (5), two equations with the UAV position and attitude as parameters. Since these equations are nonlinear, only three feature points would produce multiple solutions for position and attitude. To obtain a unique solution, at least four feature points in general position are required. In the autonomous landing navigation system, seven feature points are used as the runway features in order to reduce the influence of noise on the system and improve its estimation accuracy.

The algorithms for extracting the vertices of polygonal regions are mature and robust, the vertices of triangles and quadrilaterals have high curvature and are therefore easy to extract, and the runway surface is dark. The runway features 1 are therefore laid out by painting a combined pattern of a high-gray-value (white) triangle and quadrilateral near the start of the runway, as shown by points A to G in Figure 1. The two figures are symmetric about the runway centerline, and points B, D and F lie on the centerline. The coordinates of these runway feature points 11 in the inertial coordinate system {E} are: A(30, -15, 0), B(10, 0, 0), C(30, 15, 0), D(70, 0, 0), E(200, -20, 0), F(150, 0, 0) and G(200, 20, 0). Using a combination of two different figures also helps the UAV determine the landing direction. Because the angle between the aircraft's landing trajectory and the runway is small, the pattern is strongly compressed along the runway direction when imaged, so the polygonal regions are designed to be longer along the runway. Placing the markings before the aircraft touchdown point prevents braking during rollout from leaving excessive tire marks on the markings, which would degrade recognition for other UAVs, and it also gives the largest possible image during the descent.
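As a usage example for the projection sketch above (same imports, reusing project_point), the seven feature points can be stored with the coordinates given in the text and projected for a hypothetical approach pose; the pose values, the downward-positive z convention and the focal length are illustrative assumptions only.

```python
# Runway feature points A-G in the inertial frame {E}, as listed above.
FEATURES = {
    "A": np.array([30.0, -15.0, 0.0]),  "B": np.array([10.0, 0.0, 0.0]),
    "C": np.array([30.0, 15.0, 0.0]),   "D": np.array([70.0, 0.0, 0.0]),
    "E": np.array([200.0, -20.0, 0.0]), "F": np.array([150.0, 0.0, 0.0]),
    "G": np.array([200.0, 20.0, 0.0]),
}

# Hypothetical approach pose: 800 m short of the threshold at 50 m altitude
# (z taken positive downward here, an assumption of this sketch).
p_b = np.array([-800.0, 0.0, -50.0])
euler = (0.0, np.radians(-3.0), 0.0)   # (phi, theta, psi)
gimbal = (np.radians(3.0), 0.0)        # (theta_c, psi_c)
p_cb = np.zeros(3)                     # camera assumed at the body origin
for name, p_i in FEATURES.items():
    print(name, project_point(p_i, p_b, euler, gimbal, p_cb, f=0.008))
```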

The airborne camera system 24 consists of a two-degree-of-freedom gimbal 241, a camera 242 and an image processing module 243. The gimbal 241 is fixed to the underside of the UAV and, under the control of the gimbal controller, can perform pitch motion and continuous horizontal motion relative to the UAV, allowing the system to search for and lock onto the target. The camera 242 is an integrated PAL color camera whose zoom can be commanded so that the runway features 11 are imaged in the camera 242 at a size favorable to the vision algorithms. The image processing module 243 captures the scene filmed by the camera 242 and processes the captured images to obtain the coordinates at which the runway feature points 11 are imaged on the view plane. The computer vision techniques used for image processing in the module 243 are the key to the airborne camera system 24 and are discussed below.

The airborne inertial navigation system 21 uses an inertial measurement unit (IMU) consisting of three orthogonally mounted accelerometers and three orthogonal gyroscopes rigidly attached to the airframe. The IMU directly measures the three body-axis accelerations and three angular rates of the aircraft, from which the information fusion subsystem 3 can compute the aircraft's velocity, position and attitude. The IMU is generally considered to have an excellent, high update rate (above 100 Hz) with real-time output.

The altimeter system 22 combines a barometric altimeter 221 and a radar altimeter 222. Exploiting the low-frequency characteristics of the barometric altimeter 221 and the high-frequency characteristics of the radar altimeter 222, a complementary filter (computed in the data processing module 33) eliminates the steady-state error of the barometric altimeter 221 and the large measurement noise of the radar altimeter 222, yielding a high-precision altitude estimate. Because two altimeters are combined, the system's sampling frequency is relatively low, but its output is real-time; the altimeter system 22 samples at 30 Hz.
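A complementary filter of this kind can be sketched as follows. This is one plausible reading of the text, assuming the radar altimeter supplies the bias-free level and the barometric altimeter the smooth short-term increments, which matches the stated outcome (baro bias removed, radar noise attenuated); the discrete first-order form and the crossover constant are our assumptions, not values from the patent.

```python
import numpy as np

def fuse_altitude(h_baro, h_radar, dt, tau=2.0):
    """Complementary altitude fusion of barometric and radar altimeters.
    tau (seconds) is an assumed crossover time constant."""
    alpha = tau / (tau + dt)          # weight of the high-frequency path
    h = np.empty(len(h_baro))
    h[0] = h_radar[0]
    for k in range(1, len(h)):
        # High-pass the barometric increments (removes its steady-state
        # bias), low-pass the radar level (attenuates its noise).
        h[k] = alpha * (h[k - 1] + h_baro[k] - h_baro[k - 1]) \
               + (1 - alpha) * h_radar[k]
    return h
```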

Using information fusion techniques, the information fusion subsystem 3 can fully fuse the spatially and temporally redundant or complementary information of the sensors in the airborne sensor subsystem 2, so that their strengths compensate for each other's weaknesses: the UAV's navigation information is given accurately, the uncertainty of the sensor measurements is eliminated, and the reliability of the system is improved. Since the information fusion of the airborne sensor subsystem 2 during a UAV landing is mainly fusion at the detection level, and the Kalman filtering method is well suited to fusing dynamic low-level redundant data in real time, this scheme adopts a fusion algorithm based on Kalman filtering.

Regarding the computer vision algorithm: during autonomous landing, the airborne camera system 24 controls the two-degree-of-freedom gimbal 241 and the focal length of the camera 242 according to the estimated UAV state, so that the camera can search for, capture and track the runway feature pattern, and the image processing module 243 runs a series of computer vision algorithms to obtain the feature point coordinates used for UAV position and attitude estimation. The vision processing pipeline has three stages: image preprocessing, region segmentation and labeling, and feature extraction and target recognition.

1) Image preprocessing

In general, the raw images obtained by the camera 242 are subject to various limitations and random disturbances and usually cannot be analyzed directly by the vision system; the raw image must first undergo preprocessing such as gray-level correction and noise filtering.

Given the task characteristics of the UAV airborne camera system 24, and considering that grayscale processing algorithms are mature, fast and computationally cheap, the captured color image is first converted to a grayscale image by the formula:

Y = 0.299×R + 0.596×G + 0.211×B    (6)

where R, G and B denote the red, green and blue color components of the image, respectively.

Because the captured images contain noise, they must be filtered. The noise introduced during image acquisition can generally be approximated as white noise. Median filtering is a nonlinear smoothing method with good low-pass characteristics that preserves as much edge detail as possible; its main idea is to replace the current pixel with the median of the brightness values in its neighborhood. Since the neighborhood median is unaffected by individual noise spikes, median filtering suppresses impulse noise very well while preserving the edge detail of the image. This scheme applies a 7×7 median filter to the grayscale image.

Figure 3(a) shows an image captured during a UAV landing; Figure 3(b) shows the same image after grayscale conversion and median filtering.
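A minimal sketch of this preprocessing step with OpenCV follows; the function name is ours, and note the caveat on the conversion weights in the comment.

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Grayscale conversion per formula (6), then 7x7 median filtering.
    The weights are those stated in the text; OpenCV's own cvtColor
    uses the standard BT.601 weights instead."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    gray = 0.299 * r + 0.596 * g + 0.211 * b
    gray = np.clip(gray, 0, 255).astype(np.uint8)
    return cv2.medianBlur(gray, 7)
```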

2) Region segmentation and labeling

The task of this stage is to obtain, from the preprocessed image, the candidate regions needed for feature recognition; it comprises two steps, thresholding segmentation and region growing and labeling.

The filtered image is thresholded to convert the grayscale image into a binary image, which greatly reduces the computational load of the subsequent processing. The thresholding segmentation algorithm first selects a gray threshold within the gray-value range of the image, compares the gray value of every pixel with it, and classifies the pixels into two classes: pixels brighter than the threshold form one class, the seed regions for the region growing stage, and pixels darker than the threshold form the other. Because landings occur at different times of day and the brightness of the captured image varies with the illumination, a fixed threshold cannot be used. Since the purpose of thresholding is only to obtain seeds for region growing, and the runway features have comparatively large gray values in the image, the threshold can be set relatively high. Experimental comparison of different threshold selection methods showed that taking a gray value at a fixed fraction of the range between the image's maximum and minimum gray values as the threshold, although simple, gives the best compromise between robustness and segmentation quality.

Starting from the seed regions produced by thresholding, region growing yields the regions to be processed, which include the complete runway feature pattern. Region growing aggregates pixels or subregions into larger regions according to a predefined growth criterion: beginning with a set of "seed" regions, neighboring pixels whose properties are similar to a seed are attached to that seed. Here the growth criterion is: a neighboring candidate pixel is considered similar to the seed region if the difference between its gray value and the mean gray value of the seed region is smaller than a preselected fixed value. Labeling then assigns a unique number (integer) to each non-adjacent region once growing is complete.

The image after region segmentation and labeling is shown in Figure 3(c).
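A minimal sketch of this segmentation stage is given below; the ratio 0.8 and the similarity tolerance are illustrative parameters, not values from the patent.

```python
from collections import deque
import numpy as np

def segment_and_label(gray, ratio=0.8, tol=25):
    """Threshold at a fixed fraction of the image gray range, then grow
    each seed by attaching 4-neighbors whose gray value is within tol of
    the region's running mean; returns an integer label image."""
    lo, hi = int(gray.min()), int(gray.max())
    thresh = lo + ratio * (hi - lo)
    seeds = gray >= thresh
    labels = np.zeros(gray.shape, dtype=np.int32)
    next_label = 0
    for y, x in zip(*np.nonzero(seeds)):
        if labels[y, x]:
            continue                      # already absorbed by a region
        next_label += 1
        labels[y, x] = next_label
        mean, count = float(gray[y, x]), 1
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < gray.shape[0] and 0 <= nx < gray.shape[1]
                        and not labels[ny, nx]
                        and abs(float(gray[ny, nx]) - mean) < tol):
                    labels[ny, nx] = next_label
                    mean = (mean * count + gray[ny, nx]) / (count + 1)
                    count += 1
                    queue.append((ny, nx))
    return labels
```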

3) Feature extraction and target recognition

The task of this stage is to extract features usable for recognition from the multiple non-adjacent candidate regions, describe each region abstractly, identify through analysis of these features the two regions corresponding to the runway markings, and finally obtain from these two regions the coordinates of the seven required feature points in the view-plane coordinate system {I} for use in estimating the aircraft's position and attitude.

Since the airborne camera system 24 can, before the vision processing, predict from the other sensor information where the runway feature points will be imaged on the view plane, the following region descriptors are chosen as features, based on the characteristics of the runway markings:

h. Area of the region: the total number of pixels contained in the region.

i. Elongation: the aspect ratio of the minimum-area rectangle enclosing the region. The minimum enclosing rectangle is determined by rotating the rectangle in discrete steps until the minimum area is reached.

j. Orientation: the direction of the long side of the minimum enclosing rectangle. Since the runway markings image as elongated regions on the view plane, orientation is a reasonable feature.

k. Compactness: the ratio of the squared length of the region's outer boundary to the region's area.

l. Position of the region: the coordinates, on the view plane, of the center of symmetry of the region's minimum enclosing rectangle.

m. Projections: the shape of the region is analyzed through its projections along the long and short sides of the minimum enclosing rectangle.

n. Relative position of the two regions: the distance and relative direction between the two regions.

Once the region features have been computed, they are compared with the feature values derived from the predicted runway feature point coordinates, and the two runway marking regions are finally identified; a sketch of the descriptor computation follows.
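Descriptors h.–l. can be computed from one labeled region with OpenCV's contour and rotated-rectangle routines. The sketch below is illustrative, with our own function name; OpenCV's angle convention varies across versions, so treat the orientation line as a placeholder.

```python
import cv2
import numpy as np

def region_descriptors(mask):
    """Descriptors h.-l. for one binary region mask (uint8 or bool)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    area = float(mask.sum())                        # h. area in pixels
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    long_side = max(w, h)
    short_side = max(min(w, h), 1e-6)
    elongation = long_side / short_side             # i. elongation
    orientation = angle if w >= h else angle + 90   # j. long-side direction
    perimeter = cv2.arcLength(contour, closed=True)
    compactness = perimeter ** 2 / max(area, 1e-6)  # k. compactness
    position = (cx, cy)                             # l. rectangle center
    return area, elongation, orientation, compactness, position
```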

Because the angle between the UAV and the runway during landing is small, the imaged runway feature regions are strongly compressed along the runway direction, making the three vertices on the runway centerline (points B, D and F) hard to detect accurately. The feature point identification strategy adopted in this invention is therefore: first detect the boundaries and corners of the two runway feature regions using the Smallest Univalue Segment Assimilating Nucleus (SUSAN) principle, and take the two corners of greatest curvature in each region as points A, C, E and G. The intersection H of the line through E, C with the line through G, A and the intersection I of the line through E, A with the line through G, C both lie on the runway centerline, as do points B, D and F; points B, D and F can therefore be determined as the intersections of the line through H and I with the boundaries of the two runway regions, as shown in Figure 3(d).
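Recovering the centerline in this way reduces to a few line intersections, which are conveniently done with homogeneous coordinates (cross products); a minimal sketch, with our own function names:

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two image points (u, v)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines as an (u, v) point.
    (x[2] == 0 would mean parallel lines; not handled in this sketch.)"""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

def centerline(A, C, E, G):
    """Runway centerline from the four outer corners: H = EC x GA and
    I = EA x GC both lie on the centerline, so it is the line through them."""
    H = intersect(line(E, C), line(G, A))
    I = intersect(line(E, A), line(G, C))
    return line(H, I)
```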

Regarding the real-time requirements of the vision algorithm: vision processing is a fairly complex task and takes time; achieving a high frame rate (say, above 30 frames/s) is very difficult and requires very high-performance image processing hardware, so the measurement output of the airborne camera system 24 generally has a low rate and a certain delay. Moreover, the image processing time is not constant, because the environment and the processing task change. During a search operation the vision algorithms run over the entire image plane, whereas once the target has been captured and is being tracked, the imaged position of the feature marking can be predicted accurately and the processing is restricted to a small predicted window, so the processing times differ considerably.

Tests on our existing prototype system show an image processing frame rate of about 8 frames/s with a delay of about 150 ms (including frame-grabber acquisition time) while searching for the target, and about 18 frames/s with a delay of about 75 ms (including frame-grabber acquisition time) while tracking it.

Regarding the information fusion and state estimation algorithm: as the number of airborne sensors grows, multi-sensor information fusion has become an important means of state estimation. The Kalman filter, an effective method for position estimation in information fusion, uses a recursive form with a small storage requirement and can handle not only stationary random processes but also multidimensional and non-stationary ones; it has become the principal technique for multi-sensor information fusion. This section discusses how to fuse the visual information with the inertial navigation and altimeter information and construct a multi-sensor, multi-rate extended Kalman filter suited to UAV autonomous landing.

Regarding the construction of the system measurement and dynamic equations: as Figure 2 shows, the most important quantities in the state estimation are the feature point measurements obtained by the airborne camera system 24 through computer vision; this scheme takes the measured values of the seven feature points in the view-plane coordinate system {I} as part of the system measurement vector. It is assumed that, owing to image acquisition, quantization and processing, the measured feature point values $[\tilde{u}_i, \tilde{v}_i]^T$ are corrupted by mutually independent white noise $[\eta_{ui}, \eta_{vi}]^T$, so that they are related to the true values as follows:

$$
\begin{bmatrix}z_{2i-1}\\ z_{2i}\end{bmatrix}=\begin{bmatrix}\tilde{u}_i\\ \tilde{v}_i\end{bmatrix}=\begin{bmatrix}u_i\\ v_i\end{bmatrix}+\begin{bmatrix}\eta_{ui}\\ \eta_{vi}\end{bmatrix},\qquad i=1,2,\dots,7\qquad(7)
$$

In addition to these 14 measurements, the system also has the flight altitude measured by the altimeter system 22 and the yaw angle measured by the magnetic compass 23, which can be expressed as:

$$
\begin{bmatrix}z_{15}\\ z_{16}\end{bmatrix}=\begin{bmatrix}\tilde{H}\\ \tilde{\psi}\end{bmatrix}=\begin{bmatrix}H\\ \psi\end{bmatrix}+\begin{bmatrix}\eta_H\\ \eta_\psi\end{bmatrix}\qquad(8)
$$

The image feature point information of equation (7) and the other sensor information of equation (8) are related to the UAV's position and attitude in the inertial coordinate system through the relation h(X(k)); at time k this can be written:

Z(k) = h(X(k)) + ζz(k)    (9)

where Z(k) = [z1, z2, …, z16]^T and X(k) is the system state vector, comprising the UAV's position (x, y, z) and velocity (Vx, Vy, Vz) in the inertial coordinate system and its attitude angles (φ, θ, ψ); ζz(k) is a white noise vector whose covariance matrix is the diagonal matrix R(k). The feature point measurements are obtained directly by the airborne camera system 24 through the image processing module 243, while the estimates of those measurements are computed from the system state X(k) by applying the camera system model 32 to the runway model 31, via formulas (1)–(5), as shown in Figure 2.

For the state estimation, we take the UAV's position (x, y, z) and velocity (Vx, Vy, Vz) in the inertial coordinate system and its attitude angles (φ, θ, ψ) as the state vector X(k); the UAV's kinematics then give the system dynamic equation:

X(k+1) = Φ(X(k)) + ζx(k)    (10)

where Φ(·) is the nonlinear state equation of the system and ζx(k) is a white noise vector whose covariance matrix is the diagonal matrix Q(k).

Regarding the multi-rate extended Kalman filter: with the measurement equation (9) and the dynamic equation (10), an extended Kalman filter (EKF) can be used to build a dynamic estimator for the UAV and thereby estimate its position and attitude in space. The EKF method must assume that, at every state update and measurement update, each sensor has a measurement available at that instant. However, the information supplied by the sensor systems of the airborne sensor subsystem 2 has different update rates and different time-delay characteristics; in particular, the update rate and delay of the airborne camera system 24 are not fixed. The extended Kalman filter therefore has to be adapted to the characteristics of the information, yielding a multi-rate extended Kalman filter (MEKF) capable of genuinely handling the information of the UAV multi-sensor system.

From the system dynamic equation (10), the system state variables can be supplied by the inertial navigation system. The airborne inertial navigation system 21 has a high output rate, assumed here to be 1/T1, while the output rate of the measurement equation (9) is determined by the update rates of the airborne camera system 24, the altimeter system 22 and the magnetic compass 23, of which the airborne camera system 24 has the lowest output rate and the largest delay. It may be assumed, for simplicity, that all three sensors share the camera system's output rate 1/TV and its delay Td, with TV = pT1 and Td = qT1, where p and q are variable positive integers.

The MEKF's handling of multi-rate measurements and measurement delay rests on the following two operating mechanisms:

1) Event-driven operation: when no visual measurement data are available, the filter only propagates the state prediction and the error covariance; when a measurement arrives, the measured data are used to correct the filter's predicted state and error covariance, as in equation (11).

2) Data smoothing with delay correction: because the camera system has a delay Td, a measurement obtained at time kT1 corresponds to the state X(k−q). The stored state values X(i), i = k−q, …, k are therefore used to first correct the earlier state and error covariance, and then the filter dynamic equation is used to smooth the data forward and update the current state estimate and covariance, i.e., to obtain Φd(·) in equation (11).

With the MEKF processing the measurements of the airborne sensor subsystem 2 in this way, the navigation system not only has the same high bandwidth as the inertial navigation system but, thanks to the vision system, can also correct the drift of the inertial navigation, yielding high-rate, high-precision navigation information.
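The two mechanisms can be sketched as an event-driven filter loop. This is a simplified illustration under the stated assumptions (states buffered for q steps, all delayed measurements arriving together), with our own class and variable names; it is not the patent's implementation.

```python
from collections import deque
import numpy as np

class MultiRateEKF:
    """Simplified MEKF skeleton: propagate at the INS rate 1/T1; when a
    delayed vision measurement arrives (every p steps, delayed q steps),
    correct the buffered past state and re-propagate to the present."""

    def __init__(self, x0, P0, Q, R, f, h, F_jac, H_jac, q_delay):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R
        self.f, self.h, self.F_jac, self.H_jac = f, h, F_jac, H_jac
        self.history = deque(maxlen=q_delay + 1)   # (x, P) for smoothing

    def propagate(self):
        # Mechanism 1: with no measurement, only predict state/covariance.
        F = self.F_jac(self.x)
        self.x = self.f(self.x)
        self.P = F @ self.P @ F.T + self.Q
        self.history.append((self.x.copy(), self.P.copy()))

    def update_delayed(self, z):
        # Mechanism 2: the measurement belongs to the state q steps ago,
        # so correct the buffered state first...
        x, P = self.history[0]
        H = self.H_jac(x)
        S = H @ P @ H.T + self.R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - self.h(x))
        P = (np.eye(len(x)) - K @ H) @ P
        # ...then re-propagate the corrected state up to the present
        # (the buffer itself is left as-is in this simplified sketch).
        for _ in range(len(self.history) - 1):
            F = self.F_jac(x)
            x, P = self.f(x), F @ P @ F.T + self.Q
        self.x, self.P = x, P
```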

Claims (6)

Translated from Chinese
1. A vision-based autonomous landing navigation system for an unmanned aircraft (hereinafter: UAV), characterized in that the system consists of two parts, software algorithms and hardware devices;
the software algorithms comprise a computer vision algorithm and an information fusion and state estimation algorithm;
the hardware devices comprise runway features (1) arranged on the runway plane, an airborne sensor subsystem (2) for measuring the UAV state, and an information fusion subsystem (3) for processing the sensor measurement information;
the airborne sensor subsystem (2) comprises an airborne camera system (24), an airborne inertial navigation system (21), an altimeter system (22) and a magnetic compass (23); the sensors of the airborne inertial navigation system (21), the altimeter system (22) and the magnetic compass (23) measure the true state of the UAV, the airborne camera system (24) tracks and analyzes the runway features (1) to obtain measured values of the runway feature points (11), and these measurements are passed to the information fusion subsystem (3);
the information fusion subsystem (3), from its estimate of the aircraft state at the previous instant and the measurements of the aircraft state by the airborne sensor subsystem (2) at the current instant, obtains estimates of the runway feature points (11) through the runway model (31) and the camera system model (32), compares them with the measured values of the runway feature points (11), fuses the other measurement information, and finally obtains high-precision navigation information through the computation of the data processing module (33).

2. The vision-based autonomous landing navigation system for an unmanned aircraft according to claim 1, characterized in that: every runway feature point (11) of the runway features (1) that can be imaged on the camera view plane yields two equations with the UAV position and attitude as parameters; to obtain a unique solution, at least four feature points in general position are required, and in the autonomous landing navigation system seven feature points are used as the runway features in order to reduce the influence of noise on the system and improve its estimation accuracy; since the vertices of triangles and quadrilaterals have high curvature and are easier to extract, and a combination of two different figures helps the UAV determine the landing direction, the runway features (1) are laid out by painting a combined pattern of a white triangle and quadrilateral near the start of the runway, the two figures being symmetric about the runway centerline with points B, D and F on the centerline; the coordinates of these runway feature points (11) in the inertial coordinate system {E} are:
A(30, -15, 0)  B(10, 0, 0)  C(30, 15, 0)  D(70, 0, 0)  E(200, -20, 0)  F(150, 0, 0)  G(200, 20, 0)

3. The vision-based autonomous landing navigation system for an unmanned aircraft according to claim 1, characterized in that: the airborne camera system (24) of the airborne sensor subsystem (2) consists of a two-degree-of-freedom gimbal (241), a camera (242) and an image processing module (243);
the two-degree-of-freedom gimbal (241) is fixed to the underside of the UAV and, under the control of the gimbal controller, can perform pitch motion and continuous horizontal motion relative to the UAV, allowing the system to search for and lock onto the target;
the camera (242) is an integrated PAL color camera whose zoom can be commanded so that the runway features (11) are imaged in the camera (242) at a size favorable to the vision algorithms;
the image processing module (243) captures the scene filmed by the camera (242) and processes the captured images to obtain the coordinates at which the runway feature points (11) are imaged on the view plane; the computer vision techniques used for image processing in the image processing module (243) are the key to the airborne camera system (24).

4. The vision-based autonomous landing navigation system for an unmanned aircraft according to claim 1, characterized in that: the altimeter system (22) of the airborne sensor subsystem (2) combines a barometric altimeter (221) and a radar altimeter (222); according to the low-frequency characteristics of the barometric altimeter (221) and the high-frequency characteristics of the radar altimeter (222), a complementary filter, computed in the data processing module (33), eliminates the steady-state error of the barometric altimeter (221) and the large measurement noise of the radar altimeter (222), yielding a high-precision altitude estimate; because two altimeters are combined, the system's sampling frequency is relatively low, but its output is real-time, the altimeter system (22) sampling at 30 Hz.

5. The vision-based autonomous landing navigation system for an unmanned aircraft according to claim 1, characterized in that: in the computer vision algorithm, during autonomous landing the airborne camera system (24) controls the two-degree-of-freedom gimbal (241) and the focal length of the camera (242) according to the estimated UAV state, so that the camera can search for, capture and track the runway feature pattern, and a series of computer vision algorithms in the image processing module (243) yields the feature point coordinates used for UAV position and attitude estimation; the vision processing comprises three stages: image preprocessing, region segmentation and labeling, and feature extraction and target recognition.

6. The vision-based autonomous landing navigation system for an unmanned aircraft according to claim 1, characterized in that: in the information fusion and state estimation algorithm, as the number of airborne sensors grows, multi-sensor information fusion has become an important means of state estimation; the Kalman filter, an effective method for position estimation in information fusion, uses a recursive form with a small storage requirement and can handle not only stationary random processes but also multidimensional and non-stationary random processes.
Application CNA2006100888367A, filed 2006-07-19, priority 2006-07-19 — Vision-based autonomous landing navigation system for unmanned aircraft — Status: Pending — Publication: CN101109640A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CNA2006100888367A | 2006-07-19 | 2006-07-19 | Vision-based autonomous landing navigation system for unmanned aircraft

Publications (1)

Publication Number | Publication Date
CN101109640A | 2008-01-23

Family

ID=39041809

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CNA2006100888367A (Pending; published as CN101109640A) | Vision-based autonomous landing navigation system for unmanned aircraft | 2006-07-19 | 2006-07-19

Country Status (1)

Country | Link
CN (1) | CN101109640A (en)


Cited By (84)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2010108301A1 (en)*2009-03-272010-09-30Yu QifengGround-based videometrics guiding method for aircraft landing or unmanned aerial vehicles recovery
US9057609B2 (en)2009-03-272015-06-16National University Of Defense TechnologyGround-based camera surveying and guiding method for aircraft landing and unmanned aerial vehicle recovery
CN101907714A (en)*2010-06-252010-12-08陶洋GPS aided positioning system and method based on multi-sensor data fusion
CN101907714B (en)*2010-06-252013-04-03陶洋GPS aided positioning system and method based on multi-sensor data fusion
CN101949709A (en)*2010-08-192011-01-19中国测绘科学研究院Onboard GPS aerial photography navigation control system and control method thereof
CN101949709B (en)*2010-08-192013-03-20中国测绘科学研究院Onboard GPS aerial photography navigation control system and control method thereof
CN103256931A (en)*2011-08-172013-08-21清华大学Visual navigation system of unmanned planes
CN103256931B (en)*2011-08-172014-11-26清华大学Visual navigation system of unmanned planes
CN103018761A (en)*2011-09-222013-04-03空中客车营运有限公司Method and system for determining the position of an aircraft during its approach to a landing runway
CN103018761B (en)*2011-09-222016-08-03空中客车营运有限公司The method and system of Aircraft position information is determined when landing runway is marched into the arena
CN102538782B (en)*2012-01-042014-08-27浙江大学Helicopter landing guide device and method based on computer vision
CN102538782A (en)*2012-01-042012-07-04浙江大学Helicopter landing guide device and method based on computer vision
CN103287584A (en)*2012-03-012013-09-11上海工程技术大学Video-assisted system for airplane landing
CN103287584B (en)*2012-03-012015-12-16上海工程技术大学A kind of aircraft video landing ancillary system
CN103308058A (en)*2012-03-072013-09-18通用汽车环球科技运作有限责任公司Enhanced data association of fusion using weighted bayesian filtering
CN103308058B (en)*2012-03-072016-04-13通用汽车环球科技运作有限责任公司Use the enhancing data correlation of the fusion of weighting Bayesian filter
CN102636799A (en)*2012-04-232012-08-15中国航天空气动力技术研究院Method for determining outdoor emergency runway of unmanned aerial vehicle
CN102636796B (en)*2012-04-232013-07-10中国航天空气动力技术研究院System and method for determining airfield runway of unmanned plane
CN102636796A (en)*2012-04-232012-08-15中国航天空气动力技术研究院System and method for determining airfield runway of unmanned plane
CN104006790A (en)*2013-02-212014-08-27成都海存艾匹科技有限公司 Vision-Based Aircraft Landing Aids
WO2014127607A1 (en)*2013-02-212014-08-28Zhang GuobiaoVisual perception-based airplane landing assisting device
WO2014169353A1 (en)*2013-04-162014-10-23Bae Systems Australia LimitedLanding site tracker
WO2014169354A1 (en)*2013-04-162014-10-23Bae Systems Australia LimitedLanding system for an aircraft
CN104340371B (en)*2013-07-242018-04-03空中客车营运有限公司Autonomous and automatic landing concept and system
CN104340371A (en)*2013-07-242015-02-11空中客车营运有限公司Autonomous and automatic landing method and system
FR3009117A1 (en)*2013-07-242015-01-30Airbus Operations Sas AUTONOMOUS AUTOMATIC LANDING METHOD AND SYSTEM
US9260180B2 (en)2013-07-242016-02-16Airbus Operations (S.A.S.)Autonomous and automatic landing method and system
US10732647B2 (en)2013-11-272020-08-04The Trustees Of The University Of PennsylvaniaMulti-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft micro-aerial vehicle (MAV)
CN106030430A (en)*2013-11-272016-10-12宾夕法尼亚大学理事会 Multisensor Fusion for Robust Autonomous Flight in Indoor and Outdoor Environments Using Rotorwing Micro Aerial Vehicles (MAVs)
CN104764447A (en)*2014-01-032015-07-08空中客车运营简化股份公司METHOD AND DEVICE FOR VERTICALLY GUIDING AIRCRAFT DURING APPROACH OF landing RUNWAY
CN104764447B (en)*2014-01-032018-03-30空中客车运营简化股份公司In the method and apparatus close to vertical guide aircraft during landing runway
CN105302146A (en)*2014-07-252016-02-03空中客车运营简化股份公司Method and system for automatic autonomous landing of an aircraft
US9939818B2 (en)2014-07-252018-04-10Airbus Operations (S.A.S.)Method and system for automatic autonomous landing of an aircraft
CN104197928B (en)*2014-08-292017-01-18西北工业大学Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
CN105701261A (en)*2014-11-262016-06-22沈阳飞机工业(集团)有限公司Near-field aircraft automatic tracking and monitoring method
CN104463889A (en)*2014-12-192015-03-25中国人民解放军国防科学技术大学Unmanned plane autonomous landing target extracting method based on CV model
CN104463889B (en)*2014-12-192017-06-16中国人民解放军国防科学技术大学A kind of unmanned plane independent landing target extraction method based on CV models
CN107209514A (en)*2014-12-312017-09-26深圳市大疆创新科技有限公司 Selective processing of sensor data
US10802509B2 (en)2014-12-312020-10-13SZ DJI Technology Co., Ltd.Selective processing of sensor data
US10395115B2 (en)2015-01-272019-08-27The Trustees Of The University Of PennsylvaniaSystems, devices, and methods for robotic remote sensing for precision agriculture
CN104670666A (en)*2015-02-272015-06-03中国民航大学Aircraft landing attitude alarming system and alarming control method
CN107407937A (en)*2015-03-162017-11-28赛峰电子与防务公司 Automatic assist method for aircraft landing
CN107407937B (en)*2015-03-162020-08-04赛峰电子与防务公司Automatic auxiliary method for aircraft landing
CN104808685A (en)*2015-04-272015-07-29中国科学院长春光学精密机械与物理研究所Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
CN106709223A (en)*2015-07-292017-05-24中国科学院沈阳自动化研究所Sampling inertial guidance-based visual IMU direction estimation method
CN106709223B (en)*2015-07-292019-01-22中国科学院沈阳自动化研究所 Visual IMU Orientation Estimation Method Based on Inertial Guided Sampling
US10884430B2 (en)2015-09-112021-01-05The Trustees Of The University Of PennsylvaniaSystems and methods for generating safe trajectories for multi-vehicle teams
CN106708066A (en)*2015-12-202017-05-24中国电子科技集团公司第二十研究所Autonomous landing method of unmanned aerial vehicle based on vision/inertial navigation
CN105786020B (en)*2016-04-282018-07-10深圳飞马机器人科技有限公司A kind of short distance downhill race method of unmanned plane
CN106054931A (en)*2016-07-292016-10-26北方工业大学Unmanned aerial vehicle fixed-point flight control system based on visual positioning
CN106054931B (en)*2016-07-292019-11-05北方工业大学A kind of unmanned plane fixed point flight control system of view-based access control model positioning
CN106371460A (en)*2016-09-072017-02-01四川天辰智创科技有限公司Target searching method and apparatus
CN107808175A (en)*2016-09-092018-03-16埃森哲环球解决方案有限公司Positioned using the automation loading bridge of coding applique
US10908580B2 (en)2016-09-092021-02-02Accenture Global Solutions LimitedDevices, systems, and methods for automated loading bridge positioning using shapes associated with a vehicle
US11126201B2 (en)2016-12-292021-09-21Israel Aerospace Industries Ltd.Image sensor based autonomous landing
CN107741229B (en)*2017-10-102020-09-25北京航空航天大学 An optoelectronic/radar/inertial combined carrier-based aircraft landing guidance method
CN107741229A (en)*2017-10-102018-02-27北京航空航天大学 A combined photoelectric/radar/inertial guidance method for carrier aircraft landing
CN108759826B (en)*2018-04-122020-10-27浙江工业大学 A UAV motion tracking method based on the fusion of multi-sensing parameters of mobile phones and UAVs
CN108759826A (en)*2018-04-122018-11-06浙江工业大学A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane
CN108974373B (en)*2018-07-192019-12-13西安恒宇众科空间技术有限公司Aircraft autonomous landing method and aircraft autonomous landing device based on binocular vision
CN108974373A (en)*2018-07-192018-12-11西安恒宇众科空间技术有限公司Based on binocular vision aircraft independent landing device
CN109341685A (en)*2018-12-042019-02-15中国航空工业集团公司西安航空计算技术研究所A kind of fixed wing aircraft vision auxiliary landing navigation method based on homograph
CN109341686A (en)*2018-12-042019-02-15中国航空工业集团公司西安航空计算技术研究所A kind of tightly coupled aircraft lands position and orientation estimation method of view-based access control model-inertia
CN109341686B (en)*2018-12-042023-10-27中国航空工业集团公司西安航空计算技术研究所Aircraft landing pose estimation method based on visual-inertial tight coupling
CN109341700A (en)*2018-12-042019-02-15中国航空工业集团公司西安航空计算技术研究所Fixed wing aircraft vision assists landing navigation method under a kind of low visibility
CN110170081A (en)*2019-05-142019-08-27广州医软智能科技有限公司A kind of ICU instrument alarm processing method and system
CN110058604A (en)*2019-05-242019-07-26中国科学院地理科学与资源研究所A kind of accurate landing system of unmanned plane based on computer vision
CN115280398A (en)*2020-03-132022-11-01Wing航空有限责任公司Ad hoc geographic reference pad for landing UAV
CN112461222B (en)*2020-11-102022-05-27中航通飞华南飞机工业有限公司Virtual compass field and method suitable for aircraft airborne compass calibration
CN112461222A (en)*2020-11-102021-03-09中航通飞华南飞机工业有限公司Virtual compass field and method suitable for aircraft airborne compass calibration
CN112560922A (en)*2020-12-102021-03-26中国航空工业集团公司沈阳飞机设计研究所Vision-based foggy-day airplane autonomous landing method and system
CN112904895A (en)*2021-01-202021-06-04中国商用飞机有限责任公司北京民用飞机技术研究中心Image-based airplane guide method and device
CN113269100A (en)*2021-05-272021-08-17南京航空航天大学Vision-based aircraft offshore platform landing flight visual simulation system and method
CN113269100B (en)*2021-05-272024-03-22南京航空航天大学Aircraft offshore platform landing flight visual simulation system and method based on vision
CN113282098B (en)*2021-07-082021-10-08北京航空航天大学东营研究院 A method for improving the accuracy of flight verification of instrument landing system
CN113282098A (en)*2021-07-082021-08-20北京航空航天大学东营研究院Method for improving flight verification accuracy of instrument landing system
CN113534849A (en)*2021-09-162021-10-22中国商用飞机有限责任公司Flight combination guidance system, method and medium integrating machine vision
CN115050215A (en)*2022-04-292022-09-13北京航空航天大学Door-to-door full-autonomous flight landing guiding method based on machine vision assistance
CN115050215B (en)*2022-04-292023-12-26北京航空航天大学Door-to-door full-autonomous flight landing guiding method based on machine vision assistance
CN115183780A (en)*2022-07-292022-10-14中国商用飞机有限责任公司 Flight guidance system for aircraft approach phase, verification method for its image acquisition module, and flight guidance method
CN117033949A (en)*2023-10-082023-11-10成都凯天电子股份有限公司Method for detecting, classifying and maintaining and guiding high-load landing event of airplane
CN117033949B (en)*2023-10-082024-02-20成都凯天电子股份有限公司Method for detecting, classifying and maintaining and guiding high-load landing event of airplane
CN117930869A (en)*2024-03-212024-04-26山东智航智能装备有限公司Vision-based landing method and device for flight device
CN117930869B (en)*2024-03-212024-06-28山东智航智能装备有限公司Vision-based landing method and device for flight device

Similar Documents

Publication | Publication Date | Title
CN101109640A (en) Vision-based autonomous landing navigation system for unmanned aircraft
CN109911188B (en) Bridge detection UAV system for non-satellite navigation and positioning environment
CN110426046B (en) A method for judging and tracking obstacles in the runway area for autonomous UAV landing
CN105644785B (en) A UAV landing method based on optical flow method and horizon detection
CN106054929B (en)A kind of unmanned plane based on light stream lands bootstrap technique automatically
CN111426320B (en)Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN110222612B (en) Dynamic target recognition and tracking method for autonomous landing of UAV
Cesetti et al.A vision-based guidance system for UAV navigation and safe landing using natural landmarks
CN104848867B (en)The pilotless automobile Combinated navigation method of view-based access control model screening
CN102967305B (en)Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square
CN100567898C (en) Landing guidance method and device for unmanned helicopter
CN102353377B (en)High altitude long endurance unmanned aerial vehicle integrated navigation system and navigating and positioning method thereof
CN107229063A (en)A kind of pilotless automobile navigation and positioning accuracy antidote merged based on GNSS and visual odometry
CN110222581A (en)A kind of quadrotor drone visual target tracking method based on binocular camera
CN110221625B (en) Autonomous Landing Guidance Method for Precise Position of UAV
CN106017463A (en)Aircraft positioning method based on positioning and sensing device
Martínez et al.On-board and ground visual pose estimation techniques for UAV control
CN104808685A (en)Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
CN105405126A (en)Multi-scale air-ground parameter automatic calibration method based on monocular vision system
CN102788580A (en)Flight path synthetic method in unmanned aerial vehicle visual navigation
CN114326765A (en) A landmark tracking control system and method for UAV visual landing
CN114689030A (en)Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision
CN110058604A (en)A kind of accurate landing system of unmanned plane based on computer vision
CN112904895A (en)Image-based airplane guide method and device
Williams et al.Feature and pose constrained visual aided inertial navigation for computationally constrained aerial vehicles

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C02 | Deemed withdrawal of patent application after publication (patent law 2001)
WD01 | Invention patent application deemed withdrawn after publication

Open date: 2008-01-23

