Technical Field
The invention belongs to the technical field of unmanned aerial vehicle (UAV) navigation and positioning, and relates to an inertial integrated navigation method based on scene matching and visual odometry.
Background Art
High-precision, high-dynamic, and highly reliable autonomous navigation is one of the key technologies for ensuring that UAVs successfully complete their missions, and is of great significance for enhancing UAV autonomy and improving operational effectiveness. Navigation methods include satellite navigation, radio navigation, and inertial navigation, among others. Inertial navigation (INS) occupies a special position among these owing to its outstanding autonomy; existing UAV navigation systems are built around an INS core to form integrated navigation systems that accomplish autonomous navigation in complex environments.
At present, the dominant approach to UAV navigation is the INS/GPS integrated navigation system. In unknown, dynamically changing, complex environments, however, the GPS signal is weak, is vulnerable to electromagnetic interference, and may even fail entirely in blind zones, causing gross errors in the integrated navigation system with unpredictable consequences. Meanwhile, China's BeiDou satellite navigation system is still under development, and its reliability and accuracy do not yet meet high-precision navigation requirements such as those of military applications.
Summary of the Invention
Technical Problem to Be Solved
To overcome the deficiencies of the prior art, the present invention proposes an inertial integrated navigation method based on scene matching and visual odometry, realizing fully autonomous UAV navigation in complex environments.
Technical Solution
An inertial integrated navigation method based on scene matching and visual odometry, characterized by the following steps:
Step 1: during UAV flight, the airborne downward-looking camera acquires a ground image a in real time;
Step 2: using image a and the previous frame a′, determine the UAV's visual odometry as follows:
A. Extract feature points from the two consecutive real-time frames a and a′ with the Harris corner detection algorithm;
B. For each feature point (x, y)^T in image a′, search the square region of image a centered on (x, y)^T for the matching point with the highest neighborhood cross-correlation; simultaneously, for each feature point (x, y)^T in image a, search the square region of image a′ centered on (x, y)^T for its matching point;
C. Use the RANSAC robust estimation method to obtain the largest consistent point set and an estimate of the homography matrix H: first randomly draw 4 matching point pairs as a sample and compute H; then, for each matching pair from step B, compute the distance dis; given a threshold t, if dis < t the pair is an inlier, otherwise it is discarded, and the number of inliers is counted; repeat the process k times and select the H with the largest inlier count;
D. Re-estimate H from all matching pairs classified as inliers of the H with the largest inlier count; with the re-estimated H, compute the point (x1, y1)^T in image a′ corresponding to each feature point (x, y)^T in image a, and repeat the search of step B with the re-centered regions: search the square region of image a centered on (x, y)^T for the matching point of each feature point (x1, y1)^T of image a′ with the highest neighborhood cross-correlation, and simultaneously search the square region of image a′ centered on (x1, y1)^T for the matching point of each feature point (x, y)^T of image a;
E. Repeat steps B through D until the number of matching point pairs stabilizes;
Step 3: when the UAV enters an adaptation area, coarsely locate the UAV with the inertial navigation system, retrieve from onboard storage the reference image b corresponding to the coarse position, and scene-match it against the real-time image a to determine the UAV's position;
Step 4: within the adaptation area, use the position obtained in step 3 to correct the error accumulated by visual odometry, obtaining a more accurate UAV position;
Step 5: obtain the UAV's current position and attitude from the inertial navigation system;
Step 6: take the error equations of the inertial navigation system as the state equations of the integrated navigation system, choose the East-North-Up (ENU) frame as the navigation frame, and use the difference between the positions obtained in steps 4 and 5 as the measurement. Estimate the drift error of the inertial system with a Kalman filter, and use this drift error to correct the inertial navigation system, obtaining the fused navigation parameters.
The process of step 3 is as follows: the airborne downward-looking camera acquires the ground image a in real time and preprocesses it into image c; FAST features are extracted from images b and c; the extracted FAST features are described with FREAK feature descriptors; feature matching under the nearest-Hamming-distance similarity criterion then yields the matched location, i.e. the UAV's current position.
The ground image a is an optical image or an infrared image.
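For illustration, the Python sketch below shows how steps 1 to 6 can be wired together into a per-frame loop. It is a skeleton only: every helper (visual_odometry_step, in_adaptation_area, scene_match, and the camera/ins/reference_db/kf objects) is a hypothetical placeholder rather than an API defined by the invention; concrete sketches of the individual pieces appear in the detailed description below.

```python
def navigate(camera, ins, reference_db, kf):
    """Per-frame fusion loop for steps 1-6 (illustrative skeleton;
    all helper names are hypothetical placeholders)."""
    prev_frame = camera.grab()                # step 1: downward-looking image
    position = ins.position()                 # initial fix from the INS
    while True:
        frame = camera.grab()
        # Step 2: visual odometry between consecutive frames.
        position = visual_odometry_step(prev_frame, frame, position)
        if in_adaptation_area(position):
            # Steps 3-4: coarse INS fix -> reference image -> scene match,
            # then reset the visual-odometry position with the matched fix.
            ref_image = reference_db.lookup(ins.position())
            position = scene_match(frame, ref_image)
        # Steps 5-6: position difference as the Kalman measurement;
        # the estimated drift corrects the INS output.
        drift = kf.update(ins.position() - position)
        ins.correct(drift)
        prev_frame = frame
```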
Beneficial Effects
The inertial integrated navigation method based on scene matching and visual odometry proposed by the present invention computes, according to the visual odometry principle, the homography matrix of the UAV's real-time aerial image sequence, and recursively computes the UAV's current position by accumulating the relative displacement between consecutive frames. Because visual odometry accumulates error over time, a scene matching algorithm based on FREAK features is introduced as an auxiliary correction; scene matching offers high positioning accuracy, strong autonomy, and resistance to electromagnetic interference, enabling high-precision fixes within adaptation areas that effectively compensate the cumulative error of long-running visual odometry. An error model of the inertial navigation system and a measurement model of the visual data are established, the optimal estimate is obtained by Kalman filtering, and the inertial navigation system is corrected accordingly. The invention effectively improves navigation accuracy and helps raise the UAV's capacity for autonomous flight.
However, research on scene-matching area suitability shows that only matching within adaptation areas can provide reliable position information for the UAV; in non-adaptation areas such as deserts or open sea, scene matching cannot work properly.
Visual odometry estimates motion by processing two consecutive image frames. As a new navigation and positioning technique, it has been applied successfully to autonomous mobile robots. During UAV flight, the two consecutive frames are captured by the same sensor on the same platform, over the same time interval, and under the same conditions; they therefore share the same noise distribution and imaging error, with no imaging differences caused by changing natural conditions, so relatively accurate positioning can be provided even in non-adaptation areas. Moreover, the consecutive frames are much smaller than the reference image used in scene matching, so matching takes less time, improving the real-time performance of the navigation system.
The present invention therefore proposes a UAV inertial integrated navigation method based on scene matching and visual odometry. The method offers strong autonomy, light payload, low equipment cost, and strong anti-interference capability, providing a feasible technical solution for the engineering application of UAV navigation systems. As a computer-vision-based navigation approach, it has the notable advantages of high positioning accuracy, strong resistance to electronic jamming, and small, low-cost onboard equipment; it can eliminate the cumulative error of a long-running inertial navigation system, substantially improve INS positioning accuracy, and serve as a backup navigation method when GPS fails, malfunctions, or degrades.
Description of Drawings
Fig. 1 is the framework flow of the present invention;
Fig. 2 is the two-view homography transformation relationship;
Fig. 3 is the FREAK descriptor sampling pattern.
Detailed Description of Embodiments
The present invention is further described below with reference to the embodiments and the accompanying drawings:
In the inertial integrated navigation method based on scene matching and visual odometry of the present invention, reconnaissance imagery of the UAV's planned flight area is acquired in advance and stored as the reference map. When a UAV carrying a vision sensor flies over the planned area, it captures local imagery and generates, according to parameters such as pixel resolution, flight altitude, and field of view, a ground scene of a given size as the real-time image. By matching the real-time image against the reference map, the position of the real-time image within the reference map is found, and the UAV's current position is thereby accurately determined. Scene-matching navigation accuracy is unaffected by navigation time, and the technique offers strong resistance to electromagnetic interference, good autonomy, high precision, low cost, small size, and rich information; combining it with inertial navigation greatly improves the overall performance of the navigation system. Research on inertial/scene-matching autonomous integrated navigation for UAVs thus helps eliminate dependence on external auxiliary systems and enhances the UAV's reliability, maneuverability, concealment, anti-interference capability, and survivability. The method mainly comprises five parts: inertial navigation, scene matching, visual odometry, fusion correction, and integrated-navigation Kalman filtering. It includes the following steps:
In the first step, the airborne downward-looking camera acquires the ground image a in real time, and the UAV's visual odometry is determined by estimating the homography matrix between the real-time image a and the previous frame a′.
The specific implementation steps are as follows:
1. Extract feature points from the two consecutive real-time frames a and a′ with the Harris corner detection algorithm;
2. For each feature point (x, y)^T in a′, search the square region of a centered on (x, y)^T for the matching point with the highest neighborhood cross-correlation; likewise, search a′ for the matching point of each feature point in a, finally determining the matched feature point pairs;
3. Use the RANSAC robust estimation method to obtain the largest consistent point set and compute the homography matrix H between the consecutive frames a and a′. The procedure is: (1) randomly draw 4 matching point pairs as a sample and compute H; (2) for each matching pair from step 2, compute the distance dis; (3) given a threshold t (t less than half the side length of the real-time image), if dis < t the pair is an inlier, otherwise discard it, and count the inliers; (4) repeat the above k times and select the H with the largest inlier count;
4. Recompute the homography matrix H from all matching points classified as inliers, and let this H determine the placement of the search regions of step 2, so that more accurate matching pairs are determined;
5. Repeat steps 2 through 4 until the number of matching pairs stabilizes, and compute the homography matrix H from the finally determined stable matching pairs.
6. Using the homography matrix H obtained in step 5, together with the attitude information provided by the inertial sensors and the altitude provided by the barometric altimeter, compute the camera's relative displacement between the two consecutive frames; by accumulating the camera displacements onto the previously estimated position, the UAV's current position is computed recursively. A single-pass sketch of steps 1 to 5 is given below.
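As a concrete illustration of steps 1 to 5, the following sketch estimates the inter-frame homography with OpenCV using Harris corners, a windowed normalized-cross-correlation (NCC) search, and RANSAC. It performs a single matching pass rather than the iterative re-centered search of steps 2 to 5, and the window sizes, corner count, and NCC acceptance threshold are assumed values, not prescribed by the invention.

```python
import cv2
import numpy as np

def match_corners_ncc(img_a, img_b, corners, win=7, search=20):
    """For each Harris corner of img_a, search a square region of img_b
    around the same location for the patch with the highest NCC (step 2)."""
    h, w = img_b.shape
    pairs = []
    for (x, y) in corners:
        x, y = int(x), int(y)
        if not (search + win <= x < w - search - win and
                search + win <= y < h - search - win):
            continue                                  # skip border corners
        tmpl = img_a[y - win:y + win + 1, x - win:x + win + 1]
        roi = img_b[y - search - win:y + search + win + 1,
                    x - search - win:x + search + win + 1]
        res = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:                               # assumed NCC threshold
            pairs.append(((x, y),
                          (x - search + loc[0], y - search + loc[1])))
    return pairs

def frame_homography(prev_gray, curr_gray):
    """Steps 1-5 in a single pass: Harris corners + NCC matching +
    RANSAC homography estimation via cv2.findHomography."""
    corners = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 10,
                                      useHarrisDetector=True)
    pairs = match_corners_ncc(prev_gray, curr_gray, corners.reshape(-1, 2))
    src = np.float32([p for p, _ in pairs])
    dst = np.float32([q for _, q in pairs])
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```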
In the second step, when the UAV enters an adaptation area, it is coarsely located with the inertial navigation system. According to the coarse position, a circular reference sub-image b is cropped from the pre-stored aerial reference map of the flight area; the radius of the reference sub-image must exceed 1.5 times the product of the inertial sensors' average drift distance per unit time and the UAV's flight time between two adaptation areas. The real-time image a is then scene-matched against the reference sub-image b to determine the UAV's position. The scene matching steps are as follows:
1. Convert the real-time image a and the reference sub-image b to grayscale, and extract the FAST feature points of both.
2. Compute the FREAK feature descriptor for each FAST feature point, as follows:
As shown in Fig. 3, FREAK builds a circular sampling pattern centered on each feature point, dense at the center and sparse toward the periphery, with the number of samples decreasing exponentially. Each sampling circle is pre-smoothed with a different Gaussian kernel whose standard deviation is proportional to the circle's size.
The FREAK descriptor is built from a bit string of difference-of-Gaussian comparisons. The comparison criterion T over a sampling pair is defined as
T(P_a) = 1 if I(P_a^{r1}) - I(P_a^{r2}) > 0, and T(P_a) = 0 otherwise,
where P_a denotes a sampling pair and I(·) the smoothed intensity of a sampling circle. Selecting N sampling pairs and forming the binary descriptor
F = Σ_{0 ≤ a < N} 2^a T(P_a)
yields an N-dimensional binary bit string.
Sampling pairs are selected by a coarse-to-fine procedure, according to the following criteria (a sketch of the selection is given after the list):
a) Create a large matrix D holding up to 50,000 feature points, one row per feature point; pairwise comparison of each point's 43 sampling-circle intensities yields a comparison vector of over 900 dimensions (43 choose 2 = 903 pairs);
b) Compute the mean and variance of each column of D; the variance is maximal when the mean is 0.5, which guarantees the distinctiveness of the feature descriptor;
c) Sort the columns by variance, the largest-variance column first;
d) Keep the first N columns to describe each feature point, obtaining an N-dimensional binary bit string. The present invention selects N = 256.
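A minimal sketch of criteria a) to d), assuming the 43 smoothed sampling-circle intensities have already been collected for a training set of keypoints (variable names are illustrative):

```python
import numpy as np
from itertools import combinations

def select_sampling_pairs(circle_intensity, n_keep=256):
    """Coarse-to-fine pair selection per criteria a)-d).
    circle_intensity: (num_keypoints, 43) array of smoothed
    sampling-circle intensities over ~50,000 training keypoints."""
    pairs = list(combinations(range(43), 2))      # a): 903 candidate pairs
    # Matrix D: one row per keypoint, one binary comparison per column.
    D = np.stack([circle_intensity[:, i] > circle_intensity[:, j]
                  for i, j in pairs], axis=1).astype(np.float32)
    order = np.argsort(D.var(axis=0))[::-1]       # b)-c): descending variance
    return [pairs[k] for k in order[:n_keep]]     # d): keep the first N columns
```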
As shown in Fig. 3, M (M = 45) symmetric sampling pairs are selected among all the sampling circles, and the local gradient used for orientation is computed as
O = (1/M) Σ_{P_o ∈ G} ( I(P_o^{r1}) - I(P_o^{r2}) ) · (P_o^{r1} - P_o^{r2}) / ‖P_o^{r1} - P_o^{r2}‖,
where G is the set of selected pairs, P_o^{r1} and P_o^{r2} are the two-dimensional position vectors of the sampling circles of pair P_o, and I(·) is the smoothed sampling-circle intensity.
3. Compute the nearest Hamming distance between the FREAK descriptors of the two feature points being matched. To filter out false matches, the present invention sets a threshold of 10: candidates above the threshold are discarded directly, and point pairs below it are taken as mutual matches.
4. Perform feature matching according to the methods of steps 2 and 3; by determining the positions of the feature points of the real-time image a within the reference sub-image b, the position of a within b, and hence the aircraft's current position, is determined (see the sketch below).
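Assuming both images are already grayscale uint8 arrays, the four steps above can be sketched with OpenCV as follows. FREAK requires the opencv-contrib build, and taking the median displacement of the accepted matches as the offset of a within b is an illustrative choice, not mandated by the invention:

```python
import cv2
import numpy as np

def scene_match(real_img, ref_img, hamming_thresh=10):
    """Steps 1-4: FAST keypoints + FREAK descriptors + nearest-Hamming
    matching. real_img and ref_img are grayscale uint8 arrays."""
    fast = cv2.FastFeatureDetector_create()
    freak = cv2.xfeatures2d.FREAK_create()        # opencv-contrib build
    kp_a, des_a = freak.compute(real_img, fast.detect(real_img, None))
    kp_b, des_b = freak.compute(ref_img, fast.detect(ref_img, None))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_a, des_b)
            if m.distance < hamming_thresh]       # threshold T = 10
    # Offset of image a inside image b, taken as the median displacement
    # of the accepted matches (an illustrative choice).
    shifts = np.float32([np.subtract(kp_b[m.trainIdx].pt, kp_a[m.queryIdx].pt)
                         for m in good])
    return np.median(shifts, axis=0)
```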
In the third step, the UAV position obtained in the second step is used to correct the error accumulated by visual odometry, yielding the more accurate UAV position P_vision; the inertial navigation system provides the UAV's current position P_ins and attitude A_ins.
Assuming that the UAV position error obtained through scene matching in the adaptation area is very small, the scene-matching result is used directly in this step to reset the position computed by visual odometry.
In the fourth step, a Kalman filter produces estimates from P_vision, P_ins, and A_ins, yielding the optimal navigation information.
A specific embodiment is as follows:
1. During UAV flight, the airborne downward-looking camera acquires the ground image a in real time.
The ground image sequence is acquired in real time with the UAV's onboard downward-looking optical camera or infrared camera; only the current frame and the previous frame need to be stored.
2. Using image a and the previous frame a′, the UAV's visual odometry is determined by estimating the homography matrix between a and a′.
As shown in Fig. 2, with the UAV in flight, the onboard camera captures two consecutive frames I1 and I2 at different poses, with corresponding camera coordinate systems F and F′. Suppose a point P on the plane π maps to the point p in I1 and the point p′ in I2, with corresponding vectors X in F and X′ in F′; then
X′ = R X + t,
where R and t denote the rotation and translation of the UAV between the two shots.
From Fig. 2, n is the unit normal of the plane π expressed at c1, and d is the distance from c1 to π, so that n^T X = d for any point on π. Therefore
X′ = R X + t (n^T X / d) = (R + t n^T / d) X,
and hence
Hc = R + t n^T / d
is called the homography matrix of the plane π with respect to the camera.
Taking the camera intrinsic matrix K into account, p = K X and p′ = K X′ (up to scale), so that
p′ = K Hc K^-1 p = H p,
where
H = K (R + t n^T / d) K^-1
is called the homography matrix of the plane π with respect to the two image frames.
As is known from the literature, the homography matrix is a 3×3 matrix with 8 degrees of freedom, so 4 matching point pairs between the two images are needed to compute it. To avoid degeneracy, the plane containing these 4 matching pairs must not pass through the camera's optical center, and no 3 of the 4 spatial points may be collinear.
In addition, since the camera is rigidly mounted on the UAV, the camera attitude can be taken to coincide with the UAV attitude provided by the onboard inertial devices, namely the yaw angle ψ, pitch angle θ, and roll angle γ. The inter-frame rotation is therefore
R = (C_c2^n)^T C_c1^n,
where C_c2^n is the rotation matrix from the camera coordinate system at c2 to the navigation coordinate system, composed from the attitude angles (ψ, θ, γ) at c2, and C_c1^n is obtained in the same way from the attitude at c1.
If the ground is assumed planar, then n = (0, 0, 1)^T. Once the homography matrix has been computed, the attitude information from the inertial navigation and the altitude from the barometric altimeter are substituted into the formulas above to obtain the camera's relative displacement between the two consecutive frames; accumulating the camera displacements onto the previously estimated position yields the UAV's current position recursively (a sketch of this recovery follows the steps below). The specific implementation steps are as follows:
2.1. Extract feature points from the two consecutive real-time frames a and a′ with the Harris corner detection algorithm;
2.2. For each feature point (x, y)^T in a′, search the square region of a centered on (x, y)^T for the match with the highest neighborhood cross-correlation; symmetrically, search a′ for the match of each feature point in a, finally determining the matched feature point pairs;
2.3. Use the RANSAC robust estimation method to obtain the largest consistent point set and an estimate of the homography matrix H. The procedure is: (1) randomly draw 4 matching point pairs as a sample and compute H; (2) for each matching pair from step 2.2, compute the distance dis; (3) given a threshold t, if dis < t the pair is an inlier, otherwise discard it, and count the inliers; (4) repeat the above k times and select the H with the largest inlier count;
2.4. Re-estimate H from all matching points classified as inliers, and let H determine the search regions of step 2.2, determining the matching pairs more accurately;
2.5. Repeat steps 2.2 through 2.4 until the number of matching point pairs stabilizes.
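Once the final H is available, the relative displacement can be recovered by inverting the relation H = K(R + t n^T/d)K^-1, which gives t = d(K^-1 H K - R)n. The sketch below assumes flat ground with n = (0, 0, 1)^T, and that H has been normalized to a scale consistent with this decomposition:

```python
import numpy as np

def relative_translation(H, K, R, d, n=np.array([0.0, 0.0, 1.0])):
    """Invert H = K (R + t n^T / d) K^-1 to get t = d (K^-1 H K - R) n.
    R: inter-frame rotation from the INS attitude; d: height above the
    ground plane from the barometric altimeter; n = (0, 0, 1)^T assumes
    flat ground. H is assumed normalized to the scale of this
    decomposition (image-based estimates are only defined up to scale)."""
    Hc = np.linalg.inv(K) @ H @ K      # plane homography w.r.t. the camera
    return d * (Hc - R) @ n            # valid since n^T n = 1

# The per-frame displacement, rotated into the navigation frame and
# accumulated, gives the recursive visual-odometry position estimate.
```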
3. When the UAV enters an adaptation area, it is coarsely located with the inertial navigation system; the reference image b corresponding to the coarse position is retrieved from onboard storage and scene-matched against the real-time image a to determine the UAV's position.
The present invention matches the real-time image against the reference image with a scene matching algorithm based on the FREAK descriptor. As shown in Fig. 3, the FREAK descriptor uses a retina-like sampling pattern, building circular samples centered on each feature point, dense at the center and sparse toward the periphery, with the number of samples decreasing exponentially. To strengthen the robustness of the sampling circles and improve the stability and distinctiveness of the descriptor, each sampling circle is pre-smoothed with a different Gaussian kernel whose standard deviation is proportional to the circle's size.
The FREAK descriptor is built from a bit string of difference-of-Gaussian comparisons. The comparison criterion T over a sampling pair is defined as
T(P_a) = 1 if I(P_a^{r1}) - I(P_a^{r2}) > 0, and T(P_a) = 0 otherwise,
where P_a denotes a sampling pair and I(·) the smoothed intensity of a sampling circle. Selecting N sampling pairs and forming the binary descriptor
F = Σ_{0 ≤ a < N} 2^a T(P_a)
yields an N-dimensional binary bit string.
Sampling pairs are selected by a coarse-to-fine procedure, according to the following criteria:
a) Create a large matrix D holding up to 50,000 feature points, one row per feature point; pairwise comparison of each point's 43 sampling-circle intensities yields a comparison vector of over 900 dimensions (43 choose 2 = 903 pairs);
b) Compute the mean and variance of each column of D; the variance is maximal when the mean is 0.5, which guarantees the distinctiveness of the feature descriptor;
c) Sort the columns by variance, the largest-variance column first;
d) Keep the first N columns to describe each feature point, obtaining an N-dimensional binary bit string. N = 256 is selected here.
These selection criteria guarantee the grayscale invariance of the descriptor.
As shown in Fig. 3, M (M = 45) symmetric sampling pairs are selected among all the sampling circles, and the local gradient is computed as
O = (1/M) Σ_{P_o ∈ G} ( I(P_o^{r1}) - I(P_o^{r2}) ) · (P_o^{r1} - P_o^{r2}) / ‖P_o^{r1} - P_o^{r2}‖,
where G is the set of selected pairs, P_o^{r1} and P_o^{r2} are the two-dimensional position vectors of the sampling circles of pair P_o, and I(·) is the smoothed sampling-circle intensity. The symmetry of the sampling pairs guarantees the rotation invariance of the descriptor.
The FREAK descriptor is a binary bit string of ones and zeros, so the similarity criterion uses the nearest Hamming distance: the two descriptors being matched are XORed bitwise and the set bits are counted. For an N-dimensional FREAK descriptor the maximum Hamming distance is N and the minimum is 0; to filter out false matches, a threshold T is set and candidates above T are discarded directly. Here T is set to 10 (a sketch follows).
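A minimal sketch of the Hamming-distance test on packed binary descriptors (32 bytes for N = 256):

```python
import numpy as np

def hamming_distance(d1, d2):
    """Bitwise-XOR Hamming distance between two packed binary FREAK
    descriptors (uint8 arrays; 32 bytes for a 256-bit descriptor)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

# Candidate matches with hamming_distance(...) > 10 are filtered out,
# per the threshold T = 10 chosen above.
```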
4. In the adaptation area, the UAV position obtained by scene matching is used to correct the error accumulated by visual odometry, yielding a more accurate UAV position.
Since the positioning result of scene matching within the adaptation area is reliable, the UAV position obtained by scene matching is used directly to reset the position computed by visual odometry, eliminating the cumulative error of long-running visual odometry.
5. The inertial navigation system provides the UAV's current position and attitude.
6. The Kalman filter algorithm estimates the UAV's current precise position and attitude.
The error equations of the inertial navigation system serve as the state equations of the integrated navigation system, with the East-North-Up frame chosen as the navigation frame. The difference between the position from step 4 and the position from step 5 is taken as the measurement; a Kalman filter estimates the drift error of the inertial system, which is then used to correct the inertial navigation system and obtain the fused navigation parameters.
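For illustration only, the following scalar sketch shows the measurement-update structure of step 6 for a single position axis; the invention itself uses the full INS error equations as the state model, and the noise parameters here are assumed values:

```python
class DriftFilter:
    """Scalar Kalman filter for one position axis: state x is the INS
    drift error, measurement z = P_ins - P_vision. An illustrative
    sketch; the invention uses the full INS error equations (attitude,
    velocity, and position errors) as the state model."""

    def __init__(self, q=1e-4, r=1.0):   # assumed noise parameters
        self.x, self.p = 0.0, 1.0        # drift estimate and its covariance
        self.q, self.r = q, r            # process / measurement noise

    def update(self, z):
        self.p += self.q                 # predict: drift as a random walk
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with position difference
        self.p *= 1.0 - k
        return self.x                    # used to correct the INS output
```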