CN110261877B - A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM - Google Patents

A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM
Download PDF

Info

Publication number
CN110261877B
CN110261877B (application CN201910561547.1A)
Authority
CN
China
Prior art keywords
data
ground
module
information
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910561547.1A
Other languages
Chinese (zh)
Other versions
CN110261877A (en)
Inventor
王晓龙
刘海颖
冯建鑫
徐子牟
王景琪
陈捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN201910561547.1A
Publication of CN110261877A
Application granted
Publication of CN110261877B
Active (current legal status)
Anticipated expiration

Abstract

The invention discloses a ground-air collaborative visual navigation method and device based on improved graph-optimized SLAM, belonging to the technical field of navigation. Sensors are combined with power structures to form aerial and ground agents, and each agent carries the improved graph-optimized ground-air collaborative visual navigation method designed by the invention, which comprises four modules: a signal acquisition module for acquiring position and visual signals in the environment; a front-end processing module for processing the acquired information and converting the various signals into matrices; a back-end processing module that performs pose estimation and state updating using the matrices provided by the front end; and an improved graph optimization algorithm module for accelerating the back-end computation and reducing its computational load. Using this method to optimize the positioning and navigation system of a multi-agent system placed in an unknown environment improves positioning accuracy, speeds up positioning, and reduces computational complexity.

Description

Translated from Chinese
A ground-air collaborative visual navigation method and device based on improved graph-optimized SLAM

Technical Field

The present invention belongs to the technical field of navigation, and in particular relates to a ground-air collaborative visual navigation method and device based on improved graph-optimized SLAM.

Background Art

Visual navigation has become a popular research area within navigation in recent years. Since simultaneous localization and mapping (SLAM) was proposed in 1986, it has developed rapidly, and the technology is mostly applied in intelligent unmanned platforms such as drones and unmanned vehicles. When a mobile robot enters an unfamiliar environment, it must build a map of that environment with its own sensors while simultaneously determining its own position within the map. Because cameras are small, light, and inexpensive, and because they capture two-dimensional scene information from which pose and motion state can be recovered by suitable algorithms, visual SLAM has made great progress.

A traditional monocular camera lacks depth and therefore cannot provide information of sufficient dimensionality for the solution, which limits accuracy; stereo and depth cameras solve the data-dimensionality problem but increase hardware volume, making them unsuitable for application scenarios such as drones. Improving the feature extraction method and optimizing the algorithm on top of a monocular camera can therefore solve the dimensionality problem without enlarging the equipment. The classic monocular visual SLAM algorithm uses a point-feature-based extended Kalman filter (EKF) for localization and mapping. Its main idea is to store the camera pose and the three-dimensional coordinates of the map feature points in a state vector, represent observation uncertainty with a probability density function, and recursively evaluate the observation model to obtain the mean and covariance of the updated state vector. However, the EKF introduces linearization errors and makes the time and space complexity of the SLAM computation unpredictable, and the point-feature representation enlarges the matrix dimension and with it the computational complexity.
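For reference, the point-feature EKF-SLAM scheme summarized above can be sketched as follows. This is a generic illustration, not the patent's implementation: the planar range-bearing models, noise values, and function names are assumptions chosen for brevity.

```python
import numpy as np

# Minimal 2D EKF-SLAM sketch: state x = [px, py, theta, l1x, l1y, l2x, l2y, ...]
# Motion model, observation model, and noise values are illustrative assumptions.

def predict(x, P, u, Q, dt=1.0):
    """EKF prediction: propagate the pose with a velocity command u = [v, w]."""
    v, w = u
    theta = x[2]
    x = x.copy()
    x[0] += v * dt * np.cos(theta)
    x[1] += v * dt * np.sin(theta)
    x[2] += w * dt
    # Jacobian of the motion model; only the pose block differs from identity
    F = np.eye(len(x))
    F[0, 2] = -v * dt * np.sin(theta)
    F[1, 2] =  v * dt * np.cos(theta)
    P = F @ P @ F.T
    P[:3, :3] += Q                      # process noise acts on the pose block
    return x, P

def update(x, P, z, landmark_idx, R):
    """EKF update with a range-bearing observation z = [r, phi] of one landmark."""
    li = 3 + 2 * landmark_idx
    dx, dy = x[li] - x[0], x[li + 1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    # Sparse observation Jacobian: only the pose and this landmark are involved
    H = np.zeros((2, len(x)))
    H[:, :3] = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                         [ dy / q,          -dx / q,         -1.0]])
    H[:, li:li + 2] = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                                [-dy / q,          dx / q        ]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman (system) gain
    y = z - z_hat
    y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))   # wrap the bearing residual
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Usage: one robot pose plus two landmarks
x = np.array([0.0, 0.0, 0.0, 2.0, 1.0, -1.0, 3.0])
P = np.eye(7) * 0.1
x, P = predict(x, P, u=[1.0, 0.1], Q=np.diag([0.01, 0.01, 0.005]))
x, P = update(x, P, z=np.array([1.3, 0.6]), landmark_idx=0, R=np.diag([0.05, 0.02]))
```

Note how the joint state couples the pose and every landmark: as the map grows, the covariance P and the matrices built from it grow quadratically, which is exactly the complexity problem the invention's graph optimization targets.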

To compensate for the effects of EKF linearization, a variety of filters such as the unscented Kalman filter and the particle filter have since emerged. Although these methods address the linearization problem of the EKF, they do not significantly reduce the computational complexity. At present, SLAM is mostly used on single unmanned platforms; in scenarios where multiple platforms cooperate, several single platforms repeatedly process the same features in the same scene, which wastes the computing resources of the whole group. For swarm intelligent devices (multi-agent systems), current research is still largely limited to planning over known routes: approaches such as bee-colony and ant-colony algorithms are very effective for path planning, but in unfamiliar environments a path-planning system cannot play to its strengths, so multi-agent systems still suffer from low efficiency, unstable operation, and poor navigation accuracy in such scenarios.

Summary of the Invention

The present invention provides a ground-air collaborative visual navigation method and device based on improved graph-optimized SLAM. By introducing the probabilistic computation theory of graph optimization without abandoning the basic EKF computational framework, graph optimization greatly reduces the dimension of the matrices that must be computed in the EKF prediction step and makes them sparse, solving the algorithmic complexity problem caused by matrix dimension as well as the problems of low navigation efficiency and unstable system operation that the prior art exhibits when multiple agents are placed in an unknown scene.

To achieve the above objectives, the present invention adopts the following technical solutions:

A ground-air collaborative visual navigation device based on improved graph-optimized SLAM comprises a signal acquisition module, a front-end processing module, a back-end processing module, and an information communication module. The signal acquisition module comprises a monocular vision sensor; the front-end processing module comprises a signal processing system and a data link; the back-end processing module comprises a data computation system; the information communication module comprises a data-transmission module and an image-transmission module. The signal acquisition module collects the video signal and passes it to the front-end processing module for preliminary processing; the resulting keyframe and feature point information is passed to the back-end processing module, which performs pose solution and state estimation on the feature points of the corresponding keyframes and transmits the results to the control system. The connections between modules, and between each module and the control system, are realized through the information communication module.

In the above device, the signal processing system of the front-end processing module is an STM32-based signal processing system and the data link is a 433 MHz data link; the data computation system of the back-end processing module is a microcontroller-based data computation system; the data-transmission module of the information communication module is a 433 MHz data link mounted on each agent for transferring data, and the image-transmission module is a 5.8 GHz video link. The agents comprise an aerial agent and a ground agent, which exchange and transmit data over the 433 MHz link and mushroom antennas using the Mavlink communication protocol. The ground agent is an unmanned intelligent vehicle carrying a GNSS receiver, an inertial navigation sensor, a monocular vision sensor, a 433 MHz data link, and an STM32-based processing system; the aerial agent is an intelligent drone carrying a GNSS receiver, an accelerometer, a gyroscope, a monocular vision sensor, a 433 MHz data link, and an STM32-based processor system. Each sensor is connected to the STM32, which forwards the data to the processor for handling; information that must be exchanged goes through the data link and mushroom antenna connected to the STM32 communication port.

A ground-air collaborative visual navigation method based on improved graph-optimized SLAM comprises the following steps:

(1) A monocular vision sensor collects video data of the unknown scene; the video is sampled into frames at a fixed time interval to obtain each frame image to be processed;

(2) An algorithm extracts features such as corner points and boundary lines from the frame image, providing the back end with the feature data within the field of view;

(3) The obtained feature data is passed to the graph optimization algorithm to obtain the pose information.

In the above steps, the algorithm in step (2) is an improved FAST corner extraction method that attaches a dominant-orientation attribute to each feature point as it is extracted. Its main targets are corners or inflection points where pixel intensity changes quickly, i.e. the basic features of the image. For locations in space where the boundary between planes is relatively clear, corner extraction tends to fall into local optima, so Plücker line features are used instead; their main targets are the distinct boundaries in the image, i.e. its higher-level features. By combining point and line features, with the STM32 as the processing platform, the point-line fused features of the frame and an initial state estimate of the carrier are obtained, providing the initial data with which the subsequent back-end processing algorithm is initialized. The back-end algorithm works as follows: after the monocular vision sensor and the front-end algorithm have collected and processed the video data of the scene into frame images, the FAST feature processing algorithm describes the frame-image features with a matrix, and the resulting feature data is passed to the graph optimization algorithm to obtain the pose information.
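The point-line representation described above can be illustrated with the following sketch. It is an assumption-laden example rather than the patent's code: the intensity-centroid orientation (the standard oFAST/ORB trick) stands in for the unspecified dominant-direction computation, and the Plücker line is built from two 3D endpoints.

```python
import numpy as np

def keypoint_orientation(patch):
    """Dominant orientation of a keypoint patch via the intensity centroid
    (an assumed stand-in for the 'main direction' added to each FAST corner)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = np.sum((xs - cx) * patch)
    m01 = np.sum((ys - cy) * patch)
    return np.arctan2(m01, m10)            # angle of the dominant direction

def plucker_from_endpoints(p1, p2):
    """Plücker coordinates (d, m) of the 3D line through points p1 and p2."""
    d = p2 - p1                             # direction vector
    m = np.cross(p1, p2)                    # moment vector, equal to p x d for any p on the line
    return d, m

def transform_plucker(d, m, R, t):
    """Map a Plücker line into another frame under the rigid transform (R, t)."""
    d_new = R @ d
    m_new = R @ m + np.cross(t, d_new)
    return d_new, m_new

# Usage with synthetic data
patch = np.random.rand(9, 9)
theta = keypoint_orientation(patch)
d, m = plucker_from_endpoints(np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0]))
R, t = np.eye(3), np.array([0.5, 0.0, 0.0])
d_w, m_w = transform_plucker(d, m, R, t)
```

A line feature carries one six-vector per boundary instead of many individual corner points along it, which is the dimensionality reduction the description attributes to the point-line fusion.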

The graph optimization algorithm in step (3) proceeds as follows:

(a) The obtained initial state estimate is used as the initialization parameter of the algorithm, and the spatial state parameters are solved from it together with the system model;

(b) Based on the system state model, the pose of the agent at the next instant is estimated. Feature points and lines are treated as nodes of a graph and the relationships between them as edges, forming a directed acyclic Bayesian network; optimal-path graph optimization theory is used to reduce the dimension of the matrices involved in the computation. In this step, new feature points and lines are initialized, and feature points or lines that leave the field of view are removed (a bookkeeping sketch follows these steps);

(c) The state estimate and covariance matrix of the system are updated by computing the system gain, completing the whole algorithm flow for this frame.
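Step (b) can be pictured with the following bookkeeping sketch, in which features entering the field of view are initialized, features leaving it are removed, and related features are linked by edges. The class name, the correlation test, and the threshold are illustrative assumptions, not the patent's code.

```python
import numpy as np

class FeatureGraph:
    """Features as nodes, co-visibility relations as edges; the graph structure
    keeps the estimation problem sparse (names and tests are assumptions)."""

    def __init__(self, corr_threshold=0.5):
        self.nodes = {}                   # feature id -> parameter vector
        self.edges = set()                # pairs of related feature ids
        self.corr_threshold = corr_threshold

    def observe(self, feats, correlation):
        """feats: {id: params} seen in this frame; correlation(a, b) in [0, 1]."""
        # Initialize features that enter the field of view
        for fid, params in feats.items():
            self.nodes.setdefault(fid, params)
        # Remove features that have left the field of view
        for fid in [f for f in self.nodes if f not in feats]:
            self.nodes.pop(fid)
            self.edges = {e for e in self.edges if fid not in e}
        # Connect features whose correlation exceeds the threshold
        ids = list(feats)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                if correlation(a, b) > self.corr_threshold:
                    self.edges.add((min(a, b), max(a, b)))

    def adjacency(self):
        """Binary relationship matrix of the current graph."""
        ids = sorted(self.nodes)
        idx = {f: k for k, f in enumerate(ids)}
        A = np.zeros((len(ids), len(ids)))
        for a, b in self.edges:
            A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0
        return ids, A
```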

The graph optimization algorithm described above adopts a Laplacian-matrix computation method: before the estimation and update operations, the converted matrix is sparsified so that the Hessian matrix involved in the computation has the general characteristics of a sparse matrix. Exploiting this property markedly reduces the difficulty of the matrix computation and speeds up data updates.

The multi-agent hierarchical SLAM method is as follows. When multiple agents operate in the same unknown scene, the coordinate frame fixed to an individual agent is defined as the local frame, and the frame fixed to the Earth as the world frame. The feature point information collected by each individual agent is processed and transformed in its local frame. Because the system features must still be converted into the world frame, and feature information must be exchanged between agents to form a complete multi-agent system, a new data-merging scenario called generalized loop closure detection is defined. Two generalized loop closure scenarios are designed: the first is ordinary loop closure, when a single agent returns to a position it has already visited; the second triggers data merging when an agent reaches a position that another agent has already passed through. The point and line features just extracted in the local frame are transferred to the world frame through coordinate transformation and matrix operations and communicated to the other agents, reducing the number of loop closure detections each agent performs and speeding up convergence to an accurate position.
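As a concrete, assumed illustration of moving an agent's local features into the shared world frame and detecting the data-merging trigger, consider the sketch below; the homogeneous-transform representation, the distance threshold, and the map data structure are choices made here, not details from the patent.

```python
import numpy as np

def local_to_world(T_world_agent, p_local):
    """Transform a landmark from an agent's local frame into the world frame.
    T_world_agent is a 4x4 homogeneous pose of the agent in the world frame."""
    p_h = np.append(p_local, 1.0)
    return (T_world_agent @ p_h)[:3]

def publish_to_global_map(global_map, agent_id, T_world_agent, local_points, radius=0.3):
    """Push an agent's freshly extracted landmarks into the shared world map.
    A point falling within `radius` of an existing entry is treated as the same
    landmark (the trigger for generalized loop closure); others are appended."""
    merged, new = [], []
    for p in local_points:
        p_w = local_to_world(T_world_agent, p)
        hit = next((k for k, (q, _) in enumerate(global_map)
                    if np.linalg.norm(q - p_w) < radius), None)
        if hit is None:
            global_map.append((p_w, {agent_id}))
            new.append(p_w)
        else:
            q, owners = global_map[hit]
            owners.add(agent_id)          # this landmark is now shared by several agents
            merged.append(hit)
    return merged, new
```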

Beneficial effects: the present invention provides a ground-air collaborative visual navigation method and device based on improved graph-optimized SLAM and builds a multi-agent visual navigation algorithm on top of visual navigation. By coordinating multiple air and ground agents, it reduces the feature point data each single agent must process; by defining data-merging scenarios, it simplifies the required computation and data exchange, solves the channel congestion caused by large data transfers, and speeds up the solution of the navigation data for the whole system.

The invention fuses point and line features. The point-line fused feature extraction algorithm gives a first reduction of the feature vector dimension of the whole system; when the feature information is processed, the dimension of the matrix that represents the relationships between points is reduced accordingly, which markedly shortens the time needed for feature extraction and keeps it real-time.

The invention improves the graph optimization algorithm. By combining graph optimization with the Laplacian-matrix computation method, on top of the reduced feature vector dimension it further reduces the dimension of the matrix used to represent the relationship graph after conversion. Because this matrix undergoes several complex matrix operations during the prediction and update computation, reducing its dimension speeds up those computations; the improved graph optimization algorithm therefore also lowers the algorithmic complexity relative to the EKF, so that prediction and update of the agents' pose information from the feature data becomes markedly faster. The multi-agent navigation system of the invention can be used in civil, commercial, military, and other fields.

Brief Description of the Drawings

FIG. 1 is a hardware structure diagram of the system of an embodiment of the present invention;

FIG. 2 is a schematic diagram of the combination of point and line features in an embodiment of the present invention;

FIG. 3 is a system flow chart of an embodiment of the present invention;

FIG. 4 is a flow chart of the improved graph optimization algorithm of an embodiment of the present invention;

FIG. 5 is a design diagram of the multi-agent hierarchical system of an embodiment of the present invention.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments:

As shown in FIG. 3, a ground-air collaborative visual navigation device based on improved graph-optimized SLAM comprises a signal acquisition module, a front-end processing module, a back-end processing module, and an information communication module. The signal acquisition module comprises a monocular vision sensor; the front-end processing module comprises a signal processing system and a data link; the back-end processing module comprises a data computation system; the information communication module comprises a data-transmission module and an image-transmission module. The signal acquisition module collects the video signal and passes it to the front-end processing module for preliminary processing; the resulting keyframe and feature point information is passed to the back-end processing module, which performs pose solution and state estimation on the feature points of the corresponding keyframes and transmits the results to the control system, with all connections between modules and with the control system realized through the information communication module. The main role of the signal acquisition module is to use the sensors to detect scene and position information; the front-end processing module performs simple preprocessing on this information, converting image information into matrix information ready for computation; the back-end processing module takes the matrix information delivered by the front end and processes it; the improved graph optimization algorithm module belongs to the back-end processing module and is used to accelerate information processing.
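The four-module pipeline can be pictured with the following minimal dataflow sketch. All class and method names are illustrative assumptions, and the placeholder bodies stand in for the improved FAST/Plücker front end and the graph-optimized back end described elsewhere; the real device runs this pipeline as STM32 firmware rather than Python.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FrontEndOutput:
    keyframe_id: int
    point_features: np.ndarray    # N x 2 pixel coordinates
    line_features: np.ndarray     # M x 4 endpoints (x1, y1, x2, y2)

class SignalAcquisition:
    def grab_frame(self, camera) -> np.ndarray:
        return camera.read()                        # raw image from the monocular sensor

class FrontEnd:
    def process(self, frame: np.ndarray, frame_id: int) -> FrontEndOutput:
        # placeholder extraction; the device uses improved FAST corners + Plücker lines
        pts = np.argwhere(frame > frame.mean())[:50].astype(float)
        return FrontEndOutput(frame_id, pts, np.zeros((0, 4)))

class BackEnd:
    def solve(self, fo: FrontEndOutput) -> np.ndarray:
        # placeholder pose solution; the device runs the improved graph-optimized filter
        return np.zeros(6)                          # [x, y, z, roll, pitch, yaw]

class CommLink:
    def send(self, payload) -> None:
        pass                                        # 433 MHz data link in the real device

def navigation_step(camera, frame_id, acq=SignalAcquisition(),
                    front=FrontEnd(), back=BackEnd(), link=CommLink()):
    frame = acq.grab_frame(camera)
    features = front.process(frame, frame_id)
    pose = back.solve(features)
    link.send(pose)                                 # result forwarded to the control system
    return pose
```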

In the above device, the signal processing system of the front-end processing module is an STM32-based signal processing system, and the data link is a 433 MHz data link; the data computation system of the back-end processing module is a microcontroller-based data computation system; the data-transmission module of the information communication module is a 433 MHz data link mounted on each agent for transferring data, and the image-transmission module is a 5.8 GHz video link. As shown in FIG. 1, the agents comprise an aerial agent and a ground agent. The ground agent is an unmanned intelligent vehicle carrying a GNSS receiver, an inertial navigation sensor, a monocular vision sensor, a 433 MHz data link, and an STM32-based processing system; the aerial agent is an intelligent drone carrying a GNSS receiver, an accelerometer, a gyroscope, a monocular vision sensor, a 433 MHz data link, and an STM32-based processor system. Each sensor is connected to the STM32, which forwards the data to the processor for handling; information that must be exchanged goes through the data link and mushroom antenna connected to the STM32 communication port. The aerial and ground agents exchange and transmit data over the 433 MHz link and mushroom antennas using the Mavlink communication protocol, and data-transfer scenarios are defined: when the conditions of a scenario are triggered, a data-transfer event occurs. The GNSS receiver and inertial navigation sensor provide position information in the unknown scene, supplying data for rough positioning and loop closure detection; the vision sensor collects images of the unknown scene, from which point and line features are extracted after preliminary processing. The feature extraction is completed by the algorithm, which provides the back end with the feature data within the field of view. The STM32 serves as the algorithm carrier and processor: the algorithm is programmed into its flash memory, and once the vision sensor provides the feature data within the field of view, it completes pose estimation according to the improved graph optimization algorithm and coordinates the data link to update local and global data. Multiple agents exchange environment-feature data and information collaboratively through hierarchical SLAM.

As shown in FIG. 2, a ground-air collaborative visual navigation method based on improved graph-optimized SLAM comprises the following steps:

(1) A monocular vision sensor collects video data of the unknown scene; the video is sampled into frames at a fixed time interval to obtain each frame image to be processed;

(2) An algorithm extracts features such as corner points and boundary lines from the frame image, providing the back end with the feature data within the field of view;

(3) The obtained feature data is passed to the graph optimization algorithm to obtain the pose information.

In the above steps, the algorithm in step (2) is an improved FAST corner extraction method that attaches a dominant-orientation attribute to each feature point as it is extracted. Its main targets are corners or inflection points where pixel intensity changes quickly, i.e. the basic features of the image. For locations in space where the boundary between planes is relatively clear, corner extraction tends to fall into local optima, so Plücker line features are used instead; their main targets are the distinct boundaries in the image, i.e. its higher-level features. By combining point and line features, with the STM32 as the processing platform, the point-line fused features of the frame and an initial state estimate of the carrier are obtained, providing the initial data with which the subsequent back-end processing algorithm is initialized. The back-end algorithm works as follows: after the monocular vision sensor and the front-end algorithm have collected and processed the video data of the scene into frame images, the FAST feature processing algorithm describes the frame-image features with a matrix, and the resulting feature data is passed to the graph optimization algorithm to obtain the pose information.

The graph optimization algorithm in step (3) proceeds as follows:

(a) The obtained initial state estimate is used as the initialization parameter of the algorithm, and the spatial state parameters are solved from it together with the system model;

(b) Based on the system state model, the pose of the agent at the next instant is estimated. Feature points and lines are treated as nodes of a graph and the relationships between them as edges, forming a directed acyclic Bayesian network; optimal-path graph optimization theory is used to reduce the dimension of the matrices involved in the computation. In this step, new feature points and lines are initialized, and feature points or lines that leave the field of view are removed;

(c) The state estimate and covariance matrix of the system are updated by computing the system gain, completing the whole algorithm flow for this frame.

The graph optimization algorithm described above adopts a Laplacian-matrix computation method: before the estimation and update operations, the converted matrix is sparsified so that the Hessian matrix involved in the computation has the general characteristics of a sparse matrix. Exploiting this property markedly reduces the difficulty of the matrix computation and speeds up data updates.

In the improved graph optimization algorithm, because of the nature of an unknown environment, the number of feature points and feature line segments that can be extracted cannot be predicted in advance. If this number is very large, the number of nodes and edges after conversion into a graph is considerable, and the subsequent operations on these matrices grow with the matrix dimension, making the computation very slow. The invention therefore adopts the Laplacian-matrix computation method: before the estimation and update operations, the converted matrix is sparsified so that the Hessian matrix involved in the computation has the general characteristics of a sparse matrix, which markedly reduces the difficulty of the matrix computation and speeds up data updates.

As shown in FIG. 4, the flow of the improved graph optimization algorithm is as follows. After the front end converts the feature data into a matrix, the graph optimization algorithm abstracts all point and line features into graph nodes and the relationships between features into graph edges. Linking related nodes with edges yields a directed acyclic Bayesian network. A relationship matrix is abstracted from this network by setting a threshold: an edge weight is 1 if the correlation exceeds the threshold and 0 otherwise. A path-optimal algorithm then searches the Bayesian network for the optimized node relationship matrix, which is processed with the Laplacian-matrix computation method, finally producing a vector that can be used to update the system covariance and state.
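A minimal sketch of the thresholded relationship matrix and the Laplacian computation described above is given below; the correlation values, the regularization term, and the right-hand side of the sparse solve are placeholders, not the patent's quantities.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

def thresholded_adjacency(correlation, tau=0.5):
    """Binary relationship matrix: edge weight 1 where correlation > tau, else 0."""
    A = (correlation > tau).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A of the feature relationship graph."""
    d = A.sum(axis=1)
    return diags(d) - csr_matrix(A)

# Toy symmetric correlation matrix between 6 features (illustrative values only)
rng = np.random.default_rng(0)
C = rng.random((6, 6))
C = (C + C.T) / 2

A = thresholded_adjacency(C, tau=0.5)
L = graph_laplacian(A)

# A sparse linear solve of the kind that appears in the update step; the
# regularization term and right-hand side are placeholders, not the patent's.
b = rng.random(6)
update_vector = spsolve((L + 1e-3 * diags(np.ones(6))).tocsc(), b)
print(update_vector)
```

Because most feature pairs fall below the correlation threshold, the Laplacian is sparse, and the sparse solver touches only the nonzero entries, which is the speedup the description attributes to this step.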

The multi-agent hierarchical SLAM method is designed as follows. When multiple agents operate in the same unknown scene, the coordinate frame fixed to an individual agent is defined as the local frame and the frame fixed to the Earth as the world frame. Local map information is converted into the global map through a transformation matrix, and for any single agent a pose vector is defined in its local coordinate frame.

In this pose vector, Ri denotes the pose of the i-th agent and the remaining entries denote the landmark points of the map. The robot's attitude changes continuously, and its current pose is obtained from the relative transformation between the robot's pose in the local map at the current instant and its state at the previous instant.

The pose information of the global-level map is defined analogously in the world frame.

The same feature point in the global map is reported by L agents, so the information is fused with the least squares method.
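The defining equations of this hierarchical formulation appear only as images in the source and are lost in this text; the following LaTeX sketch is a plausible reconstruction inferred from the surrounding prose, and the exact symbols and stacking order used in the patent may differ.

```latex
% Local state of agent i: its pose R_i stacked with the landmark points it observes
x_i^{L} = \begin{bmatrix} R_i^{\top} & m_{i,1}^{\top} & m_{i,2}^{\top} & \cdots & m_{i,n}^{\top} \end{bmatrix}^{\top}

% Pose propagation: current local pose from the previous state and the relative transform
R_i(t) = \Delta T_{t-1 \to t}\, R_i(t-1)

% Local-to-global conversion of agent i's map through the transformation matrix T_i^{W}
x_i^{W} = T_i^{W}\, x_i^{L}

% A feature point observed by L agents is fused in the world frame by least squares;
% with equal weights this reduces to the mean of the transformed observations
\hat{m}^{W} = \arg\min_{m} \sum_{l=1}^{L} \bigl\| m - T_l^{W} m^{(l)} \bigr\|^{2}
            = \frac{1}{L} \sum_{l=1}^{L} T_l^{W} m^{(l)}
```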

The feature point information collected by each single agent is processed and transformed in the local frame. Because the system features must still be converted into the world frame, and feature information must be exchanged between agents to form a complete multi-agent system, a new data-merging scenario is defined: generalized loop closure detection. Two generalized loop closure scenarios are designed: the first is ordinary loop closure, when a single agent returns to a position it has already visited; the second triggers data merging when an agent reaches a position another agent has already passed through. The point and line features just extracted in the local frame have their pose information converted into the world frame by the operations above and are communicated to the other agents, which share the feature point information at that location, reducing the number of loop closure detections each agent performs and speeding up convergence to an accurate position.

As shown in FIG. 5, the multi-agent hierarchical SLAM design flow is as follows. The sensors first sample the unknown environment, and the front end then solves the information to obtain position and visual-feature information vectors. The position is checked against the local frame: if it is already present there, the agent has detected a loop and the local loop closure is complete; if not, the world frame is checked for information at this position, and if it is found, another agent has been here before and the global loop closure is complete. Local and global loop closures are unified as the generalized loop closure: when it completes, the current information and the original information are used for prediction and update of the state with the least squares method; if no loop closure can be completed, the current measurement alone is used for prediction and update.
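The decision flow of FIG. 5 can be sketched as follows; the map data structures, the distance threshold, and the equal-weight least-squares fusion are illustrative assumptions.

```python
import numpy as np

def generalized_loop_closure(position, local_map, world_map, radius=0.5):
    """Decision flow: local_map and world_map are lists of previously visited
    positions. Returns 'local', 'global', or None (no loop closure)."""
    def near(p, places):
        return any(np.linalg.norm(np.asarray(p) - np.asarray(q)) < radius for q in places)
    if near(position, local_map):
        return "local"        # this agent has been here before
    if near(position, world_map):
        return "global"       # another agent has already passed through here
    return None

def fuse(current, previous_observations):
    """Least-squares fusion of repeated observations of the same quantity;
    with equal weights this reduces to the mean."""
    obs = np.vstack([current] + list(previous_observations))
    return obs.mean(axis=0)

# Usage with toy positions
local_map = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
world_map = local_map + [np.array([5.0, 5.0])]          # includes other agents' tracks
mode = generalized_loop_closure(np.array([5.1, 4.9]), local_map, world_map)
if mode is not None:
    estimate = fuse(np.array([5.1, 4.9]), [np.array([5.0, 5.0])])
```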

The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited thereto; any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (7)

Translated from Chinese

1. A ground-air collaborative visual navigation method based on improved graph-optimized SLAM, characterized by comprising the following steps:
(1) a monocular vision sensor collects video data of an unknown scene, and the video data is sampled into frames at a fixed time interval to obtain each frame image to be processed;
(2) an improved FAST corner extraction method extracts features such as corner points and boundary lines from the frame image, providing the back end with the feature data within the field of view; the improved FAST corner extraction method attaches a dominant-orientation attribute to each feature point as it is extracted, its main targets being corners or inflection points where pixel intensity changes quickly, i.e. the basic features of the image; for locations in space where the boundary between planes is relatively clear, where corner extraction tends to fall into local optima, Plücker line features are used, whose main targets are the distinct boundaries of the image, i.e. its higher-level features; by combining point and line features, with an STM32 as the processing platform, the point-line fused features of the frame and an initial state estimate of the carrier are obtained, providing the initial data with which the subsequent back-end processing algorithm is initialized;
(3) the obtained feature data is passed to a graph optimization algorithm to obtain the pose information, the graph optimization algorithm adopting a Laplacian-matrix computation method in which, before the estimation and update operations, the converted matrix is sparsified so that the Hessian matrix involved in the computation has the general characteristics of a sparse matrix.

2. The ground-air collaborative visual navigation method based on improved graph-optimized SLAM according to claim 1, characterized in that the back-end algorithm, after the monocular vision sensor and the front-end algorithm have collected and processed the video data of the scene into frame images, uses the FAST feature processing algorithm to describe the frame-image features with a matrix and passes the resulting feature data to the graph optimization algorithm to obtain the pose information.

3. The ground-air collaborative visual navigation method based on improved graph-optimized SLAM according to claim 1, characterized in that the graph optimization algorithm of step (3) comprises the following specific steps:
(a) the obtained initial state estimate is used as the initialization parameter of the algorithm, and the spatial state parameters are solved from it together with the system model;
(b) based on the system state model, the pose of the agent at the next instant is estimated; feature points and lines are treated as nodes of a graph and the relationships between them as edges, forming a directed acyclic Bayesian network; optimal-path graph optimization theory is used to reduce the dimension of the matrices involved in the computation; in this step, new feature points and lines are initialized, and feature points or lines that leave the field of view are removed;
(c) the state estimate and covariance matrix of the system are updated by computing the system gain, completing the whole algorithm flow for this frame.

4. The ground-air collaborative visual navigation method based on improved graph-optimized SLAM according to claim 1, characterized in that the multi-agent hierarchical SLAM method is as follows: when multiple agents operate in the same unknown scene, the coordinate frame fixed to an individual agent is defined as the local frame, and the frame fixed to the Earth as the world frame; the feature point information collected by each single agent is processed and transformed in the local frame; because the system features must still be converted into the world frame, and feature information must be exchanged between agents to form a complete multi-agent system, a new data-merging scenario is defined: generalized loop closure detection; two generalized loop closure scenarios are designed: the first is ordinary loop closure, when a single agent returns to a position it has already visited, and the second triggers data merging when an agent reaches a position another agent has already passed through; the point and line features just extracted in the local frame are transferred to the world frame through coordinate transformation and matrix operations and communicated to the other agents, reducing the number of loop closure detections each agent performs and speeding up convergence to an accurate position.

5. A ground-air collaborative visual navigation device based on improved graph-optimized SLAM used in the method according to any one of claims 1 to 4, characterized by comprising a signal acquisition module, a front-end processing module, a back-end processing module, and an information communication module; the signal acquisition module comprises a monocular vision sensor; the front-end processing module comprises a signal processing system and a data link; the back-end processing module comprises a data computation system; the information communication module comprises a data-transmission module and an image-transmission module; the signal acquisition module collects the video signal and passes it to the front-end processing module for preliminary processing, and the resulting keyframe and feature point information is passed to the back-end processing module, which performs pose solution and state estimation on the feature points of the corresponding keyframes and transmits the results to the control system, with the connections between modules and between each module and the control system realized through the information communication module; the signal processing system of the front-end processing module is an STM32-based signal processing system and the data link is a 433 MHz data link; the data computation system of the back-end processing module is a microcontroller-based data computation system; the data-transmission module of the information communication module is a 433 MHz data link mounted on each agent for transferring data, and the image-transmission module is a 5.8 GHz video link.

6. The ground-air collaborative visual navigation device based on improved graph-optimized SLAM according to claim 5, characterized in that the agents comprise an aerial agent and a ground agent, and the aerial agent and the ground agent exchange and transmit data over the 433 MHz data link and mushroom antennas using the Mavlink communication protocol.

7. The ground-air collaborative visual navigation device based on improved graph-optimized SLAM according to claim 6, characterized in that the ground agent is an unmanned intelligent vehicle carrying a GNSS receiver, an inertial navigation sensor, a monocular vision sensor, a 433 MHz data link, and an STM32-based processing system, and the aerial agent is an intelligent drone carrying a GNSS receiver, an accelerometer, a gyroscope, a monocular vision sensor, a 433 MHz data link, and an STM32-based processor system; each sensor is connected to the STM32, which forwards the data to the processor for handling, and information that must be exchanged goes through the data link and mushroom antenna connected to the STM32 communication port.
CN201910561547.1A | 2019-06-26 | A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM | Active | CN110261877B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910561547.1A (CN110261877B, en) | 2019-06-26 | 2019-06-26 | A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910561547.1A (CN110261877B, en) | 2019-06-26 | 2019-06-26 | A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM

Publications (2)

Publication Number | Publication Date
CN110261877A (en) | 2019-09-20
CN110261877B (granted; priority 2019-06-26) | 2024-06-11

Family

ID=67921876

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910561547.1A (Active, CN110261877B, en) | A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM | 2019-06-26 | 2019-06-26

Country Status (1)

Country | Link
CN (1) | CN110261877B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112987032A (en)* | 2019-12-17 | 2021-06-18 | 无锡市电子仪表工业有限公司 | Internet of things multidata collaborative protocol based on Beidou positioning
CN112714165B (en)* | 2020-12-22 | 2023-04-04 | 声耕智能科技(西安)研究院有限公司 | Distributed network cooperation strategy optimization method and device based on combination mechanism
CN112948411B (en)* | 2021-04-15 | 2022-10-18 | 深圳市慧鲤科技有限公司 | Pose data processing method, interface, device, system, equipment and medium
CN113470089B (en)* | 2021-07-21 | 2022-05-03 | 中国人民解放军国防科技大学 | A method and system for cross-domain co-location and mapping based on 3D point cloud
CN114942029B (en)* | 2022-05-31 | 2024-09-10 | 哈尔滨工业大学 | Multi-robot mutual positioning method and system based on anonymous relative angle measurement

Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101625573A (en)* | 2008-07-09 | 2010-01-13 | 中国科学院自动化研究所 | Digital signal processor based inspection robot monocular vision navigation system
CN101839721A (en)* | 2010-03-12 | 2010-09-22 | 西安电子科技大学 | Visual navigation method in autonomous rendezvous and docking
CN104764440A (en)* | 2015-03-12 | 2015-07-08 | 大连理工大学 | Rolling object monocular pose measurement method based on color image
CN105388908A (en)* | 2015-12-11 | 2016-03-09 | 国网四川省电力公司电力应急中心 | Machine vision-based unmanned aerial vehicle positioned landing method and system
CN106447579A (en)* | 2016-10-10 | 2017-02-22 | 成都理工大学 | Space-ground-space integrated collaborative search and rescue system suitable for complex mountainous scenic spots
CN107144281A (en)* | 2017-06-30 | 2017-09-08 | 飞智控(天津)科技有限公司 | Unmanned plane indoor locating system and localization method based on cooperative target and monocular vision
CN107687850A (en)* | 2017-07-26 | 2018-02-13 | 哈尔滨工业大学深圳研究生院 | A kind of unmanned vehicle position and orientation estimation method of view-based access control model and Inertial Measurement Unit
CN109307508A (en)* | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A Panoramic Inertial Navigation SLAM Method Based on Multiple Keyframes
CN211878189U (en)* | 2019-06-26 | 2020-11-06 | 南京航空航天大学 | A ground-air collaborative visual navigation device based on improved graph optimization SLAM

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP5745067B2 (en)* | 2010-09-24 | 2015-07-08 | アイロボット・コーポレーション | System and method for VSLAM optimization


Also Published As

Publication number | Publication date
CN110261877A (en) | 2019-09-20

Similar Documents

Publication | Publication Date | Title
CN110261877B (en) A ground-to-air collaborative visual navigation method and device based on improved graph optimization SLAM
CN110068335B (en) A method and system for real-time positioning of UAV swarms in GPS-denied environment
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
Wang et al. Pointloc: Deep pose regressor for lidar point cloud localization
CN110125928A (en) A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames
CN113758488B (en) Indoor positioning method and equipment based on UWB and VIO
CN112965507B (en) Cluster unmanned aerial vehicle cooperative work system and method based on intelligent optimization
CN112529962A (en) Indoor space key positioning technical method based on visual algorithm
CN116989772B (en) An air-ground multi-modal multi-agent collaborative positioning and mapping method
CN114047766B (en) Mobile robot data collection system and method for long-term application in indoor and outdoor scenes
CN117289298B (en) Multi-machine collaborative online mapping method, system and terminal equipment based on laser radar
Zhao et al. Review of slam techniques for autonomous underwater vehicles
CN118111443A (en) Unmanned aerial vehicle cluster decentralization distributed type cooperative positioning method
Wang et al. Communication efficient, distributed relative state estimation in UAV networks
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN117496322A (en) Multi-mode 3D target detection method and device based on cloud edge cooperation
CN211878189U (en) A ground-air collaborative visual navigation device based on improved graph optimization SLAM
WO2025190241A1 (en) Beidou-based multi-source fusion positioning method in disaster environment
CN119784948A (en) A method for reconstructing 3D color geometric models of outdoor inspection scenes based on multi-source information fusion
Li et al. UWB-VO: Ultra-Wideband Anchor Assisted Visual Odometry
CN118746293A (en) High-precision positioning method based on multi-sensor fusion SLAM
Merino et al. Data fusion in ubiquitous networked robot systems for urban services
Zhang et al. Leader-Follower cooperative localization based on VIO/UWB loose coupling for AGV group
Li et al. Dynamic obstacle tracking based on high-definition map in urban scene
Xing et al. An Autonomous Moving Target Tracking System for Rotor UAV

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
