TECHNICAL FIELD
The invention relates to the field of monocular vision for quadrotor unmanned aerial vehicles (UAVs), and in particular to a three-dimensional object feature extraction method for the scenario of monocular-vision recognition and tracking of moving objects by a quadrotor UAV.
BACKGROUND ART
In recent years, the rapid development of computer technology, automatic control theory, embedded development, chip design, and sensor technology has allowed unmanned aerial vehicles to become more miniaturized while gaining stronger processing capability, and UAV-related technologies have attracted growing attention. Small UAVs offer flexible control and strong endurance, enabling them to handle complex tasks in confined environments; in the military domain they can carry out strikes, search harsh environments, and collect intelligence, replacing soldiers in high-risk settings, while in the civilian domain they provide aerial photography, remote equipment inspection, environmental monitoring, disaster rescue, and other services for practitioners in many industries.
The quadrotor is a common rotary-wing UAV whose pitch, roll, and yaw motions are realized by adjusting motor speeds. Compared with fixed-wing UAVs, rotary-wing UAVs have clear advantages. First, the airframe is structurally simple and compact and produces more lift per unit volume. Second, the power system is simple: adjusting the speed of each rotor's drive motor suffices to control the attitude in the air, enabling unique flight modes such as vertical take-off and landing and hovering; the system is highly automated and maintains its aerial attitude robustly.
Mounting a high-definition camera on a UAV and running machine-vision algorithms in real time has become a hot research topic in recent years. A UAV's flexible viewpoint can capture images that are hard to obtain with ground-based mobile cameras, and embedding a lightweight camera in a small quadrotor provides rich information at low cost. Target tracking means that a low-flying UAV computes, from the visual information acquired by its camera, the relative displacement between the target and the UAV, and then automatically adjusts its attitude and position so that the tracked ground moving target stays near the center of the camera's field of view, allowing the UAV to follow the target's motion and complete the tracking task. Owing to the technical limitations of a monocular camera, however, obtaining the three-dimensional coordinates of a moving object is very difficult; tracking a moving target therefore requires a simple and efficient three-dimensional feature extraction method.
SUMMARY OF THE INVENTION
To overcome the inability of existing monocular visual feature extraction methods on quadrotor UAV platforms to extract three-dimensional features effectively, and to enable tracking of ground moving targets with a monocular camera, the aircraft's motion can be simplified to two-dimensional planar motion at a given altitude. The two-dimensional feature plane captured by the monocular camera can then be regarded as perpendicular to the motion plane, so the relative distance between the feature plane and the aircraft, that is, the depth information of the feature plane, must also be obtained to realize motion tracking by the aircraft. Two-dimensional features augmented with this depth information can be approximated as three-dimensional feature information. Based on this idea, the present invention proposes a monocular-vision three-dimensional feature extraction method based on a quadrotor UAV platform.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A monocular-vision three-dimensional feature extraction method based on a quadrotor UAV, comprising the following steps:
1) acquiring an image and preprocessing it;
2) extracting two-dimensional image feature points and building feature descriptors;
3) acquiring the onboard GPS coordinates, altitude data, and IMU sensor parameters;
4) building coordinate systems for the two-dimensional feature descriptors according to the airframe parameters to obtain three-dimensional coordinate information, the process being as follows:
First, an intrinsic parameter matrix is built from the camera parameters, and the two-dimensional feature coordinates obtained in step 2) are transformed by this matrix into the image-plane coordinate system I, and then into the camera coordinate system C using the known focal length. Next, the coordinates are further transformed into the body coordinate system B according to the installation error angles and the relative position between the camera and the airframe. Finally, from the IMU attitude angles, fused with the aircraft's GPS coordinates and altitude, the two-dimensional feature descriptors carrying depth information in the world coordinate system E are obtained.
Further, in step 4), obtaining the three-dimensional coordinate information of the two-dimensional features from the airframe parameters comprises the following steps:
4.1) Conversion between the image coordinate system and the image-plane coordinate system
The image coordinate system is the pixel coordinate system [u, v]^T with its origin at the top-left corner of the image. This coordinate system has no physical unit, so the image-plane coordinate system I = [x_I, y_I]^T, whose origin O_I lies on the optical axis, is introduced. The image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model. Let dx and dy be the physical size of each pixel along the u-axis and v-axis directions; they denote the actual pixel size on the photosensitive chip, bridge the image coordinate system and the real-size coordinate system, and are related to the camera focal length f. A point (x1, y1) in the image-plane coordinate system then corresponds to the point (u1, v1) in the pixel coordinate system as follows:
$$\begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \tag{1}$$
Here (u_0, v_0) is the center point of the image coordinate system, i.e., the pixel corresponding to the origin of the image-plane coordinate system. Let
$$K = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$
which contains the four parameters related to the camera's internal structure and is called the camera's intrinsic parameter matrix;
4.2) Conversion between the image-plane coordinate system and the camera coordinate system
Let P_C1 = (x_C, y_C, z_C) be a point in the camera coordinate system, and let P_I1 = (x_I, y_I) be its projection through the optical center onto the image plane; the coordinate conversion between these two points is:
$$x_I = f\,\frac{x_C}{z_C}, \qquad y_I = f\,\frac{y_C}{z_C} \tag{2}$$
In matrix form:
$$z_C \begin{bmatrix} x_I \\ y_I \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} \tag{3}$$
where f is the camera focal length;
4.3) Conversion between the camera coordinate system and the world coordinate system
First, because of installation error between the aircraft and the camera, let [α, β, γ]^T denote the fixed three-dimensional installation error angles and [x_e, y_e, z_e]^T the spatial offset from the camera to the origin of the airframe coordinate system. The relationship between the camera coordinate system and the body coordinate system is then expressed by the homogeneous transform
$$T = \begin{bmatrix} R(\alpha, \beta, \gamma) & [x_e, y_e, z_e]^T \\ \mathbf{0}^T & 1 \end{bmatrix},$$
i.e.
C = TB (4)
where C denotes the camera coordinate system and B the body coordinate system;
Next, for a point P_E = (x_E, y_E, z_E) in space, the corresponding camera coordinates depend on the camera's attitude angles and position, and the UAV's attitude angles and position are acquired in real time during flight. The quadrotor UAV is a system with six degrees of freedom; its attitude angles are the pitch angle φ, roll angle θ, and yaw angle ψ, whose rotation axes are defined as the X, Y, and Z axes respectively, with the origin of the coordinate system at the aircraft's center of gravity. The rotation matrices of the three axes are obtained separately and multiplied together to give the body rotation matrix:
$$R = R_z(\psi)\, R_y(\theta)\, R_x(\varphi) \tag{5}$$
where R_x, R_y, and R_z are the elementary rotations about the respective axes;
The matrix R can be obtained by quaternion solution from the three acceleration components along the x, y, and z axes and the gyroscope components measured by the IMU sensor on the quadrotor airframe. Let M = [x, y, z]^T, where (x, y, z) is the position of the UAV in space and z is its flight altitude; the UAV position (x, y, z) can be obtained from GPS and the barometer. The point (x_C, y_C, z_C) in the camera coordinate system corresponding to P_E can then be computed from the following relation:
$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = T \begin{bmatrix} R & M \\ \mathbf{0}^T & 1 \end{bmatrix}^{-1} \begin{bmatrix} x_E \\ y_E \\ z_E \\ 1 \end{bmatrix} \tag{6}$$
where T is the transformation matrix between the camera and body coordinate systems, R is the body rotation matrix, and M is the aircraft's world-coordinate position; [x_E, y_E, z_E]^T is the desired three-dimensional coordinate of the feature point.
Still further, in step 1), the image is acquired and preprocessed as follows:
1.1) Image acquisition
In the Linux development environment of the quadrotor platform, images are acquired by subscribing to an image topic in the Robot Operating System (ROS), with the camera driver implemented by ROS and OpenCV;
1.2) Image preprocessing
The captured color image is first converted to grayscale to remove color information that is not needed. The method used here takes the weighted average of the R, G, and B components of each pixel as that pixel's gray value; the channel weights are tuned for runtime efficiency, and floating-point arithmetic is avoided by using the formula:
Gray = (R×30 + G×59 + B×11 + 50)/100 (7)
where Gray is the gray value of the pixel and R, G, and B are the values of the red, green, and blue channels, respectively.
Still further, in step 2), the two-dimensional image feature points are extracted and the feature descriptors are built as follows:
2.1) ORB feature point extraction
ORB first detects corners with the Harris corner detector and then measures the rotation direction using the intensity centroid. Assuming that a corner's intensity is offset from its center, the directional strengths of the surrounding points are combined to compute the corner's direction, and the following intensity moments are defined:
$$m_{pq} = \sum_{x,y} x^p y^q I(x, y) \tag{8}$$
where x and y are coordinates within the image block measured from its center, I(x, y) is the gray level at that point, and x^p y^q weights the offset of each point from the center; the direction of the corner is then represented by the centroid
$$C = \left( \frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}} \right) \tag{9}$$
Constructing the vector from the corner's center to this centroid, the orientation angle θ of the image block can be expressed as:
$$\theta = \operatorname{atan2}(m_{01},\ m_{10}) \tag{10}$$
Since the keypoints extracted by ORB carry an orientation, the feature points extracted with ORB are rotation-invariant;
2.2) LDB feature descriptor construction
After the keypoints of the image have been obtained, LDB is used to build the image's feature descriptors; the LDB pipeline consists, in order, of building a Gaussian pyramid, building integral images, binary tests, and bit selection and concatenation;
To give LDB scale invariance, a Gaussian pyramid is built and the LDB descriptor for each feature point is computed at the corresponding pyramid level:
$$\mathrm{Pyr}_i = G(x, y, \sigma_i) \otimes I(x, y), \qquad i = 1, \dots, L \tag{11}$$
where I(x, y) is the given image and G(x, y, σ_i) is a Gaussian filter whose scale σ_i increases gradually, used to build the L levels of the Gaussian pyramid Pyr_i; for a feature extractor such as ORB that has no reliable scale estimate, the LDB description must be computed at every pyramid level for each feature point;
LDB computes the rotated coordinates and uses nearest-neighbor interpolation to generate an oriented patch on the fly;
After the vertical or rotated integral image has been built and the intensity and gradient information extracted, the binary test τ is performed between pairs of grid cells as follows:
$$\tau(\mathrm{Func}(i), \mathrm{Func}(j)) = \begin{cases} 1, & \mathrm{Func}(i) - \mathrm{Func}(j) > 0 \\ 0, & \text{otherwise} \end{cases} \tag{12}$$
where $\mathrm{Func}(\cdot) \in \{I_{\mathrm{avg}}, d_x, d_y\}$ is used to extract the description information of each grid cell;
Given an image block, LDB first divides it evenly into n×n grid cells of equal size and extracts the average intensity and gradient information of each cell; intensity and gradient information are compared between pairs of cells, and the corresponding bit is set to 1 when the difference is greater than 0. The average intensity and the gradients along the x and y directions in different grid cells discriminate images effectively, so Func(i) is defined as follows:
$$\mathrm{Func}(i) \in \{ I_{\mathrm{Intensity}}(i),\ d_x(i),\ d_y(i) \} \tag{13}$$
where $I_{\mathrm{Intensity}}(i) = \frac{1}{m} \sum_{(x,y) \in \text{cell } i} I(x, y)$ is the average intensity of grid cell i, $d_x(i) = \mathrm{Gradient}_x(i)$, $d_y(i) = \mathrm{Gradient}_y(i)$, and m is the total number of pixels in cell i; since LDB uses grid cells of equal size, m is constant within a given Gaussian pyramid level, and Gradient_x(i) and Gradient_y(i) are the gradients of cell i along the x and y directions, respectively;
2.3) Feature descriptor matching
Once the LDB descriptors of the two images have been obtained, the descriptors are matched using the k-nearest-neighbor method. For each feature point in the target template image, its two nearest-neighbor matches are looked up in the input image and the distances of these two matches are compared; if the best match's distance is less than 0.8 times the distance of the second-best match, the point in the template and the corresponding point in the input image are taken as a valid match and the corresponding coordinates are recorded. When more than four points match between the two images, the target object is considered found in the input image, and the corresponding coordinate information constitutes the two-dimensional feature information.
Still further, in step 3), the onboard GPS coordinates, altitude data, and IMU sensor parameters are acquired as follows:
MAVROS is a ROS package developed by a third-party team for MAVLink. Once MAVROS is started and connected to the aircraft's flight controller, the node begins publishing the aircraft's sensor parameters and flight data; subscribing to the aircraft's GPS coordinate topic, GPS altitude topic, and IMU attitude-angle topic yields the corresponding data.
The technical concept of the present invention is as follows: as quadrotor technology has matured and stabilized and spread widely in the civilian market, more and more attention has turned to the vision systems a quadrotor can carry, and the present invention is proposed against this research background of achieving moving-target tracking on a quadrotor.
For a quadrotor to track a moving target, the target's three-dimensional feature information must first be extracted, and such information is hard to obtain with a monocular camera. However, if the aircraft's tracking motion is simplified to two-dimensional planar motion at a given altitude, the required three-dimensional feature information reduces to two-dimensional feature information with depth. The present invention therefore proposes adding depth information to the two-dimensional features according to the aircraft's spatial coordinates, achieving approximate three-dimensional feature extraction.
The monocular-vision three-dimensional feature extraction method based on a quadrotor UAV mainly comprises: acquiring an image and converting it to grayscale, extracting the two-dimensional feature information in the image, acquiring the aircraft's spatial coordinates and IMU angle information, and finally building coordinate systems for the two-dimensional features according to the airframe parameters to obtain the three-dimensional feature information.
The beneficial effects of the method are mainly as follows: for the motion-tracking problem of quadrotor aircraft, a simple, computationally light three-dimensional feature extraction method for a monocular camera is proposed, which greatly simplifies the implementation of quadrotor motion tracking.
DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flowchart of a monocular-vision three-dimensional feature extraction method based on a quadrotor UAV;
Fig. 2 shows the relationships among the coordinate systems in the three-dimensional feature extraction process, where [x_C, y_C, z_C]^T is the camera coordinate system, [x_I, y_I, z_I]^T is the image-plane coordinate system, and [x_E, y_E, z_E]^T is the world coordinate system.
DETAILED DESCRIPTION
The present invention is further described below with reference to the accompanying drawings:
Referring to Fig. 1 and Fig. 2, a monocular-vision three-dimensional feature extraction method based on a quadrotor UAV comprises the following steps:
1) Acquire an image and preprocess it:
1.1) Image acquisition
Generally speaking, there are many ways to capture images; the present invention, based on the Linux development environment of the quadrotor platform, acquires images by subscribing to an image topic in the Robot Operating System (ROS), with the camera driver implemented by ROS and OpenCV;
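As an illustration of this acquisition step, the following is a minimal sketch of an image-subscriber node using rospy and cv_bridge; the topic name /camera/image_raw and the node name are assumptions and must match the actual camera driver in use:

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS Image message into an OpenCV BGR array for the
    # preprocessing and feature-extraction stages that follow.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

rospy.init_node("monocular_feature_node")  # hypothetical node name
rospy.Subscriber("/camera/image_raw", Image, image_callback)
rospy.spin()
```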
1.2) Image preprocessing
Since the feature extraction method used in the present invention is based on the texture intensity and gradient information of the image, the captured color image is first converted to grayscale to remove color information that is not needed. The method used here takes the weighted average of the R, G, and B components of each pixel as that pixel's gray value; the channel weights can be tuned for runtime efficiency, and floating-point arithmetic is avoided by using the formula:
Gray = (R×30 + G×59 + B×11 + 50)/100 (7)
where Gray is the gray value of the pixel and R, G, and B are the values of the red, green, and blue channels, respectively.
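A minimal sketch of this grayscale conversion with NumPy, assuming an OpenCV-style BGR uint8 input image; the integer weights and the +50 rounding term follow Eq. (7):

```python
import numpy as np

def to_gray(bgr):
    """Integer-weighted grayscale conversion per Eq. (7), avoiding
    floating-point arithmetic; `bgr` is an OpenCV-style uint8 image."""
    b = bgr[:, :, 0].astype(np.uint32)
    g = bgr[:, :, 1].astype(np.uint32)
    r = bgr[:, :, 2].astype(np.uint32)
    # The +50 term rounds the integer division by 100 to the nearest value.
    return ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)
```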
2) Extract two-dimensional image feature points and build feature descriptors:
2.1) ORB feature point extraction
ORB, also known as rBRIEF, extracts locally invariant features and is an improvement on the BRIEF algorithm: BRIEF is fast to compute but lacks rotation invariance and is sensitive to noise, and ORB remedies both of these shortcomings. To make the algorithm rotation-invariant, ORB first detects corners with the Harris corner detector and then measures the rotation direction using the intensity centroid. Assuming that a corner's intensity is offset from its center, the directional strengths of the surrounding points can be combined to compute the corner's direction, and the following intensity moments are defined:
$$m_{pq} = \sum_{x,y} x^p y^q I(x, y) \tag{8}$$
where x and y are coordinates within the image block measured from its center, I(x, y) is the gray level at that point, and x^p y^q weights the offset of each point from the center; the direction of the corner can then be represented by the centroid
$$C = \left( \frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}} \right) \tag{9}$$
Constructing the vector from the corner's center to this centroid, the orientation angle θ of the image block can be expressed as:
$$\theta = \operatorname{atan2}(m_{01},\ m_{10}) \tag{10}$$
Since the keypoints extracted by ORB carry an orientation, the feature points extracted with ORB are rotation-invariant;
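The orientation computation of Eqs. (8) to (10) can be sketched as follows; patch_orientation is a hypothetical helper, not part of the method as claimed, that takes a square grayscale patch centered on a detected corner:

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of a grayscale patch from its intensity centroid,
    per Eqs. (8)-(10); `patch` is a square array centred on the corner."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - (w - 1) / 2.0       # x offset from the patch centre
    ys = ys - (h - 1) / 2.0       # y offset from the patch centre
    m10 = np.sum(xs * patch)      # moment m_10 of Eq. (8)
    m01 = np.sum(ys * patch)      # moment m_01 of Eq. (8)
    return np.arctan2(m01, m10)   # Eq. (10), two-argument arctangent
```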
2.2) LDB feature descriptor construction
After the keypoints of the image have been obtained, LDB can be used to build the image's feature descriptors. LDB has five main steps, in order: building a Gaussian pyramid, main-direction estimation, building integral images, binary tests, and bit selection and concatenation; since ORB, which is used here to extract the feature points, already provides an orientation, the main-direction estimation can be omitted;
To give LDB scale invariance, a Gaussian pyramid is built and the LDB descriptor for each feature point is computed at the corresponding pyramid level:
$$\mathrm{Pyr}_i = G(x, y, \sigma_i) \otimes I(x, y), \qquad i = 1, \dots, L \tag{11}$$
where I(x, y) is the given image and G(x, y, σ_i) is a Gaussian filter whose scale σ_i increases gradually, used to build the L levels of the Gaussian pyramid Pyr_i; for a feature extractor such as ORB that has no reliable scale estimate, the LDB description must be computed at every pyramid level for each feature point;
LDB uses integral-image techniques to compute the average intensity and gradient information of the grid cells efficiently. If the image is rotated, the vertical integral image cannot simply be used and a rotated integral image must be built instead; the rotated integral image of an image block is constructed by accumulating pixels along the main direction. The two main computational costs of generating the rotated integral image are computing the rotated coordinates and interpolating the oriented image block. To reduce both costs, the orientation information can be quantized and a rotated-coordinate lookup table built in advance; however, fine orientation quantization requires a large lookup table, and the resulting slow memory reads in turn lead to longer running times. LDB therefore computes the rotated coordinates and uses nearest-neighbor interpolation to generate an oriented patch on the fly;
After the vertical or rotated integral image has been built and the intensity and gradient information extracted, the binary test τ can be performed between pairs of grid cells as follows:
$$\tau(\mathrm{Func}(i), \mathrm{Func}(j)) = \begin{cases} 1, & \mathrm{Func}(i) - \mathrm{Func}(j) > 0 \\ 0, & \text{otherwise} \end{cases} \tag{12}$$
where $\mathrm{Func}(\cdot) \in \{I_{\mathrm{avg}}, d_x, d_y\}$ is used to extract the description information of each grid cell;
Given an image block, LDB first divides it evenly into n×n grid cells of equal size and extracts the average intensity and gradient information of each cell; intensity and gradient information are compared between pairs of cells, and the corresponding bit is set to 1 when the difference is greater than 0. This matching scheme, which combines intensity and gradient, greatly improves matching accuracy. The average intensity and the gradients along the x and y directions in different grid cells discriminate images effectively, so Func(i) is defined as follows:
$$\mathrm{Func}(i) \in \{ I_{\mathrm{Intensity}}(i),\ d_x(i),\ d_y(i) \} \tag{13}$$
where $I_{\mathrm{Intensity}}(i) = \frac{1}{m} \sum_{(x,y) \in \text{cell } i} I(x, y)$ is the average intensity of grid cell i, $d_x(i) = \mathrm{Gradient}_x(i)$, $d_y(i) = \mathrm{Gradient}_y(i)$, and m is the total number of pixels in cell i; since LDB uses grid cells of equal size, m is constant within a given Gaussian pyramid level, and Gradient_x(i) and Gradient_y(i) are the gradients of cell i along the x and y directions, respectively;
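A sketch of the grid-based binary test of Eqs. (12) and (13), assuming a square grayscale patch as input; for clarity it computes the cell means directly rather than through integral images, which changes efficiency but not the resulting bit string:

```python
import numpy as np
from itertools import combinations

def ldb_bits(patch, n=3):
    """LDB-style binary string per Eqs. (12)-(13): split the patch into
    n x n equal cells, take mean intensity and mean x/y gradients per
    cell, then run the binary test over every pair of cells."""
    gy, gx = np.gradient(patch.astype(np.float32))
    ch, cw = patch.shape[0] // n, patch.shape[1] // n
    feats = []
    for i in range(n):
        for j in range(n):
            sl = (slice(i * ch, (i + 1) * ch), slice(j * cw, (j + 1) * cw))
            # Func(i) of Eq. (13): average intensity, mean dx, mean dy.
            feats.append((patch[sl].mean(), gx[sl].mean(), gy[sl].mean()))
    bits = []
    for a, b in combinations(range(n * n), 2):
        for k in range(3):
            bits.append(1 if feats[a][k] - feats[b][k] > 0 else 0)  # Eq. (12)
    return np.array(bits, dtype=np.uint8)
```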
2.3) Feature descriptor matching
Once the LDB descriptors of the two images have been obtained, the descriptors of the two images can be matched. The present invention adopts the k-nearest-neighbors (KNN) method to match the two sets of descriptors. The idea of KNN is that each class contains multiple samples, each carrying a unique class label; the distance from each sample to the item to be classified is computed, the K nearest samples are taken, and the item is assigned to whichever class holds the majority among those K samples. For each feature point in the target template image, its two nearest-neighbor matches are looked up in the input image and their distances are compared; if the best match's distance is less than 0.8 times the distance of the second-best match, the template point and the corresponding input-image point are taken as a valid match and the corresponding coordinates are recorded. When more than four points match between the two images, the target object is considered found in the input image, and the corresponding coordinate information constitutes the two-dimensional feature information.
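A sketch of this ratio-test matching in OpenCV; since OpenCV ships no built-in LDB extractor, ORB descriptors stand in for LDB here, and template_gray and input_gray are assumed to be the grayscale template and input images:

```python
import cv2

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(template_gray, None)
kp2, des2 = orb.detectAndCompute(input_gray, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des1, des2, k=2)  # two nearest neighbours per point

good = []
for pair in knn:
    # Ratio test: keep the best match only if it is clearly closer
    # than the second-best match (factor 0.8, as described above).
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])

if len(good) > 4:
    # Target found: these coordinates are the two-dimensional features.
    pts = [kp2[m.trainIdx].pt for m in good]
```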
3) Acquire the onboard GPS coordinates, altitude data, and IMU sensor parameters, as follows:
MAVROS is a ROS package developed by a third-party team for MAVLink. Once MAVROS is started and connected to the aircraft's flight controller, the node begins publishing the aircraft's sensor parameters and flight data; subscribing to the aircraft's GPS coordinate topic, GPS altitude topic, and IMU attitude-angle topic yields the corresponding data.
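A minimal sketch of such subscriptions; the topic names below are common MAVROS defaults (/mavros/global_position/global for GPS position and altitude, /mavros/imu/data for attitude) and may differ between flight stacks:

```python
import rospy
from sensor_msgs.msg import NavSatFix, Imu

def gps_cb(msg):
    lat, lon, alt = msg.latitude, msg.longitude, msg.altitude

def imu_cb(msg):
    q = msg.orientation  # attitude quaternion (x, y, z, w)

rospy.init_node("flight_data_listener")  # hypothetical node name
rospy.Subscriber("/mavros/global_position/global", NavSatFix, gps_cb)
rospy.Subscriber("/mavros/imu/data", Imu, imu_cb)
rospy.spin()
```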
4) Obtain the three-dimensional coordinate information of the two-dimensional features from the airframe parameters, as follows:
4.1) Conversion between the image coordinate system and the image-plane coordinate system
The image coordinate system is the pixel coordinate system [u, v]^T with its origin at the top-left corner of the image. This coordinate system has no physical unit, so the image-plane coordinate system I = [x_I, y_I]^T, whose origin O_I lies on the optical axis, is introduced. The image plane is the physically meaningful plane constructed by the camera according to the pinhole imaging model. Let dx and dy be the physical size of each pixel along the u-axis and v-axis directions; they denote the actual pixel size on the photosensitive chip, bridge the image coordinate system and the real-size coordinate system, and are related to the camera focal length f. A point (x1, y1) in the image-plane coordinate system then corresponds to the point (u1, v1) in the pixel coordinate system as follows:
$$\begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \tag{1}$$
Here (u_0, v_0) is the center point of the image coordinate system, i.e., the pixel corresponding to the origin of the image-plane coordinate system. Let
$$K = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$
which contains the four parameters related to the camera's internal structure and is called the camera's intrinsic parameter matrix;
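Eq. (1) inverted, as a small sketch: mapping a detected pixel (u1, v1) back into image-plane coordinates given the four intrinsic parameters obtained from camera calibration:

```python
def pixel_to_image_plane(u1, v1, dx, dy, u0, v0):
    """Invert Eq. (1): map a pixel (u1, v1) to image-plane coordinates,
    given the pixel sizes dx, dy and the principal point (u0, v0)."""
    x1 = (u1 - u0) * dx
    y1 = (v1 - v0) * dy
    return x1, y1
```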
4.2) Conversion between the image-plane coordinate system and the camera coordinate system
Let P_C1 = (x_C, y_C, z_C) be a point in the camera coordinate system, and let P_I1 = (x_I, y_I) be its projection through the optical center onto the image plane; the coordinate conversion between these two points is:
$$x_I = f\,\frac{x_C}{z_C}, \qquad y_I = f\,\frac{y_C}{z_C} \tag{2}$$
This can be written in matrix form as follows:
$$z_C \begin{bmatrix} x_I \\ y_I \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} \tag{3}$$
where f is the camera focal length;
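A sketch of inverting Eqs. (2) and (3): a single image-plane point only fixes a ray through the optical center, so a depth value z_C must be supplied separately; in this method it ultimately derives from the flight altitude, which is the depth-of-field idea described above:

```python
def image_plane_to_camera(x_i, y_i, f, z_c):
    """Invert Eqs. (2)-(3): an image-plane point only fixes a ray through
    the optical centre, so the depth z_c must be supplied separately."""
    return x_i * z_c / f, y_i * z_c / f, z_c
```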
4.3) Conversion between the camera coordinate system and the world coordinate system
First, because of installation error between the aircraft and the camera, let [α, β, γ]^T denote the fixed three-dimensional installation error angles and [x_e, y_e, z_e]^T the spatial offset from the camera to the origin of the airframe coordinate system. The relationship between the camera coordinate system and the body coordinate system can then be expressed by the homogeneous transform
$$T = \begin{bmatrix} R(\alpha, \beta, \gamma) & [x_e, y_e, z_e]^T \\ \mathbf{0}^T & 1 \end{bmatrix},$$
i.e.
C = TB (4)
where C denotes the camera coordinate system and B the body coordinate system;
Next, for a point P_E = (x_E, y_E, z_E) in space, the corresponding camera coordinates depend on the camera's attitude angles and position, and the UAV's attitude angles and position can be acquired in real time during flight. The quadrotor UAV is a system with six degrees of freedom; its attitude angles can be divided into the pitch angle φ, roll angle θ, and yaw angle ψ, whose rotation axes are defined as the X, Y, and Z axes respectively, with the origin of the coordinate system at the aircraft's center of gravity. The rotation matrices of the three axes are obtained separately and multiplied together to give the body rotation matrix:
$$R = R_z(\psi)\, R_y(\theta)\, R_x(\varphi) \tag{5}$$
where R_x, R_y, and R_z are the elementary rotations about the respective axes;
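Eq. (5) as a sketch, building the body rotation matrix from the three Euler angles under the axis convention stated above:

```python
import numpy as np

def body_rotation(phi, theta, psi):
    """Body rotation matrix of Eq. (5): pitch phi about X, roll theta
    about Y, yaw psi about Z, multiplied as Rz(psi) @ Ry(theta) @ Rx(phi)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(phi), -np.sin(phi)],
                   [0, np.sin(phi),  np.cos(phi)]])
    ry = np.array([[ np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
    rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx
```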
The matrix R can be obtained by quaternion solution from the three acceleration components along the x, y, and z axes and the gyroscope components measured by the IMU sensor on the quadrotor airframe. Let M = [x, y, z]^T, where (x, y, z) is the position of the UAV in space and z is its flight altitude; the UAV position (x, y, z) can be obtained from GPS and the barometer. The point (x_C, y_C, z_C) in the camera coordinate system corresponding to P_E can then be computed from the following relation:
$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = T \begin{bmatrix} R & M \\ \mathbf{0}^T & 1 \end{bmatrix}^{-1} \begin{bmatrix} x_E \\ y_E \\ z_E \\ 1 \end{bmatrix} \tag{6}$$
where T is the transformation matrix between the camera and body coordinate systems, R is the body rotation matrix, and M is the aircraft's world-coordinate position; [x_E, y_E, z_E]^T is the desired three-dimensional coordinate of the feature point.
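Putting the chain together, a sketch of Eq. (6) solved in the direction the extraction step actually needs, from the camera-frame point to its world coordinates; T, R, and M are the quantities defined above, supplied as NumPy arrays:

```python
import numpy as np

def feature_world_coords(p_c, T, R, M):
    """Solve Eq. (6) for the feature's world coordinates: p_c is the
    homogeneous camera-frame point, T the fixed camera-to-body transform
    (C = TB), R the body rotation of Eq. (5), M the vehicle position
    (x, y, z) from GPS and the barometer."""
    p_b = np.linalg.inv(T) @ p_c      # camera frame -> body frame
    body_to_world = np.eye(4)
    body_to_world[:3, :3] = R
    body_to_world[:3, 3] = M
    p_e = body_to_world @ p_b         # body frame -> world frame
    return p_e[:3]
```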