CN110319772A - Visual large-span distance measurement method based on unmanned aerial vehicle - Google Patents

Visual large-span distance measurement method based on unmanned aerial vehicle
Download PDF

Info

Publication number
CN110319772A
Authority
CN
China
Prior art keywords
key frame
imu
camera
frame
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910628362.8A
Other languages
Chinese (zh)
Other versions
CN110319772B (en)
Inventor
刘刚
杨执钧
白雪
乔丹
钟韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Electric Power
Original Assignee
Shanghai University of Electric Power
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Electric Power
Priority to CN201910628362.8A
Publication of CN110319772A
Application granted
Publication of CN110319772B
Legal status: Active (current)
Anticipated expiration


Abstract

Translated from Chinese

The invention relates to a UAV-based visual long-span distance measurement method that can be used for ranging in large scenes. A visual sensor carried by the aircraft acquires image data of the target scene, and IMU information is obtained from the flight controller; the data are transmitted to the ground station over the image-transmission link; FAST feature points are tracked with the optical flow method and key frames are screened and extracted; sensor data are fused by nonlinear optimization; the scene is reconstructed in three dimensions with multi-view stereo; and measurement points are selected in the 3D model for distance measurement. By screening key frames and image blur, the method reduces the number of redundant pictures, shortens reconstruction time, and improves reconstruction efficiency. Compared with purely image-based 3D reconstruction, it recovers a true-scale 3D model with higher accuracy; compared with GPS-image fusion, it can perform 3D reconstruction in environments without GPS signal (such as mountainous areas or indoors), improving the robustness of the system.

Description

Translated from Chinese

UAV-based visual long-span ranging method

Technical Field

The invention relates to surveying and mapping technology, and in particular to a UAV-based visual long-span distance measurement method.

Background Art

Three-dimensional reconstruction of a target scene is widely used in surveying and mapping. 3D reconstruction rebuilds two-dimensional images captured from multiple viewpoints into a 3D model of a spatial object, from which the object's dimensions can be measured. Take a flower bed as an example: one may want to measure its diameter and circumference, but the flower bed is too large to measure directly. By photographing it from a UAV, a 3D model of the flower bed can be reconstructed, and its real dimensions obtained by selecting points on the model. There are currently two approaches to 3D modeling. 1) Modeling entirely from images, where the resulting 3D model lacks a scale factor and a scale must be converted within the model. 2) Computing a real-scale 3D model by fusing GPS and images. However, this method places high demands on GPS positioning accuracy and cannot reconstruct in environments without GPS signal, such as mountainous areas or indoors.

Summary of the Invention

To address the problems of distance measurement for spatial objects, the invention proposes a UAV-based visual long-span distance measurement method. Based on a 3D-reconstruction ranging method that fuses an IMU (inertial measurement unit) with images, it recovers a true-scale 3D model while also increasing the robustness of reconstruction, so that 3D reconstruction can be performed even in environments without GPS signal.

The technical solution of the invention is a UAV-based visual long-span distance measurement method comprising the following steps:

1) A UAV collects video data and IMU data; after the video data are preprocessed, the preprocessed video data with feature points and key frames, together with the IMU data, are transmitted to the ground server;

2) Key frames are selected from the video using average parallax, tracking quality, and image blur;

3) Using sensor fusion, a coarse camera pose is obtained from IMU pre-integration and the selected key frames; nonlinear optimization over the feature points in the video frames then yields the precise camera pose, i.e. the flight trajectory;

4) Finally, dense reconstruction yields a true-scale 3D model, on which points can be selected directly for distance measurement.

Step 1) is implemented as follows:

1.1) The kalibr toolkit is first used to calibrate the intrinsic and extrinsic parameters of the UAV camera and the IMU, and the calibrated parameters are used to correct distortion in the video captured by the camera;

1.2) Corrected video data and IMU data are acquired by UAV aerial photography; these data comprise the three-axis acceleration and angular velocity of the camera and the image information;

1.3) Once the video and IMU data are obtained, FAST feature points are extracted from the first video frame and tracked with the optical flow method to obtain matched feature points in subsequent frames; key frames are screened according to the number of feature points in the image; the video data with feature points and key frames, together with the IMU data, are then transmitted wirelessly to the ground server.

Step 2) is implemented as follows:

2.1) A perspective-n-point algorithm is applied to the matched feature points to compute the rotation and translation matrices between the current frame and the previous key frame; if the tracked rotation or translation between the current frame and the previous key frame exceeds a set threshold, the current frame is taken as a new key frame;

2.2) Frames whose number of tracked features falls below a set number are also taken as new key frames, which prevents the tracked features from being lost completely and improves tracking quality;

2.3) In the absence of a reference image, a Laplacian transform is applied to all key frames and the average gradient difference between adjacent pixels is computed; this value measures blur, with lower values indicating a more blurred frame. A key frame whose average gradient difference is below the set gradient threshold is discarded.

Step 3) is implemented as follows:

3.1) The pose relationship between key frames in the world coordinate system is computed by IMU pre-integration, giving a coarse camera pose; during coarse pose estimation the IMU data are aligned with the key-frame data, and matching between key-frame images is used to constrain the IMU pre-integration error;

3.2) A nonlinear optimization then refines the feature points and camera poses to obtain the precise camera pose.

Step 4) is implemented as follows:

4.1) Epipolar search and block matching are applied to the key-frame images: for each pixel, a 3x3 pixel block on its epipolar line is compared, one candidate at a time, against the other key-frame images to obtain the pixel's position in each image;

4.2) For the 3D point in space corresponding to the same pixel across different key frames, a depth filter triangulates repeatedly so that the depth estimate converges to a stable value;

4.3) Using the camera poses, the point cloud from each key-frame viewpoint is rotated and translated into the world coordinate system; the clouds are stitched into a true-scale 3D model, on which points can be selected directly for distance measurement.

The beneficial effects of the invention are as follows. The UAV-based visual long-span distance measurement method reduces the number of redundant pictures through key-frame and image-blur screening, shortening reconstruction time and improving reconstruction efficiency. Compared with purely image-based 3D reconstruction, it recovers a true-scale 3D model with higher accuracy; compared with GPS-image fusion, it can perform 3D reconstruction in environments without GPS signal (such as mountainous areas or indoors), improving the robustness of the system.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method of the invention;

Fig. 2 is a block diagram of the method of the invention;

Fig. 3 is a schematic diagram of IMU pre-integration in an embodiment of the invention;

Fig. 4 shows the camera poses after sensor optimization in an embodiment of the invention;

Fig. 5 is a schematic diagram of the epipolar constraint in an embodiment of the invention.

Detailed Description of Embodiments

As shown in the flowcharts of Figs. 1 and 2, distances are measured on the 3D model of a given scene, including the dimensions of a target object and the distances between multiple objects. The hardware required for ranging is a UAV, a ground-station server, a monocular camera, a wireless router, and so on. The specific implementation is as follows:

1. A UAV collects video data and IMU data; after image preprocessing of the video data, the preprocessed image data and IMU data are transmitted to the ground server.

The specific steps of step 1 are:

1.1 The kalibr toolkit is first used to calibrate the intrinsic and extrinsic parameters of the UAV camera and the IMU, and the calibrated parameters are used to correct distortion in the video captured by the camera.
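For illustration, the sketch below shows how calibrated parameters could be applied to undistort every frame with OpenCV. The intrinsic matrix and distortion coefficients shown are placeholders standing in for the output of a kalibr camera-IMU calibration, not values from the patent.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients; in practice these come
# from the kalibr camera-IMU calibration output, not from this sketch.
K = np.array([[458.0, 0.0, 367.0],
              [0.0, 457.0, 248.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.28, 0.07, 0.0, 0.0])  # k1, k2, p1, p2

def undistort_video(path_in, path_out):
    """Undistort every frame of a video with the calibrated camera model."""
    cap = cv2.VideoCapture(path_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(path_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.undistort(frame, K, dist))
    cap.release()
    writer.release()
```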

1.2 Corrected video data and IMU data are acquired by UAV aerial photography; these data comprise the three-axis acceleration and angular velocity of the camera and the image information.

1.3 Once the video and IMU data are obtained, FAST feature points are extracted from the first video frame and tracked with the optical flow method to obtain matched feature points in subsequent frames; key frames are screened according to the number of feature points in the image; the video data with feature points and key frames, together with the IMU data, are then transmitted wirelessly to the ground server.
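A minimal sketch of this step, assuming OpenCV is used: FAST corners are detected in the first frame and tracked into the following frame with pyramidal Lucas-Kanade optical flow, after which the number of surviving tracks can feed the key-frame screening. Detector parameters and the feature-count threshold are illustrative assumptions.

```python
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=20)

def detect_fast(gray):
    """Detect FAST corners and return them as an (N, 1, 2) float32 array for LK."""
    kps = fast.detect(gray, None)
    return np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

def track_lk(prev_gray, cur_gray, prev_pts):
    """Track points from prev_gray into cur_gray with pyramidal LK optical flow."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], cur_pts[good]

# Illustrative use on two consecutive BGR frames (frame0, frame1):
# g0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
# g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
# p0 = detect_fast(g0)
# p0_matched, p1_matched = track_lk(g0, g1, p0)
# if len(p1_matched) < 50:   # assumed minimum feature count for key-frame screening
#     pass                   # re-detect features / flag a new key frame
```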

2. Key frames are selected from the video using average parallax, tracking quality, and image blur.

The specific implementation steps are:

2.1 A PnP (perspective-n-point) algorithm is applied to the matched feature points to compute the rotation and translation matrices between the current frame and the previous key frame. If the tracked rotation or translation between the current frame and the previous key frame exceeds a set threshold, the current frame is taken as a new key frame.
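The following sketch illustrates this key-frame decision under the assumption that the matched features have 3D landmark positions triangulated in the previous key frame's camera coordinates; it solves PnP with RANSAC and thresholds the recovered relative rotation and translation. The threshold values are arbitrary placeholders, not values from the patent.

```python
import cv2
import numpy as np

ROT_THRESH_DEG = 10.0   # assumed rotation threshold for declaring a new key frame
TRANS_THRESH = 0.5      # assumed translation threshold (in landmark units)

def is_new_keyframe(pts3d_kf, pts2d_cur, K, dist):
    """PnP between landmarks expressed in the previous key frame's camera frame
    and their matches in the current frame; threshold the relative motion."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d_kf, np.float32),
        np.asarray(pts2d_cur, np.float32),
        K, dist)
    if not ok:
        return True  # pose could not be recovered: conservatively start a new key frame
    rot_deg = np.degrees(np.linalg.norm(rvec))  # angle of the axis-angle rotation
    trans = np.linalg.norm(tvec)
    return rot_deg > ROT_THRESH_DEG or trans > TRANS_THRESH
```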

2.2 To prevent tracking failure caused by too few feature points during trajectory tracking, frames whose number of tracked features falls below a set number are also taken as new key frames; this prevents the tracked features from being lost completely and improves tracking quality.

2.3 In the absence of a reference image, a Laplacian transform is first applied to all key frames, and the average gradient difference between adjacent pixels is then computed; this value measures blur, with lower values indicating a more blurred frame. A key frame whose average gradient difference is below the set gradient threshold is discarded.
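One plausible reading of this blur screen is sketched below: the Laplacian response of a key frame is computed, and the mean absolute difference between adjacent pixels of that response serves as the sharpness score, with lower scores meaning a more blurred frame. The exact formula and the threshold value are assumptions.

```python
import cv2
import numpy as np

BLUR_THRESH = 2.0  # assumed gradient threshold; tune per camera and scene

def mean_gradient_difference(gray):
    """Laplacian response, then mean absolute difference between adjacent pixels."""
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    dx = np.abs(np.diff(lap, axis=1)).mean()
    dy = np.abs(np.diff(lap, axis=0)).mean()
    return 0.5 * (dx + dy)

def keep_keyframe(gray):
    """Reject a key frame whose sharpness score falls below the threshold."""
    return mean_gradient_difference(gray) >= BLUR_THRESH
```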

3. Using sensor fusion, a coarse camera pose is obtained from IMU pre-integration and the selected key frames; nonlinear optimization over the feature points in the video frames then yields the precise camera pose, i.e. the flight trajectory.

The specific implementation steps are:

3.1 The pose relationship between key frames in the world coordinate system is computed by IMU pre-integration, giving a coarse camera pose. That is, in the world coordinate system, the position, velocity, and rotation of the IMU at time k are the quantities shown in Fig. 3; the pose relationship between the key frames at times i and j is obtained by IMU pre-integration, and these quantities satisfy the iterative formulas below.
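The iterative formulas themselves are rendered as images in the original publication and do not survive in this text. A standard reconstruction that is consistent with the symbol definitions below (world-frame IMU propagation of the kind used in VINS-Mono-style systems) would be:

$$p^{w}_{b_{k+1}} = p^{w}_{b_k} + v^{w}_{b_k}\,\Delta t_k + \iint_{t\in[t_k,\,t_{k+1}]}\left(R^{w}_{t}(\hat{a}_t - b_{a_t}) - g^{w}\right)dt^2$$

$$v^{w}_{b_{k+1}} = v^{w}_{b_k} + \int_{t\in[t_k,\,t_{k+1}]}\left(R^{w}_{t}(\hat{a}_t - b_{a_t}) - g^{w}\right)dt$$

$$q^{w}_{b_{k+1}} = q^{w}_{b_k} \otimes \int_{t\in[t_k,\,t_{k+1}]}\tfrac{1}{2}\,\Omega(\hat{\omega}_t - b_{\omega_t})\,q^{b_k}_{t}\,dt$$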

where: the superscript w denotes the world coordinate system; the subscript b_k denotes the IMU frame at time k; the subscript t denotes time t; the subscript b_{k+1} denotes the IMU frame at time k+1; R^w_t is the rotation of the IMU in the world frame at time t; \hat{a}_t is the IMU acceleration at time t; b_{a_t} is the IMU acceleration bias at time t; g^w is gravity in the world frame; \hat{\omega}_t is the IMU angular velocity at time t; b_{\omega_t} is the IMU angular-velocity bias at time t.

That is, the state at the current time is obtained iteratively from the state at the previous time. Pre-integrating the IMU measurements over the interval between two adjacent key frames gives the relative IMU motion between the two frames.

During coarse pose estimation, the IMU data are aligned with the key-frame data, and the matching between key-frame images is used to constrain the IMU pre-integration error.
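As a concrete illustration of the pre-integration between two adjacent key frames, the sketch below integrates raw IMU samples with a simple forward-Euler scheme. It only conveys the idea; a full implementation would also propagate the covariance and bias Jacobians, which the patent does not detail.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(imu_samples, acc_bias, gyro_bias):
    """Integrate (dt, acc, gyro) samples between two key frames.
    Returns the relative rotation, velocity change, and position change
    expressed in the first key frame's IMU frame; gravity is not yet removed."""
    R = Rotation.identity()   # rotation of the current IMU frame w.r.t. frame k
    dv = np.zeros(3)          # accumulated velocity change
    dp = np.zeros(3)          # accumulated position change
    for dt, acc, gyro in imu_samples:
        a = R.apply(np.asarray(acc) - acc_bias)   # bias-corrected specific force in frame k
        dp += dv * dt + 0.5 * a * dt * dt
        dv += a * dt
        R = R * Rotation.from_rotvec((np.asarray(gyro) - gyro_bias) * dt)
    return R, dv, dp
```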

3.2 A nonlinear optimization refines the feature points and camera poses to obtain the precise camera pose. First, the poses, velocities, camera-IMU extrinsics, and other state quantities are stacked into a single state vector for nonlinear optimization. The state vector is given by equation (4), which is composed of the sub-vectors defined in (5) and (6).

where: X is the full state vector; X_k is the IMU state variable; b_g is the IMU gyroscope bias; b_a is the IMU acceleration bias; x^b_c is the state variable of the camera relative to the IMU; p^b_c is the translation between the camera and the IMU; q^b_c is the rotation between the camera and the IMU; λ_i are the feature-point coordinates, i = 1, 2, ..., m.
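Equations (4)-(6) appear only as images in the original publication. A reconstruction consistent with the definitions above (and with the usual monocular visual-inertial formulation) could read:

$$X = \left[\,X_0,\ X_1,\ \dots,\ X_n,\ x^{b}_{c},\ \lambda_0,\ \lambda_1,\ \dots,\ \lambda_m\,\right] \tag{4}$$

$$X_k = \left[\,p^{w}_{b_k},\ v^{w}_{b_k},\ q^{w}_{b_k},\ b_a,\ b_g\,\right],\qquad k \in [0, n] \tag{5}$$

$$x^{b}_{c} = \left[\,p^{b}_{c},\ q^{b}_{c}\,\right] \tag{6}$$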

The nonlinear optimization objective function is as follows:

where: \hat{z}^{\beta_k}_{\beta_{k+1}} is the IMU-based displacement observation; \hat{z}^{C_j}_l is the image-based displacement observation; r_β is the IMU residual function; r_C is the image residual function; k is the time index, β denotes the IMU, l indexes the feature point, j indexes the frame, and C denotes the camera.
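The objective function is likewise only available as an image; a reconstruction consistent with the residual terms defined above would jointly minimize the IMU and visual residuals over all key frames:

$$\min_{X}\ \left\{\ \sum_{k\in\mathcal{B}}\left\| r_{\beta}\!\left(\hat{z}^{\beta_k}_{\beta_{k+1}},\,X\right)\right\|^2 \;+\; \sum_{(l,j)\in\mathcal{C}}\left\| r_{C}\!\left(\hat{z}^{C_j}_{l},\,X\right)\right\|^2 \right\}$$

Here $\mathcal{B}$ is the set of IMU pre-integration intervals between consecutive key frames and $\mathcal{C}$ is the set of feature observations. In VINS-Mono-style systems these residuals are weighted by their covariances and a marginalization prior is added; the patent text does not state this explicitly.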

The camera poses after optimization are shown in Fig. 4.

4. Finally, dense reconstruction yields a true-scale 3D model, on which points can be selected directly for distance measurement.

The specific implementation steps are:

4.1 Epipolar search and block matching are applied to the key-frame images: for each pixel, a 3x3 pixel block on its epipolar line is compared, one candidate at a time, against the other key-frame images to obtain the pixel's position in each image. As shown in Fig. 5, d is the depth of the 3D point; o1 and o2 are the camera centers of the first and second images; R and T are the rotation and translation matrices between the two frames; p1 and p2 are the pixels in the first and second images; and l2 is the epipolar line of p1 in the second image.
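A compact sketch of the epipolar search with 3x3 block matching: for a pixel in the reference key frame, candidate positions are sampled along its epipolar line in another key frame (the line is obtained from an assumed fundamental matrix F) and scored with zero-mean normalized cross-correlation. Patch size, sampling step, and the handling of near-vertical lines are simplifications.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return (a * b).sum() / denom

def epipolar_block_match(ref_img, cur_img, p_ref, F, half=1, step=1.0):
    """Search along the epipolar line of p_ref in cur_img for the best 3x3 match.
    p_ref is an integer (x, y) pixel assumed to lie at least `half` pixels
    from the image border; F is the fundamental matrix from ref to cur."""
    a, b, c = (F @ np.array([p_ref[0], p_ref[1], 1.0])).ravel()  # line ax + by + c = 0
    h, w = cur_img.shape[:2]
    ref_patch = ref_img[p_ref[1]-half:p_ref[1]+half+1, p_ref[0]-half:p_ref[0]+half+1]
    best_score, best_xy = -1.0, None
    for x in np.arange(half, w - half, step):
        if abs(b) < 1e-9:
            continue  # near-vertical epipolar line not handled in this sketch
        y = int(round(-(a * x + c) / b))
        if y < half or y >= h - half:
            continue
        cur_patch = cur_img[y-half:y+half+1, int(x)-half:int(x)+half+1]
        score = zncc(ref_patch, cur_patch)
        if score > best_score:
            best_score, best_xy = score, (int(x), y)
    return best_xy, best_score
```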

4.2 For the 3D point in space corresponding to the same pixel across different key frames, a depth filter triangulates repeatedly so that the depth estimate converges to a stable value.
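The convergence behaviour of this step can be illustrated with a plain Gaussian depth filter: each key frame contributes one triangulated depth with an uncertainty, and the per-pixel estimate is fused until its variance drops below a threshold. This generic filter is shown only to illustrate the idea; the patent does not specify the exact filter equations.

```python
import numpy as np

class GaussianDepthFilter:
    """Per-pixel depth estimate fused over repeated triangulations."""

    def __init__(self, init_depth, init_var):
        self.mu = float(init_depth)   # current depth estimate
        self.var = float(init_var)    # current variance

    def update(self, depth_obs, obs_var):
        """Fuse one new triangulated depth (product of two Gaussians)."""
        new_var = self.var * obs_var / (self.var + obs_var)
        self.mu = new_var * (self.mu / self.var + depth_obs / obs_var)
        self.var = new_var

    def converged(self, var_thresh=1e-3):
        return self.var < var_thresh
```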

4.3 Using the camera poses, the point cloud from each key-frame viewpoint is rotated and translated into the world coordinate system; the clouds are stitched into a true-scale 3D model, on which points can be selected directly for distance measurement.
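Finally, a sketch of this step: per-key-frame points expressed in camera coordinates are transformed into the world frame with the optimized camera-to-world poses and concatenated, after which a measured distance is simply the Euclidean norm between two points selected on the model. Pose and point arrays are assumed inputs.

```python
import numpy as np

def to_world(points_cam, R_wc, t_wc):
    """Transform an (N, 3) array of camera-frame points into the world frame."""
    return points_cam @ R_wc.T + t_wc

def stitch_model(keyframe_clouds, poses):
    """Concatenate per-key-frame clouds into one world-frame point cloud.
    `poses` is a list of (R_wc, t_wc) camera-to-world transforms."""
    return np.vstack([to_world(pts, R, t) for pts, (R, t) in zip(keyframe_clouds, poses)])

def measure(point_a, point_b):
    """Distance between two points selected on the reconstructed model (metric units)."""
    return float(np.linalg.norm(np.asarray(point_a) - np.asarray(point_b)))
```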

Experimental results: a 3D model was obtained with the UAV-based visual long-span distance measurement method, and true-scale distances can be measured on this model. With the ranging system of the invention, surveying and mapping personnel can carry out their work very conveniently.

Claims (5)

CN201910628362.8A | 2019-07-12 | 2019-07-12 | Visual long-span ranging method based on UAV | Active | CN110319772B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910628362.8A (CN110319772B) | 2019-07-12 | 2019-07-12 | Visual long-span ranging method based on UAV

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910628362.8A (CN110319772B) | 2019-07-12 | 2019-07-12 | Visual long-span ranging method based on UAV

Publications (2)

Publication Number | Publication Date
CN110319772A | 2019-10-11
CN110319772B (en) | 2020-12-15

Family

ID=68122084

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910628362.8A (Active, CN110319772B) | Visual long-span ranging method based on UAV | 2019-07-12 | 2019-07-12

Country Status (1)

Country | Link
CN (1) | CN110319772B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104424630A (en) * | 2013-08-20 | 2015-03-18 | 华为技术有限公司 | Three-dimension reconstruction method and device, and mobile terminal
CN105953796A (en) * | 2016-05-23 | 2016-09-21 | 北京暴风魔镜科技有限公司 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN106123802A (en) * | 2016-06-13 | 2016-11-16 | 天津大学 | A kind of autonomous flow-type 3 D measuring method
US20180112985A1 (en) * | 2016-10-26 | 2018-04-26 | The Charles Stark Draper Laboratory, Inc. | Vision-Inertial Navigation with Variable Contrast Tracking Residual
CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map structuring system based on monocular vision and IMU information
CN108413917A (en) * | 2018-03-15 | 2018-08-17 | 中国人民解放军国防科技大学 | Non-contact three-dimensional measuring system, non-contact three-dimensional measuring method and measuring device
CN109520497A (en) * | 2018-10-19 | 2019-03-26 | 天津大学 | The unmanned plane autonomic positioning method of view-based access control model and imu

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111047620A (en) * | 2019-11-15 | 2020-04-21 | 广东工业大学 | A UAV Visual Odometry Method Based on Depth Point-Line Features
CN113125791A (en) * | 2019-12-30 | 2021-07-16 | 南京智能情资创新科技研究院有限公司 | Motion camera speed measurement method based on characteristic object and optical flow method
CN113125791B (en) * | 2019-12-30 | 2023-10-20 | 南京智能情资创新科技研究院有限公司 | Motion camera speed measuring method based on characteristic object and optical flow method
CN112149495A (en) * | 2020-08-07 | 2020-12-29 | 中国矿业大学(北京) | Video key frame extraction method based on parallax tracking
CN112149495B (en) * | 2020-08-07 | 2023-07-28 | 中国矿业大学(北京) | A video key frame extraction method based on parallax tracking
CN112365537B (en) * | 2020-10-13 | 2023-06-27 | 天津大学 | Active camera repositioning method based on three-dimensional point cloud alignment
CN112365537A (en) * | 2020-10-13 | 2021-02-12 | 天津大学 | Active camera repositioning method based on three-dimensional point cloud alignment
CN112697044A (en) * | 2020-12-17 | 2021-04-23 | 北京航空航天大学 | Static rigid object vision measurement method based on unmanned aerial vehicle platform
CN112505065A (en) * | 2020-12-28 | 2021-03-16 | 上海工程技术大学 | Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN113284224A (en) * | 2021-04-20 | 2021-08-20 | 北京行动智能科技有限公司 | Automatic mapping method and device based on simplex code, and positioning method and equipment
CN113139949A (en) * | 2021-04-30 | 2021-07-20 | 逻腾(杭州)科技有限公司 | Robot image ambiguity detection method
CN115272494A (en) * | 2022-09-29 | 2022-11-01 | 腾讯科技(深圳)有限公司 | Calibration method and device for camera and inertial measurement unit and computer equipment
CN116380004A (en) * | 2022-12-08 | 2023-07-04 | 亿咖通(湖北)技术有限公司 | Method, device, equipment and storage medium for distance measurement of an object

Also Published As

Publication number | Publication date
CN110319772B (en) | 2020-12-15

Similar Documents

Publication | Title
CN110319772A (en) | Visual large-span distance measurement method based on unmanned aerial vehicle
CN110070615B (en) | Multi-camera cooperation-based panoramic vision SLAM method
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN111275750B (en) | Indoor space panoramic image generation method based on multi-sensor fusion
CN112233177B (en) | A method and system for estimating position and attitude of unmanned aerial vehicle
CN106447766B (en) | A kind of scene reconstruction method and device based on mobile device monocular camera
CN113850126A (en) | Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
KR100912715B1 (en) | Digital photogrammetry method and device by heterogeneous sensor integrated modeling
CN115471534A (en) | Method and equipment for 3D reconstruction of underwater scene based on binocular vision and IMU
CN108665499B (en) | Near distance airplane pose measuring method based on parallax method
CN110799921A (en) | Filming method, device and drone
CN110033489A (en) | A kind of appraisal procedure, device and the equipment of vehicle location accuracy
CN107560603B (en) | Unmanned aerial vehicle oblique photography measurement system and measurement method
CN112465969A (en) | Real-time three-dimensional modeling method and system based on unmanned aerial vehicle aerial image data
JP2012118666A (en) | Three-dimensional map automatic generation device
CN106920276A (en) | A kind of three-dimensional rebuilding method and system
CN112598706B (en) | Multi-camera moving target three-dimensional track reconstruction method without accurate time-space synchronization
CN111815765A (en) | An Image 3D Reconstruction Method Based on Heterogeneous Data Fusion
CN114638897B (en) | Multi-camera system initialization method, system and device based on non-overlapping views
CN110986888A (en) | Aerial photography integrated method
Wendel et al. | Automatic alignment of 3D reconstructions using a digital surface model
CN110068306A (en) | A kind of unmanned plane inspection photometry system and method
CN114114311B (en) | A relative pose measurement method for non-cooperative spacecraft based on multi-source information fusion
CN105389819A (en) | Robust semi-calibrating down-looking image epipolar rectification method and system
Wu | Photogrammetry: 3-D from imagery

Legal Events

Code | Title/Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
