Technical Field
The invention belongs to the technical field of unmanned aerial vehicle (UAV) navigation and positioning, and in particular relates to a monocular visual odometry method that exploits edge point information in images.
Background Art
In many application scenarios, GPS cannot provide accurate positioning, for example indoors, between urban buildings, in jungles or valleys, or even on other planets. Using vision for navigation is intuitive: all sighted animals in nature use vision for some form of navigation, insects such as flies and bees judge the relative motion between themselves and a target through optical flow, and most of the time humans determine where they are from what they see.
Without GPS, and for reasons of cost and weight, most micro UAVs adopt a navigation scheme that combines an inertial measurement unit with a single camera, optionally supplemented by ultrasonic and barometric sensors to obtain absolute altitude information. A UAV with this configuration can achieve fully autonomous waypoint tracking.
The process in which a carrier equipped with one or more cameras estimates its own motion using only the image input is called visual odometry (VO). Application areas include robotics, augmented reality, and autonomous driving. The process is analogous to traditional wheel odometry: wheel odometry computes mileage by accumulating wheel rotations, whereas visual odometry incrementally estimates the carrier pose by perceiving changes in the input images. Effective operation of a visual odometry algorithm requires sufficient illumination and sufficiently rich scene texture.
Monocular visual odometry uses only a single camera as input; the system configuration is simple, and its ability to adapt to changes in environmental scale is stronger than that of multi-camera vision systems. Existing monocular visual odometry methods generally rely on corner features in the image for frame-to-frame matching and cannot cope with scenes lacking corner features (corner features are important image features, representing pixels that stand out in certain attributes; in the scene of Fig. 1, corner features mostly appear at intersections of straight lines, are few in number, and are structurally repetitive, so existing corner-based monocular visual odometry algorithms are very likely to fail). In addition, these corner features account for only a small fraction of all image pixels, so from the viewpoint of image-information utilization the performance of monocular visual odometry can still be improved.
The terms used in the present invention are explained as follows:
Frame: in the field of visual odometry, each acquired image is customarily called a frame. For example, the image acquired by the camera at the previous instant is called the previous frame, the image acquired at the current instant is called the current frame, and two consecutively acquired images are called adjacent frames, etc.;
Keyframe: because current cameras have high frame rates, the pose change between adjacent frames is usually small. To improve the accuracy of motion estimation, a keyframe strategy is generally adopted: within a certain range of pose change, each newly acquired image is aligned only against one particular frame to estimate the current pose, and only when that range is exceeded is a new particular frame chosen for the next stage of image alignment. These particular frames used for image alignment are called keyframes;
Reference frame: the frame against which the current image is aligned is called the reference frame of the current image;
Map: in the field of visual odometry, the stored known environmental information (e.g., positions of points already computed, images already acquired) is called the map. The map can serve as prior information for subsequent image matching and motion estimation, increasing the accuracy of the odometry.
Summary of the Invention
Aiming at the deficiencies of the prior art, the present invention proposes a monocular visual odometry method based on edge point information in images. It can effectively exploit the edge point information in the image, combined with corner point information as an auxiliary, and therefore adapts better to scenes lacking corner features.
The method for UAV monocular visual odometry based on edge point information in images proposed by the present invention is characterized in that it comprises initialization followed by motion estimation, map construction, and depth map estimation processed in parallel; it specifically comprises the following steps:
1) Initialization: two frames are selected from the image sequence captured by the downward-looking monocular camera rigidly attached to the UAV to build the initial map and the initial depth map. The map is represented by a set of keyframes {r_m} and a set of three-dimensional feature points {p_i}, where r denotes a keyframe, the subscript m denotes the m-th keyframe (m a positive integer), p denotes a three-dimensional feature point, and the subscript i denotes the i-th three-dimensional feature point (i a positive integer). The three-dimensional feature points correspond to corner features in the image, and the corresponding set of two-dimensional corner points is denoted {u_{c,i}}, where u_c denotes a corner point and the subscript i denotes the i-th corner point (i a positive integer). The depth map consists of edge point coordinates, edge point depth values, and the uncertainties of those depth values; depth maps correspond one-to-one with keyframes and are denoted D^m = {(u_{e,i}, d_{e,i}, σ_{e,i})}, where D denotes the depth map of edge points, the superscript m denotes the m-th depth map (m a positive integer), u_e denotes an edge point, d_e denotes the depth value of the edge point, σ_e denotes the uncertainty of the inverse depth of the edge point, and the subscript i denotes the i-th edge point (i a positive integer);
The first frame is taken as the first keyframe in the map, and the camera coordinate system corresponding to the first frame is taken as the world coordinate system, completing the initialization;
2) After the map and depth map information have been initialized, three threads (motion estimation, map construction, and depth map estimation) run in parallel: the motion estimation thread aligns the current frame against the known map and depth map information to obtain the odometry result and optimizes the existing map information according to that result, while the map construction and depth map estimation threads run concurrently to maintain the map and depth map information.
Features and beneficial effects of the present invention:
The monocular visual odometry method based on edge point information in images proposed by the present invention takes as input the images captured by the downward-looking monocular camera rigidly attached to the UAV; after computation on the onboard computer, the output is the UAV odometry result. The method is implemented by three parallel threads: a motion estimation thread, a map construction thread, and a depth map estimation thread. The motion estimation thread uses the existing information in the map and depth map to align against the current frame and estimate the current pose; it is the core of the odometry algorithm and must guarantee strong real-time performance. The map construction thread maintains the keyframe information and the three-dimensional feature map corresponding to the image corner features. The depth map estimation thread maintains the depth estimates of the edge point information in the keyframes.
The present invention runs motion estimation, map construction, and depth map estimation in three separate threads, fully exploiting the multi-core architecture of modern processors and improving algorithm efficiency. The motion estimation algorithm contains no feature extraction or matching step, map construction uses a fast feature point extraction algorithm, and depth map estimation directly extracts pixels with large intensity gradients, all of which effectively reduce the computational load. Furthermore, the initial estimation stage of the motion estimation algorithm uses the edge point information in the image, and a corresponding depth map propagation method for edge points is designed. Edge points are a far more abundant image feature than corner points; using edge point information in the initial estimate reduces the algorithm's dependence on corner features and enhances its adaptability to the environment.
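To illustrate the three-thread organization described above, the following is a minimal skeleton in which motion estimation runs in the caller's loop while map construction and depth map estimation consume frames from queues behind a shared lock. The class, queue, and method names are illustrative assumptions, not the patent's implementation.

```python
import queue
import threading

class OdometrySystem:
    """Skeleton of the three-thread layout: motion estimation in the caller's loop,
    map construction and depth map estimation consuming frames from queues."""

    def __init__(self):
        self.map_lock = threading.Lock()      # guards keyframes, 3-D points, and depth maps
        self.keyframes, self.points, self.depth_maps = [], [], []
        self.map_queue = queue.Queue()        # (frame, pose, is_keyframe) for map construction
        self.depth_queue = queue.Queue()      # (frame, pose, is_keyframe) for depth map estimation
        for worker in (self._map_construction, self._depth_estimation):
            threading.Thread(target=worker, daemon=True).start()

    def track(self, frame):
        """Motion estimation: align the new frame against the map / depth map, then hand it off."""
        with self.map_lock:
            pose, is_keyframe = self._estimate_pose(frame)
        self.map_queue.put((frame, pose, is_keyframe))
        self.depth_queue.put((frame, pose, is_keyframe))
        return pose

    def _estimate_pose(self, frame):
        # Placeholder for steps S11 to S15; returns (pose, is_keyframe).
        return None, False

    def _map_construction(self):
        while True:
            frame, pose, is_kf = self.map_queue.get()
            # step S21 on keyframes, steps S22 to S24 otherwise (placeholder)

    def _depth_estimation(self):
        while True:
            frame, pose, is_kf = self.depth_queue.get()
            # steps S31 to S33 on keyframes, steps S34 to S36 otherwise (placeholder)
```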
Brief Description of the Drawings
Fig. 1 shows a scene in which a corner-feature-based monocular visual odometry method may fail;
Fig. 2 is the overall flowchart of the improved edge-point-based monocular visual odometry method proposed by the present invention;
Fig. 3 is a flowchart of the motion estimation thread in an embodiment of the present invention;
Fig. 4 is a flowchart of the map construction thread in an embodiment of the present invention;
Fig. 5 is a flowchart of the depth map estimation thread in an embodiment of the present invention.
Detailed Description
The present invention is described in detail below in conjunction with the accompanying drawings and embodiments.
The overall flow of the method for UAV monocular visual odometry based on edge point information in images proposed by the present invention is shown in Fig. 2. The method is characterized in that it comprises initialization followed by motion estimation, map construction, and depth map estimation processed in parallel; it specifically comprises the following steps:
1) Initialization: two frames are selected from the image sequence captured by the downward-looking monocular camera rigidly attached to the UAV to build the initial map and the initial depth map. In this embodiment, the map is represented by a set of keyframes {r_m} and a set of three-dimensional feature points {p_i}, where r denotes a keyframe, the subscript m denotes the m-th keyframe (a positive integer), p denotes a three-dimensional feature point, and the subscript i denotes the i-th three-dimensional feature point (a positive integer). The three-dimensional feature points correspond to corner features in the image, and their corresponding set of two-dimensional corner points is generally denoted {u_{c,i}}, where u_c denotes a corner point and the subscript i denotes the i-th corner point (a positive integer);
In this embodiment, a corresponding depth map is computed for each keyframe in the map. The depth map consists of edge point coordinates, edge point depth values, and the uncertainties of those depth values, i.e., the depth map is denoted D^m = {(u_{e,i}, d_{e,i}, σ_{e,i})}, where D denotes the depth map of edge points, the superscript m denotes the m-th depth map (a positive integer; depth maps correspond one-to-one with keyframes), u_e denotes an edge point, d_e denotes the depth value of the edge point, σ_e denotes the uncertainty of the inverse depth of the edge point, and the subscript i denotes the i-th edge point (a positive integer).
The first frame is taken as the first keyframe in the map, and the camera coordinate system corresponding to the first frame is taken as the world coordinate system (the world coordinate system is a coordinate system with a known transformation to a real-world coordinate system such as the north-east-down frame; in this embodiment, for convenience of computation, the camera coordinate system of the first frame is chosen as the world coordinate system), which completes the initialization;
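To make the above notation concrete, the following is a small sketch of the map and depth map containers; the class and field names are illustrative assumptions rather than the patent's data structures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class Keyframe:
    """One keyframe r_m: its image, its pose T_{m,w}, and its edge-point depth map D^m."""
    image: np.ndarray
    T_m_w: np.ndarray                                                     # 4x4 pose of the keyframe in the world frame
    corners: List[Tuple[float, float]] = field(default_factory=list)      # 2-D corner points u_c
    edge_points: List[Tuple[float, float]] = field(default_factory=list)  # edge points u_e
    edge_depth: List[float] = field(default_factory=list)                 # edge point depths d_e
    edge_sigma: List[float] = field(default_factory=list)                 # inverse-depth uncertainties sigma_e

@dataclass
class SparseMap:
    """The map: the set of keyframes {r_m} and the set of 3-D feature points {p_i}."""
    keyframes: List[Keyframe] = field(default_factory=list)
    points: List[np.ndarray] = field(default_factory=list)                # corner-feature 3-D points p_i
```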
2) After the map and depth map information have been initialized, three threads (motion estimation, map construction, and depth map estimation) run in parallel: the motion estimation thread aligns the current frame against the known map and depth map information to obtain the odometry result and optimizes the existing map information according to that result, while the map construction and depth map estimation threads run concurrently to maintain the map and depth map information. The three threads are described in detail as follows:
21) Motion estimation thread: for the current frame newly acquired by the camera, the edge point depth information in the current keyframe is used to perform motion-model-based image alignment, giving the initial estimate T_{k,w}^- of the pose T_{k,w} of the current frame in the world coordinate system, where T denotes a pose, the subscript k indicates that the current frame is the k-th image acquired by the camera (a positive integer), the subscript w denotes the world coordinate system, and the superscript "-" denotes the initial estimate. Using this initial estimate, the three-dimensional feature points {p_i} already in the map can be projected into the current image, and block matching yields the two-dimensional corner points {u'_{c,i}} corresponding to {p_i}. Taking {u'_{c,i}} as measurements and T_{k,w}^- as the initial value, the pose T_{k,w} of the current frame in the world coordinate system w and the positions of the relevant three-dimensional feature points in the map are optimized in turn. A specific embodiment of this motion estimation is shown in Fig. 3 and comprises:
S11: Denote the current frame image by I_k, where I denotes an image acquired by the camera. First, the current pose of the UAV is initialized to the pose at the previous instant, i.e. T_{k,w}^- = T_{k-1,w}. Taking the current keyframe r_m as the reference frame, the Gauss-Newton iterative algorithm is used to minimize the weighted intensity residual between I_k and r_m, as in Eq. (1-1):

T_{k,m} = \arg\min_T \tfrac{1}{2} \sum_{u_i \in \bar{R}_m} w_i(u_i)\, \| \delta R(T, u_i) \|^2        (1-1)
obtaining the pose T_{k,m} of the current frame I_k relative to the reference frame r_m, where \bar{R}_m denotes the set of edge points in the reference frame r_m whose inverse-depth uncertainty is below the set uncertainty threshold (1/200 of the maximum estimated inverse depth) and which are visible in the current frame, u_i denotes a pixel in the image (u denotes a pixel, the subscript i denotes the i-th pixel, a positive integer), w_i(u_i) denotes the weight of pixel u_i and is computed from σ_i, the uncertainty of the depth of the i-th pixel, and the intensity residual δR(T_{k,m}, u_i) of pixel u_i is given by Eq. (1-2):

\delta R(T_{k,m}, u_i) = I_k\big(\pi(T_{k,m}\, \pi^{-1}(u_i, d_{u_i}))\big) - r_m(u_i)        (1-2)
where π is the projection function that projects a three-dimensional point in the camera coordinate system into the two-dimensional image coordinate system, π^{-1} is the back-projection function, and I_k(·) and r_m(·) denote the intensity values of a pixel in frame I_k and frame r_m, respectively. Combining this with the pose T_{m,w} of the reference frame r_m relative to the world coordinate system, the initial estimate of the current frame pose can be computed as Eq. (1-3):

T_{k,w}^- = T_{k,m}\, T_{m,w}        (1-3)
If the number of edge points participating in the weighting is smaller than the set edge point threshold (300 in this embodiment), go to step S12; otherwise go directly to step S13;
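To make the photometric alignment of Eqs. (1-1) to (1-3) concrete, the following is a minimal Gauss-Newton sketch over the weighted intensity residuals. The pinhole helpers, the SE(3) parameterization via `se3_exp`, the finite-difference Jacobian, and the weight 1/σ are illustrative assumptions (the patent only states that the weight depends on the depth uncertainty, and practical implementations use analytic Jacobians and coarse-to-fine pyramids).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def backproject(K, u, depth):
    """Back-project pixel u = (x, y) with known depth into the reference camera frame."""
    return depth * (np.linalg.inv(K) @ np.array([u[0], u[1], 1.0]))

def project(K, p_cam):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def se3_exp(xi):
    """Small-motion SE(3) update from a 6-vector (translation, rotation vector)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(xi[3:]).as_matrix()
    T[:3, 3] = xi[:3]
    return T

def bilinear(img, u):
    """Bilinear intensity lookup at a sub-pixel location u = (x, y)."""
    x0, y0 = int(u[0]), int(u[1])
    a, b = u[0] - x0, u[1] - y0
    return ((1 - a) * (1 - b) * img[y0, x0] + a * (1 - b) * img[y0, x0 + 1]
            + (1 - a) * b * img[y0 + 1, x0] + a * b * img[y0 + 1, x0 + 1])

def residuals(T, pts, depths, sigmas, I_k, r_m, K):
    """Weighted intensity residuals of Eqs. (1-1)/(1-2); out-of-view points contribute zero."""
    r = np.zeros(len(pts))
    for i, (u, d, s) in enumerate(zip(pts, depths, sigmas)):
        p = T[:3, :3] @ backproject(K, u, d) + T[:3, 3]
        v = project(K, p)
        if 1 <= v[0] < I_k.shape[1] - 2 and 1 <= v[1] < I_k.shape[0] - 2:
            r[i] = (1.0 / s) * (bilinear(I_k, v) - bilinear(r_m, u))  # weight assumed to be 1/sigma
    return r

def align_to_keyframe(I_k, r_m, pts, depths, sigmas, K, iters=10, eps=1e-5):
    """Gauss-Newton minimization of Eq. (1-1); the Jacobian is formed by finite differences for brevity."""
    T = np.eye(4)                                     # incremental pose, starting from the prior pose
    for _ in range(iters):
        r0 = residuals(T, pts, depths, sigmas, I_k, r_m, K)
        J = np.zeros((len(r0), 6))
        for j in range(6):
            xi = np.zeros(6)
            xi[j] = eps
            J[:, j] = (residuals(se3_exp(xi) @ T, pts, depths, sigmas, I_k, r_m, K) - r0) / eps
        dx = np.linalg.lstsq(J, -r0, rcond=None)[0]
        T = se3_exp(dx) @ T
    return T                                          # T_{k,m}; compose with T_{m,w} as in Eq. (1-3)
```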
S12: Taking the pose estimate obtained in step S11 as the initial value, the previous frame I_{k-1} is used as the reference frame of the current frame for image alignment, and the Gauss-Newton iterative optimization algorithm is used to minimize the intensity residual between I_k and I_{k-1}, giving the pose T_{k,k-1} of the current frame relative to the previous frame, as in Eq. (1-4):

T_{k,k-1} = \arg\min_T \tfrac{1}{2} \sum_{u_i \in \bar{C}_{k-1}} \| \delta R(T, u_i) \|^2        (1-4)
where \bar{C}_{k-1} denotes the set of corner points whose depth is known in frame k-1 and which are visible in the current frame (obtained by projecting the three-dimensional feature points in the map). Combining this with the pose T_{k-1,w} of the previous frame relative to the world coordinate system, the initial estimate of the current frame pose can be computed as Eq. (1-5):

T_{k,w}^- = T_{k,k-1}\, T_{k-1,w}        (1-5)
At this point, the current frame I_k is selected as a new keyframe and serves as the reference frame for the subsequent stretch of images; it is also passed as input to the map construction and depth map estimation threads;
S13: (Using the initial estimate T_{k,w}^- of the current frame pose, the three-dimensional feature points in the map can be projected into the current frame, yielding a set of two-dimensional coordinates; because of estimation errors, these coordinates are not necessarily the true imaging positions of the three-dimensional feature points in the current frame.) From the initial estimate T_{k,w}^-, the set {p_i} of three-dimensional feature points in the map that can be observed by the current frame is determined, together with their corresponding corner coordinate estimates {u_{c,i}} in the reference frame r_m. Taking these as initial values, the image tracking algorithm KLT (Kanade-Lucas-Tomasi feature tracker, a commonly used public algorithm) is used to compute the true imaging positions {u'_{c,i}} of {p_i} in the current frame, as in Eq. (1-6):

u'_{c,i} = \arg\min_{u'} \tfrac{1}{2} \sum_{\Delta u} \big\| I_k(u' + \Delta u) - r_m(u_{c,i} + A_i \Delta u) - \beta_i \big\|^2        (1-6)
where A_i denotes the transformation matrix that warps the reference patch into the current image, and β_i denotes the mean intensity difference between the reference patch and the matched patch in the current image, used to eliminate the influence of illumination (this step establishes a sub-pixel-accuracy correspondence between the three-dimensional feature points {p_i} in the map and the two-dimensional corner points {u'_{c,i}} in the image);
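As an illustration of the patch-based refinement in Eq. (1-6), the sketch below refines the pose-predicted corner positions against the current image with OpenCV's pyramidal Lucas-Kanade tracker; the affine pre-warp A_i and the illumination offset β_i are folded into the tracker's own model here, so this is a simplified stand-in rather than the exact optimization of the patent.

```python
import numpy as np
import cv2

def refine_feature_positions(ref_img, cur_img, ref_pts, predicted_pts, patch_half=4):
    """Refine the predicted 2-D positions of map points in the current frame.

    ref_pts:        corner coordinates u_c in the reference keyframe r_m
    predicted_pts:  their projections into the current frame from the initial pose estimate
    Returns the sub-pixel positions u'_c and a boolean mask of successfully tracked points.
    """
    p0 = np.float32(ref_pts).reshape(-1, 1, 2)
    p1_init = np.float32(predicted_pts).reshape(-1, 1, 2)
    p1, status, err = cv2.calcOpticalFlowPyrLK(
        ref_img, cur_img, p0, p1_init,
        winSize=(2 * patch_half + 1, 2 * patch_half + 1),
        maxLevel=2,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW,  # start from the pose-predicted location
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return p1.reshape(-1, 2), status.reshape(-1).astype(bool)
```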
S14: The two-dimensional coordinates {u'_{c,i}} obtained in step S13 and the initial pose estimate T_{k,w}^- of the current frame no longer satisfy the geometric projection constraint, i.e. Eq. (1-7):

u'_{c,i} = \pi\big(T_{k,w}^-\, p_i\big)        (1-7)
Taking T_{k,w}^- as the initial value, T_{k,w} is updated by minimizing the reprojection error; the resulting current frame pose T_{k,w} is the final result of motion estimation, as in Eq. (1-8):

T_{k,w} = \arg\min_T \tfrac{1}{2} \sum_i \big\| u'_{c,i} - \pi(T\, p_i) \big\|^2        (1-8)
T_{k,w} is the final odometry result. The procedure then goes to step S15, and the current frame pose T_{k,w} obtained in step S14 is also passed to the map construction thread and the depth map estimation thread to compute corner point depth values and edge point depth values, respectively;
S15: Using the current frame pose T_{k,w} obtained in step S14, each three-dimensional feature point in the map observable from the current frame is optimized individually, so that the sum of squared errors between its projections in all keyframes that observe it and the corresponding true imaging positions is minimized, as in Eq. (1-9):

p_i = \arg\min_p \tfrac{1}{2} \sum_j \big\| u'^{\,j}_{c,i} - \pi(T_{j,w}\, p) \big\|^2        (1-9)
In the expression above, the superscript j denotes the j-th keyframe in which the three-dimensional feature point p_i can be observed;
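The following is a minimal sketch of the two refinements in Eqs. (1-8) and (1-9), using SciPy's least-squares solver on the reprojection residuals; the pose parameterization (translation plus rotation vector) and the `project` helper are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, T, p_world):
    """Project a world-frame 3-D point through pose T (world -> camera) and intrinsics K."""
    p_cam = T[:3, :3] @ p_world + T[:3, 3]
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def pose_from_params(x):
    """Build a 4x4 pose from a 6-vector (translation, rotation vector)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(x[3:]).as_matrix()
    T[:3, 3] = x[:3]
    return T

def refine_pose(T_init, points_3d, observations, K):
    """Eq. (1-8): minimize the reprojection error over the current-frame pose only."""
    x0 = np.concatenate([T_init[:3, 3], Rotation.from_matrix(T_init[:3, :3]).as_rotvec()])
    def residual(x):
        T = pose_from_params(x)
        return np.concatenate([project(K, T, p) - u for p, u in zip(points_3d, observations)])
    return pose_from_params(least_squares(residual, x0).x)

def refine_point(p_init, keyframe_poses, observations, K):
    """Eq. (1-9): minimize the reprojection error over one map point across its observing keyframes."""
    def residual(p):
        return np.concatenate([project(K, T, p) - u for T, u in zip(keyframe_poses, observations)])
    return least_squares(residual, p_init).x
```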
22) Map construction thread, handled in two cases: if the newly acquired image was selected as a new keyframe in step S12 of the motion estimation thread, step S21 is executed to extract corner features from the newly added keyframe, and the algorithm creates a probability-based depth filter for each newly extracted corner (i.e., the depth of a corner consists of a depth value and an uncertainty, and each new observation updates the corner's depth value and uncertainty through the depth filter). If the newly acquired image is not selected as a keyframe, steps S22 to S24 are executed using the current frame pose obtained in step S14 of the motion estimation thread, and the information of the new image is used to update the probability depth filters of the corner features; when the depth uncertainty of a corner falls below the set uncertainty threshold, the corresponding three-dimensional feature point is added to the map. An embodiment of this thread is shown in Fig. 4, and the specific steps are as follows: first determine whether the new image is a keyframe; if so, execute step S21, otherwise execute steps S22 to S24;
S21: When a frame is selected as a keyframe, the FAST algorithm (Features from Accelerated Segment Test, a commonly used fast corner extraction algorithm) is used to extract corner features from that keyframe. The image is first divided evenly by an equally spaced grid (e.g., 30x30 pixels), and at most one corner is extracted in each grid cell so that the corners are distributed evenly over the image and their total number does not exceed the number of cells. For each new corner u_{c,i}, a probability depth filter is created; the joint posterior distribution of the inverse of its depth \hat d_i (where \hat d denotes the inverse depth of the corner and the subscript i denotes the i-th corner in the current keyframe, a positive integer) and the probability ρ_i that an observation of this corner is valid (where ρ denotes the valid-observation probability of the corner and the subscript i denotes the i-th corner in the current keyframe, a positive integer) is defined as Eq. (2-1):

p(\hat d_i, \rho_i \mid a_n, b_n, \mu_n, \sigma_n^2) = \mathrm{Beta}(\rho_i \mid a_n, b_n)\, N(\hat d_i \mid \mu_n, \sigma_n^2)        (2-1)
where Beta and N denote the Beta distribution and the normal distribution, respectively, the subscript n indicates that n parameter updates of the current probability depth filter have already been performed (a positive integer), the parameters a_n and b_n are the numbers of valid and invalid observations, respectively, accumulated during the recursive update, and μ_n and σ_n² are the mean and variance of the Gaussian distribution of the inverse depth. When the parameters in Eq. (2-1) are initialized, the initial values of a_n and b_n are set to 10, μ_n is initialized to the reciprocal of the average depth of the image in which the corner lies, and σ_n² is initialized to its maximum value.
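A small sketch of the seed data structure behind Eq. (2-1) is given below; the class and field names, and the specific choice of "maximum" variance, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DepthSeed:
    """Probabilistic inverse-depth filter for one corner (or edge) point, Eq. (2-1)."""
    u: tuple          # pixel coordinates in the keyframe that owns the seed
    a: float          # Beta parameter: accumulated evidence for valid observations
    b: float          # Beta parameter: accumulated evidence for invalid observations
    mu: float         # mean of the Gaussian over the inverse depth
    sigma2: float     # variance of the Gaussian over the inverse depth
    z_range: float    # width of the admissible inverse-depth interval (for the uniform outlier model)

def init_seed(u, mean_scene_depth, min_depth):
    """Initialize a seed as described in step S21: a = b = 10, mu = 1 / average depth,
    variance set to its maximum value for the admissible inverse-depth range."""
    z_range = 1.0 / min_depth
    return DepthSeed(u=u, a=10.0, b=10.0,
                     mu=1.0 / mean_scene_depth,
                     sigma2=(z_range / 2.0) ** 2,   # an assumed choice of the maximum variance
                     z_range=z_range)
```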
S22: The image I_k obtained in step S14, whose camera pose is known, is used to update the depth estimates of the corner features. Let the keyframe containing the corner u_{c,i} being updated be r_m. From the relative pose between the current frame I_k and r_m, a straight line is determined in I_k such that the pixel in I_k corresponding to u_{c,i} must lie on this line. A search is performed along this line (this procedure is usually called epipolar search); once the coordinates of the corresponding pixel have been obtained by sub-pixel-accuracy block matching, the depth of the corner is computed by triangulation (the standard computation in stereo vision).
S23: Take the reciprocal of the depth obtained in step S22, denoted \tilde d_i^{\,n}, where the subscript i indicates the i-th corner in the current keyframe (a positive integer) and the superscript n indicates that this is the n-th measurement update of the depth filter of this corner (a positive integer). The probability distribution of the inverse depth \tilde d_i^{\,n} is modelled as Eq. (2-2):

p(\tilde d_i^{\,n} \mid \hat d_i, \rho_i) = \rho_i\, N(\tilde d_i^{\,n} \mid \hat d_i, \tau_n^2) + (1 - \rho_i)\, U(\tilde d_i^{\,n} \mid \hat d_{\min}, \hat d_{\max})        (2-2)
The meaning of the expression above is: if the triangulation of the i-th corner of keyframe r_m from image I_k is a valid measurement, then the reciprocal \tilde d_i^{\,n} of the measured depth follows the normal distribution N(\hat d_i, \tau_n^2), where \tau_n^2 is the inverse-depth variance induced by an error of one pixel in the image plane; if it is an invalid measurement, then \tilde d_i^{\,n} follows the uniform distribution U(\hat d_{\min}, \hat d_{\max}), where \hat d_{\min} and \hat d_{\max} are the minimum and maximum inverse depths set according to prior information about the current scene. After the measurement update with \tilde d_i^{\,n}, the posterior probability distribution of the inverse depth is as in Eq. (2-3):

p(\hat d_i, \rho_i \mid \tilde d_i^{\,1}, \dots, \tilde d_i^{\,n}) \approx \mathrm{Beta}(\rho_i \mid a_n, b_n)\, N(\hat d_i \mid \mu_n, \sigma_n^2)        (2-3)
where C is a constant that normalizes the probability distribution. The updated parameters a_n, b_n, μ_n, and σ_n² can be computed by the moment matching method (a statistical technique in which one distribution is approximated by another).
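The moment-matching update itself is not spelled out in the text; the sketch below follows the standard Gaussian-times-Beta update commonly used for this filter formulation (in the style of Vogiatzis and Hernández), operating on the `DepthSeed` structure sketched earlier, and is offered only as one plausible realization. The valid-measurement probability used for rejection in step S24 is then a_n / (a_n + b_n).

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def update_seed(seed, x, tau2):
    """One measurement update of the inverse-depth filter by moment matching.

    seed : DepthSeed with fields a, b, mu, sigma2, z_range
    x    : new inverse-depth measurement (the reciprocal of the triangulated depth)
    tau2 : variance of that measurement (one-pixel disparity error propagated to inverse depth)
    """
    norm = np.sqrt(seed.sigma2 + tau2)
    # Posterior Gaussian under the inlier hypothesis
    s2 = 1.0 / (1.0 / seed.sigma2 + 1.0 / tau2)
    m = s2 * (seed.mu / seed.sigma2 + x / tau2)
    # Mixture weights of the inlier and outlier hypotheses
    c1 = seed.a / (seed.a + seed.b) * gauss_pdf(x, seed.mu, norm)
    c2 = seed.b / (seed.a + seed.b) * (1.0 / seed.z_range)
    c1, c2 = c1 / (c1 + c2), c2 / (c1 + c2)
    # First and second moments of the inlier probability, matched to a Beta distribution
    f = c1 * (seed.a + 1) / (seed.a + seed.b + 1) + c2 * seed.a / (seed.a + seed.b + 1)
    e = (c1 * (seed.a + 1) * (seed.a + 2) / ((seed.a + seed.b + 1) * (seed.a + seed.b + 2))
         + c2 * seed.a * (seed.a + 1) / ((seed.a + seed.b + 1) * (seed.a + seed.b + 2)))
    # Moments of the inverse depth, matched to a single Gaussian
    mu_new = c1 * m + c2 * seed.mu
    seed.sigma2 = c1 * (s2 + m * m) + c2 * (seed.sigma2 + seed.mu * seed.mu) - mu_new * mu_new
    seed.mu = mu_new
    seed.a = (e - f) / (f - e / f)
    seed.b = seed.a * (1.0 - f) / f
    return seed
```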
S24: Corners whose valid-measurement probability a_n / (a_n + b_n) is too low (less than 0.1) are discarded, and corners whose uncertainty σ_n meets the requirement (σ_n smaller than 1/200 of the maximum estimated inverse depth of the corner) are added to the map;
23) Depth map estimation thread, handled in two cases: if the newly acquired image was selected as a new keyframe in step S12 of the motion estimation thread, edge points are extracted from the newly added keyframe and steps S31 to S33 are executed; if the newly input image is not selected as a keyframe, steps S34 to S36 are executed using the current frame pose obtained in step S14 of the motion estimation thread, the probability depth filters of the edge points are updated with the new image information, and edge points with too low a valid-observation probability are discarded. The execution flow is shown in Fig. 5 and is described as follows: first determine whether the new image is a keyframe; if so, execute steps S31 to S33, otherwise execute steps S34 to S36;
S31: In the newly added keyframe, the Sobel operator (a standard method for edge point detection) is used to compute the horizontal and vertical intensity gradients G_x and G_y of the image, and the gradient magnitude map is approximated by |G_x| + |G_y|. The 2500 to 4000 pixels with the largest gradient magnitude are selected from the image; when the number of pixels with significant gradient is smaller than 2500, the gradient threshold (initially 450) is reduced to 0.95 times its previous value, and when the number is larger than 4000, the threshold is increased to 1.05 times its previous value;
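As an illustration of step S31, the sketch below extracts high-gradient edge pixels with OpenCV's Sobel operator and adapts the threshold toward the 2500 to 4000 target band; the variable names and the |Gx| + |Gy| magnitude approximation are assumptions consistent with the description above.

```python
import numpy as np
import cv2

def extract_edge_points(gray, threshold=450.0, lo=2500, hi=4000):
    """Select high-gradient pixels as edge points and adapt the threshold for the next keyframe."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.abs(gx) + np.abs(gy)          # cheap approximation of the gradient magnitude
    ys, xs = np.nonzero(mag > threshold)
    if len(xs) < lo:
        threshold *= 0.95                  # too few significant pixels: relax the threshold
    elif len(xs) > hi:
        threshold *= 1.05                  # too many: tighten it
    order = np.argsort(mag[ys, xs])[::-1][:hi]          # keep at most `hi` strongest responses
    points = np.stack([xs[order], ys[order]], axis=1)
    grad_dir = np.arctan2(gy[ys, xs][order], gx[ys, xs][order])  # used later to gate the epipolar search
    return points, grad_dir, threshold
```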
S32: Denote the new keyframe by r_m and the previous keyframe by r_{m-1}. This step propagates the depth map D_{m-1} of frame r_{m-1} to the depth map D_m of the new keyframe. In r_{m-1}, the mean of the inverse depth of an edge point u_{e,i} is \mu_i and the variance of the inverse depth is \sigma_i^2, where \mu_i denotes the mean inverse depth of the edge point, \sigma_i^2 denotes the variance of the inverse depth, and the subscript i denotes the i-th edge point in frame r_{m-1}. The relative pose from r_{m-1} to r_m is T_{m,m-1}; the position u'_{e,i} of the corresponding edge point in frame r_m can be computed from the projection relation as Eq. (3-1):

u'_{e,i} = \pi\big(T_{m,m-1}\, \pi^{-1}(u_{e,i}, d_{e,i})\big)        (3-1)
Denote the inverse depth of u'_{e,i} by \mu'_i and the variance of its inverse depth by \sigma_i'^2. Then, as Eq. (3-2):

\mu'_i = \left( \tfrac{1}{\mu_i} - t_z \right)^{-1}        (3-2)
where t_z denotes the translation of the camera along the optical axis. From this, the variance \sigma_i'^2 of the inverse depth of u'_{e,i} can be derived as Eq. (3-3):

\sigma_i'^2 = \left( \tfrac{\mu'_i}{\mu_i} \right)^4 \sigma_i^2 + \sigma_p^2        (3-3)
where \sigma_p^2 denotes the prediction uncertainty and is set empirically. Because of projection error, the coordinates of u'_{e,i} may not be integers, so u'_{e,i} is bound to the nearest pixel in r_m.
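A minimal sketch of the depth map propagation of step S32 under the reconstruction given above (depth shifted by the optical-axis translation t_z, inverse-depth variance inflated by a prediction term) follows; the exact variance model in the patent may differ, so treat this as illustrative.

```python
import numpy as np

def propagate_depth_map(edge_points, inv_depth_mu, inv_depth_var, T_m_m1, K,
                        image_shape, sigma_prop2=0.01):
    """Propagate edge-point inverse depths from keyframe r_{m-1} to the new keyframe r_m.

    T_m_m1      : 4x4 relative pose taking points from r_{m-1} coordinates to r_m coordinates
    sigma_prop2 : empirically chosen prediction uncertainty added to the variance
    Returns propagated pixel positions (rounded to the nearest pixel), means, and variances.
    """
    K_inv = np.linalg.inv(K)
    t_z = T_m_m1[2, 3]                                      # translation along the optical axis
    out_pts, out_mu, out_var = [], [], []
    for u, mu, var in zip(edge_points, inv_depth_mu, inv_depth_var):
        depth = 1.0 / mu
        p = T_m_m1[:3, :3] @ (depth * K_inv @ np.array([u[0], u[1], 1.0])) + T_m_m1[:3, 3]
        if p[2] <= 0 or depth - t_z <= 0:
            continue                                        # point left the field of view in depth
        uvw = K @ p
        u_new = np.round(uvw[:2] / uvw[2]).astype(int)      # bind to the nearest pixel, Eq. (3-1)
        if not (0 <= u_new[0] < image_shape[1] and 0 <= u_new[1] < image_shape[0]):
            continue
        mu_new = 1.0 / (depth - t_z)                        # Eq. (3-2) as reconstructed above
        var_new = (mu_new / mu) ** 4 * var + sigma_prop2    # Eq. (3-3) as reconstructed above
        out_pts.append(u_new); out_mu.append(mu_new); out_var.append(var_new)
    return np.array(out_pts), np.array(out_mu), np.array(out_var)
```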
S33: A probability depth filter is created for each edge point extracted in step S31. If a prior estimate (a point propagated in step S32) exists within a 3x3 pixel neighbourhood of an edge point, the mean of its inverse depth is initialized from those nearby prior estimates according to Eq. (3-4).
The uncertainty of the inverse depth, a parameter of the probability depth filter, is initialized to the minimum uncertainty among the prior estimates, i.e. \sigma_0^2 = \min_j \sigma_j'^2. If no prior estimate exists, the probability depth filter is initialized with the average depth and the maximum uncertainty. A probability depth filter is then constructed for each edge point in the same way as Eq. (2-1), as in Eq. (3-5):

p(\hat d_i, \rho_i \mid a_n, b_n, \mu_n, \sigma_n^2) = \mathrm{Beta}(\rho_i \mid a_n, b_n)\, N(\hat d_i \mid \mu_n, \sigma_n^2)        (3-5)
where the parameters in Eq. (3-5) are defined analogously to those in Eq. (2-1); here they are the parameters of the probability depth filter of the edge point, the initial values of a_n and b_n are set to 10, and μ_n and σ_n are initialized to the mean and uncertainty obtained above.
S34: The new keyframe r_m is set as the reference frame of the subsequent images, and the subsequent images are used to perform measurement updates of the depth map D_m of frame r_m. When image I_k is used to update D_m, let the edge point being updated be u_{e,i}. From the relative pose between I_k and r_m, a straight line is determined in I_k such that the pixel in I_k corresponding to u_{e,i} must lie on this line. If the angle between the intensity-gradient direction of the edge point and the direction of this line is smaller than the threshold (set to 25 degrees in this embodiment), the coordinates of the corresponding pixel are obtained by sub-pixel-accuracy block matching, the depth of the edge point is computed by triangulation, and step S35 is executed; if the angle is larger than the threshold, no search is performed and the procedure returns to step S34 to check the next edge point.
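The sketch below illustrates the two ingredients of step S34: gating an edge point by the angle between its gradient direction and the epipolar line, and triangulating the depth once a match is found. How the epipolar direction is obtained (e.g., from the projected endpoints of the admissible depth range) is left to the caller and is an assumption here.

```python
import numpy as np

def passes_gradient_gate(grad_dir, epi_dir, max_angle_deg=25.0):
    """Step S34 gate: search only if the edge gradient is nearly aligned with the epipolar line,
    since matching along the line is ill-conditioned otherwise.  epi_dir is a unit 2-vector."""
    g = np.array([np.cos(grad_dir), np.sin(grad_dir)])
    angle = np.degrees(np.arccos(np.clip(abs(g @ epi_dir), 0.0, 1.0)))
    return angle < max_angle_deg

def triangulate_depth(u_ref, u_cur, T_cur_ref, K):
    """Least-squares two-view triangulation: depth of the point along the reference-frame ray."""
    K_inv = np.linalg.inv(K)
    f_ref = K_inv @ np.array([u_ref[0], u_ref[1], 1.0])   # bearing in the reference keyframe
    f_cur = K_inv @ np.array([u_cur[0], u_cur[1], 1.0])   # bearing in the current frame
    R, t = T_cur_ref[:3, :3], T_cur_ref[:3, 3]
    # Solve  d_ref * (R f_ref) + t  ~=  d_cur * f_cur  for the two ray lengths in least squares
    A = np.stack([R @ f_ref, -f_cur], axis=1)
    d = np.linalg.lstsq(A, -t, rcond=None)[0]
    return d[0]                                            # depth along the reference ray
```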
S35: Take the reciprocal of the depth obtained in step S34, denoted \tilde d_i^{\,n}, where the subscript i indicates the i-th edge point in the current keyframe (a positive integer) and the superscript n indicates that this is the n-th measurement update of the depth filter of this edge point (a positive integer). The probability distribution of the inverse depth \tilde d_i^{\,n} is modelled as Eq. (3-6):

p(\tilde d_i^{\,n} \mid \hat d_i, \rho_i) = \rho_i\, N(\tilde d_i^{\,n} \mid \hat d_i, \tau_n^2) + (1 - \rho_i)\, U(\tilde d_i^{\,n} \mid \hat d_{\min}, \hat d_{\max})        (3-6)
The meaning of the expression above is: if the triangulation of the i-th edge point of keyframe r_m from image I_k is a valid measurement, then the reciprocal \tilde d_i^{\,n} of the measured depth follows the normal distribution N(\hat d_i, \tau_n^2); if it is an invalid measurement, then \tilde d_i^{\,n} follows the uniform distribution U(\hat d_{\min}, \hat d_{\max}), where \hat d_{\min} and \hat d_{\max} are the minimum and maximum inverse depths set according to prior information about the current scene. After the measurement update with \tilde d_i^{\,n}, the posterior probability distribution of the inverse depth is as in Eq. (3-7):

p(\hat d_i, \rho_i \mid \tilde d_i^{\,1}, \dots, \tilde d_i^{\,n}) \approx \mathrm{Beta}(\rho_i \mid a_n, b_n)\, N(\hat d_i \mid \mu_n, \sigma_n^2)        (3-7)
The updated parameters a_n, b_n, μ_n, and σ_n² are computed by the moment matching method.
S36: Edge points whose valid-measurement probability a_n / (a_n + b_n) is too low (less than 0.1) are discarded.
The above are only preferred embodiments of the present invention, intended only to explain the invention and not to limit it. The scope of protection of the invention is not limited thereto; any person skilled in the art may make various changes, variations, modifications, and substitutions to these embodiments without departing from the spirit, principles, and scope of the invention. Therefore, all equivalent technical solutions shall fall within the scope of the invention, and the protection scope of the invention shall be defined by the appended claims and their equivalents.