CN105261060A - Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method - Google Patents


Info

Publication number
CN105261060A
Authority
CN
China
Prior art keywords
point cloud
point
compression
scene
carrier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510437252.5A
Other languages
Chinese (zh)
Inventor
王艺楠
郝矿荣
丁永生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University
Priority to CN201510437252.5A
Publication of CN105261060A
Legal status: Pending

Abstract

Translated from Chinese

The present invention relates to a real-time three-dimensional reconstruction method for mobile scenes based on PFH and inertial navigation, divided into three stages: in the first stage, a PFH-based point cloud compression algorithm is designed; in the second stage, a point cloud registration algorithm based on inertial navigation is designed; in the third stage, point cloud fusion is completed. The invention achieves a high compression ratio and effectively reduces the number of input points fed to the ICP algorithm, saving computation time in point cloud registration and fusion; it also reduces the number of ICP iterations and accelerates convergence, thereby improving the real-time performance of the entire reconstruction algorithm.

Description

Real-time 3D reconstruction method for mobile scenes based on point cloud compression and inertial navigation

Technical Field

The invention relates to a method for real-time three-dimensional reconstruction in mobile scenes based on point cloud compression and inertial navigation, and belongs to the fields of computer vision and augmented reality.

Background Art

Three-dimensional reconstruction is one of the most active research directions in computer vision. It mainly studies how to acquire the three-dimensional information of objects in space and faithfully reproduce the physical environment inside a computer, providing accurate 3D scene information for both machines and humans.

The main research directions of 3D reconstruction include scene reconstruction, map construction, and model reconstruction. 3D scene information about the surrounding environment helps a robot understand its environment more clearly and interact with it, and plays an important role in robot navigation, archaeological surveying, disaster rescue, and other fields; 3D scene information is also used in virtual reality, military simulation, urban planning, and cultural heritage preservation. Applications such as CAD machining of industrial parts, mold making, medical plastic surgery, face modeling and recognition, and computer 3D animation frequently require 3D models of objects, which likewise demands 3D reconstruction technology. Owing to its important applications in these fields, 3D scene reconstruction has attracted growing attention from researchers and has gradually become a research hotspot across many domains.

Current 3D reconstruction methods for scene point cloud models fall into two main categories: methods based on multiple images, and methods based on infrared or laser scanning. The former recovers the three-dimensional information of building surfaces from two-dimensional images taken by digital cameras, as in the patent with publication number CN101398937, titled "Three-dimensional Reconstruction Method Based on Scattered Photo Collections of the Same Scene". Although such methods are inexpensive and highly automated, and the reconstructed 3D models contain rich texture information, they require extensive image processing and run slowly; they suit the modeling of small single buildings, but are impractical for large-scale scenes with complex terrain because of processing-time limits.

Therefore, 3D reconstruction based on infrared or laser scanning has become a research hotspot in recent years. Infrared or laser scanners directly acquire the depth of a scene, and offer high speed and high precision without being affected by surface complexity. Moreover, laser-scanning-based 3D reconstruction effectively recovers 3D models with accurate geometry and photorealistic appearance. Such models not only support scene visualization and virtual walkthroughs, but also meet higher-level needs such as data archiving, measurement, and analysis.

Summary of the Invention

The purpose of the present invention is to provide a method that moves a sensor through an indoor scene, continuously acquires point clouds from different viewpoints, and reconstructs a 3D model of the entire scene in real time.

To achieve the above purpose, the technical solution of the present invention is a real-time 3D reconstruction method for mobile scenes based on point cloud compression and inertial navigation, characterized by the following steps:

Step 1: for scene S, use the camera of a Kinect motion-sensing controller to sequentially acquire point cloud sets M1, …, Mk, …, Mn from viewpoints V1, …, Vk, …, Vn, where Mk is the point cloud acquired from viewpoint Vk at time k;

Step 2: compress the point cloud set {M1, …, Mk, …, Mn} into {P1, …, Pk, …, Pn}, where Pk is a subset of Mk satisfying a compression-ratio constraint and a feature-retention constraint, in which ρ is the preset compression ratio, Fc(·) is a function measuring the number of feature points, and δ is the required feature-point retention rate;
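The constraint formula itself appears only as an image in the source and cannot be recovered exactly. One plausible reading, sketched below purely as an assumption, is that the compressed subset must shrink the point count to at most a fraction ρ of the original while retaining at least a fraction δ of its feature points:

```python
def satisfies_compression_constraints(m_size, p_size, m_feats, p_feats, rho, delta):
    """Assumed reading of the constraint on the compressed cloud P_k:
    keep at most a fraction rho of M_k's points (compression ratio)
    while retaining at least a fraction delta of its feature points
    (the quantity measured by Fc)."""
    return (p_size / m_size) <= rho and (p_feats / m_feats) >= delta

# e.g. 100,000 points compressed to 4,000 while keeping 90 of 100 feature points
ok = satisfies_compression_constraints(100_000, 4_000, 100, 90, rho=0.05, delta=0.85)
```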

Step 3: use inertial navigation to obtain the position and angle offsets of the carrier in scene S at viewpoints V1, …, Vk, …, Vn, and convert them into the initial transformation matrices required by the ICP algorithm;

Step 4: use the initial transformation matrices to coarsely register the compressed point clouds {P1, …, Pk, …, Pn}, then determine the transformation matrices Rk and Tk with the ICP algorithm to complete fine registration;

Step 5: as the sensor of the Kinect motion-sensing controller roams through scene S, continually stitch and fuse each newly acquired point cloud with the local model until the entire 3D scene model is reconstructed.

Preferably, the third step comprises:

Step 3.1: mount the Kinect depth camera, an accelerometer, and a gyroscope on a carrier in scene S;

Step 3.2: define a fixed coordinate system, taking the starting position of the motion as the origin and the depth camera's initial direction along the positive X axis. Use the accelerometer and gyroscope to measure the carrier's acceleration along the X, Y, and Z axes; integrating the acceleration once gives the carrier's velocity, and integrating twice gives its position, thereby localizing the carrier. Meanwhile, the gyroscope provides the carrier's rotation angles;

Step 3.3: convert the carrier's relative displacement and rotation angles between poses into the initial transformation matrices between the two point clouds.

Preferably, the fourth step comprises:

Step 4.1: take the compressed point clouds Pk and Pk-1 from two different viewpoints as the input sets of the ICP algorithm, and take the inertial estimates as the ICP algorithm's initial transformation matrices;

Step 4.2: apply the initial transformation matrices to rotate and translate the compressed reference point cloud Pk, obtaining Pk′;

Step 4.3: select matching point pairs: for each feature point in Pk′, use nearest-neighbor search to find the feature point in the compressed target point cloud Pk-1 with the smallest Euclidean distance, forming a set of feature point pairs;

Step 4.4: compute the error: the sum of squared distances between the two sets of matched feature points after the transformation, denoted dk;

Step 4.5: use the matched point pairs to solve for the rotation and translation matrices of the spatial coordinate transformation with the quaternion method, apply them to the reference point cloud to obtain Pk″, re-select matching point pairs, and update dk;

Step 4.6: iterate the operations in step 4.5 until convergence, i.e., until dk falls below a threshold or a preset number of iterations is reached.

The innovations of this method are:

(1) A point cloud compression algorithm based on PFH features: it achieves a high compression ratio and effectively reduces the number of input points fed to the ICP algorithm, saving computation time in point cloud registration and fusion, while retaining enough feature points to facilitate registration.

(2) Point cloud registration based on inertial navigation: the essence of point cloud registration is computing the rotation and translation matrices R and T between two point clouds taken from different viewpoints. Many experiments use the ICP algorithm for this, but the accuracy of traditional ICP depends on the given initial value. In our experiments, the Kinect camera moves with a carrier through an indoor scene, acquiring point clouds from different viewpoints. If the Kinect camera's position on the carrier is fixed, R and T can equally be expressed by the carrier's rotation and translation between viewpoints. The position and angle offsets provided by the inertial sensors therefore supply ICP with an approximately accurate initial value, reducing the number of ICP iterations and accelerating convergence, which improves the real-time performance of the whole reconstruction algorithm.

Brief Description of the Drawings

Fig. 1 is an overview of the three-dimensional reconstruction modules;

Fig. 2 is the flowchart of the three-dimensional reconstruction algorithm;

Fig. 3 is the flowchart of the point cloud compression algorithm;

Fig. 4 is the flowchart of the point cloud registration algorithm.

Detailed Description

To make the present invention clearer, preferred embodiments are described in detail below with reference to the accompanying drawings.

The present invention mainly studies how to move a sensor through an indoor scene, continuously acquire point clouds from different viewpoints, and reconstruct a 3D model of the entire scene in real time.

Problem statement:

For a scene S, the Kinect camera sequentially acquires point cloud sets {M1, …, Mk, …, Mn} from viewpoints {V1, …, Vk, …, Vn}. The goal of mobile 3D reconstruction is to take the point cloud data acquired from different viewpoints and, under a reconstruction algorithm R(·), compute S′ = R(M1, …, Mk, …, Mn) such that S and S′ satisfy the constraint of a similarity measure E(·):

|E(S) - E(S′)| < ε

Problem analysis:

Because the scene contains a huge number of points, real-time operation first requires preprocessing such as denoising and compression of the acquired point clouds. Meanwhile, the most important step of the reconstruction algorithm R(·) is computing the transformation matrices Rk and Tk needed to register Mk with Mk-1, and the registration process depends on the feature point pairs in Mk and Mk-1. The compression algorithm must therefore not only achieve a high compression ratio but also retain enough feature points.

The ICP algorithm is the classic method for point cloud registration, but without an initial registration estimate, or with a poor one, traditional ICP easily falls into a local optimum, slowing iterative convergence. A good initial registration estimate is therefore a prerequisite for ICP-based point cloud registration.

In view of the above analysis, the present invention proposes a real-time 3D reconstruction method for mobile scenes based on PFH and inertial navigation, comprising the following steps:

Step 1: start the system, zero the sensor readings, and establish a fixed coordinate system;

Step 2: acquire the point cloud at the initial pose (i.e., with the carrier at the coordinate origin), compress it, and use it to build the local point cloud model of the scene;

Step 3: obtain pose information (the carrier's position and orientation) in real time from the inertial components (accelerometer and gyroscope); when the pose change exceeds a threshold, acquire a point cloud from the new viewpoint and compress it;

Step 4: compute, from the position change, the initial transformation matrix between the current point cloud and the scene's local point cloud in the fixed coordinate system;

Step 5: perform point cloud registration with the improved ICP algorithm (the initial transformation matrix serves as the ICP initial value);

Step 6: fuse the two registered point clouds, and repeat steps 3 to 6 until reconstruction is judged complete, then stop the experiment.

Fig. 3 shows the flowchart of the PFH-based point cloud compression algorithm for mobile scenes, corresponding to step 2 of the 3D reconstruction pipeline. The specific steps are as follows:

(1) Receive a point cloud Mk from a point cloud acquisition device.

(2) Compute the PFH feature descriptor of each point.

(3) Compute the standard deviation of each PFH feature histogram. Observation and analysis show that points with a small standard deviation are corner points and those with a large standard deviation are plane points, so the standard deviation serves as the classification feature in the simplification criterion.

(4) Determine the standard deviation threshold required for compression. For each point in the cloud, compare its PFH standard deviation with the threshold: keep the point if the value is less than or equal to the threshold, otherwise discard it. Repeat until all points have been processed, and store the retained points in the set Pk, recorded as the compressed point cloud.
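Assuming the PFH histograms have already been computed for each point (in practice via a library such as PCL, which requires surface normals and a neighborhood radius), step (4) reduces to a per-point comparison. A minimal sketch:

```python
import numpy as np

def compress_by_pfh_std(points, pfh_histograms, std_threshold):
    """Keep the points whose PFH histogram standard deviation is at or
    below the threshold (per the criterion above, a small std indicates a
    corner/feature point; a large std indicates a plane point to discard).

    points:          (N, 3) array of XYZ coordinates
    pfh_histograms:  (N, B) array, one B-bin PFH histogram per point
    Returns the compressed cloud P_k as an (M, 3) array, M <= N.
    """
    stds = pfh_histograms.std(axis=1)   # one standard deviation per point
    keep = stds <= std_threshold        # boolean mask of retained points
    return points[keep]
```

The loop over points in the text becomes a single vectorized mask here; the behavior is the same.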

With this compression algorithm, the point cloud compression rate can exceed 95%, saving substantial computation time in the subsequent registration and fusion stages and improving the feasibility of real-time reconstruction. At the same time, by exploiting the marked differences of the PFH descriptor at different locations, the feature points needed for scene reconstruction are retained, facilitating subsequent registration. For the implementation and theoretical basis of this compression algorithm, see the authors' earlier invention patent, "Point Cloud Simplification Algorithm for Mobile Scenes Based on Point Feature Histograms".

The following describes step 4 of the 3D reconstruction pipeline, obtaining the initial transformation matrix from inertial sensing information, with its steps and underlying principle:

(1) Use the inertial components (accelerometer and gyroscope) to obtain the relative displacement and rotation angles between poses:

In this experiment, a cart is placed in the scene, and the sensors that acquire the information, a Kinect depth camera, an accelerometer, and a gyroscope, are mounted on it.

First, zero the sensor readings and define a fixed coordinate system, taking the starting position of the motion as the origin and the camera's initial direction along the positive X axis. The inertial components (accelerometer and gyroscope) measure the carrier's acceleration along the X, Y, and Z axes. As in formulas (1) and (2), one integration yields velocity and a second integration yields position:

v(t) = v(0) + ∫ a(t) dt    (1)

p(t) = p(0) + ∫ v(t) dt    (2)

where a, v, and p denote the acceleration, velocity, and position, respectively. The two integrations give the carrier's velocity and position, thereby localizing the carrier; meanwhile, the gyroscope provides the carrier's rotation angles.

Although errors accumulate over long integration periods, the cart's relative displacement and rotation between any two closely spaced poses can be obtained with very small error. Moreover, the devices making up the positioning system are mounted inside the carrier; they neither depend on external information nor radiate energy during operation, so they are not easily disturbed.
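The double integration described above can be sketched discretely as follows; the rectangle-rule quadrature and the function name are illustrative assumptions (a real IMU pipeline would also subtract gravity and compensate sensor bias):

```python
import numpy as np

def integrate_pose(accel, dt, v0=None, p0=None):
    """Dead-reckon the carrier's velocity and position from accelerometer
    samples: one cumulative integration of the acceleration gives the
    velocity, and integrating the velocity again gives the position.

    accel: (N, 3) accelerations along X, Y, Z; dt: sample interval (s).
    Returns (velocity, position), each an (N, 3) array.
    """
    accel = np.asarray(accel, dtype=float)
    v0 = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float)
    p0 = np.zeros(3) if p0 is None else np.asarray(p0, dtype=float)
    velocity = v0 + np.cumsum(accel * dt, axis=0)    # first integral
    position = p0 + np.cumsum(velocity * dt, axis=0)  # second integral
    return velocity, position
```

For example, a constant 1 m/s² acceleration along X for 1 s yields a final velocity of 1 m/s along X, while Y and Z remain zero.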

(2) Convert the cart's relative displacement and rotation angles between poses into the initial transformation matrices between the two point clouds.
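This conversion can be sketched as below. The Z-Y-X (yaw-pitch-roll) Euler convention is an assumption, since the source does not specify how the gyroscope angles parameterize the rotation:

```python
import numpy as np

def initial_transform(displacement, yaw, pitch=0.0, roll=0.0):
    """Build an initial rotation matrix and translation vector for ICP
    from the carrier's relative displacement and rotation angles (radians).
    Assumes a Z-Y-X Euler angle convention (an illustrative choice)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    R0 = Rz @ Ry @ Rx                            # composed rotation
    T0 = np.asarray(displacement, dtype=float)   # translation between poses
    return R0, T0
```

A point x of the newly acquired cloud is then coarsely mapped into the reference frame as R0 @ x + T0.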

Fig. 4 shows the flowchart of the point cloud registration algorithm based on inertial navigation, corresponding to step 5 of the 3D reconstruction pipeline:

(1) Take the compressed point clouds Pk and Pk-1 from two different viewpoints as the input sets of the ICP algorithm, and take the inertial estimates as the ICP algorithm's initial transformation matrices;

(2) Apply the initial transformation matrices to rotate and translate the compressed reference point cloud Pk, obtaining Pk′;

(3) Select matching point pairs: for each feature point in Pk′, use nearest-neighbor search to find the feature point in the compressed target point cloud Pk-1 with the smallest Euclidean distance, forming a set of feature point pairs;

(4) Compute the error: the sum of squared distances between the two sets of matched feature points after the transformation, denoted dk;

(5) Use the matched point pairs to solve for the rotation and translation matrices of the spatial coordinate transformation with the quaternion method, apply them to the reference point cloud to obtain Pk″, re-select matching point pairs, and update dk.

(6) Iterate the operations in step (5) until convergence, i.e., until dk falls below a threshold or a preset number of iterations is reached.
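Steps (1) to (6) can be sketched as a minimal ICP loop. Two deliberate simplifications relative to the text above: all points are treated as feature points, and the per-iteration R and T are solved with the SVD (Kabsch) closed form rather than the quaternion method; both solve the same least-squares problem. Nearest-neighbor matching is brute force for clarity (a k-d tree would be used in practice):

```python
import numpy as np

def icp(P_ref, P_tgt, R0, T0, max_iter=30, tol=1e-6):
    """Register reference cloud P_ref (N, 3) onto target cloud P_tgt (M, 3),
    starting from the inertial initial guess (R0, T0)."""
    src = (R0 @ P_ref.T).T + T0                        # step (2): coarse alignment
    R_acc, T_acc = R0, T0
    d_prev = np.inf
    for _ in range(max_iter):
        # step (3): nearest-neighbour matching (brute force)
        d2 = ((src[:, None, :] - P_tgt[None, :, :]) ** 2).sum(-1)
        matched = P_tgt[d2.argmin(axis=1)]
        d_k = ((src - matched) ** 2).sum()             # step (4): error d_k
        if abs(d_prev - d_k) < tol:                    # step (6): convergence
            break
        d_prev = d_k
        # step (5): closed-form R, T between the matched sets (Kabsch/SVD)
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                       # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        T = mu_t - R @ mu_s
        src = (R @ src.T).T + T                        # apply and accumulate
        R_acc, T_acc = R @ R_acc, R @ T_acc + T
    return R_acc, T_acc
```

With a good inertial initial guess, the nearest-neighbor correspondences are correct from the first iteration and the loop converges in very few steps, which is exactly the speedup the method claims.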

Claims (3)

CN201510437252.5A, filed 2015-07-23: Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method. Status: Pending. Publication: CN105261060A (en).

Priority Applications (1)

Application Number: CN201510437252.5A; Priority Date: 2015-07-23; Filing Date: 2015-07-23; Title: Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method


Publications (1)

Publication Number: CN105261060A (en); Publication Date: 2016-01-20

Family

ID=55100731

Family Applications (1)

Application Number: CN201510437252.5A; Status: Pending; Publication: CN105261060A (en); Priority Date: 2015-07-23; Filing Date: 2015-07-23

Country Status (1)

Country: CN; Publication: CN105261060A (en)



Cited By (52)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106056664B (en)* | 2016-05-23 | 2018-09-21 | 武汉盈力科技有限公司 | A kind of real-time three-dimensional scene reconstruction system and method based on inertia and deep vision
CN106056664A (en)* | 2016-05-23 | 2016-10-26 | 武汉盈力科技有限公司 | Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
US10309770B2 (en) | 2016-09-14 | 2019-06-04 | Hangzhou Scantech Co., Ltd | Three-dimensional sensor system and three-dimensional data acquisition method
WO2018049843A1 (en)* | 2016-09-14 | 2018-03-22 | 杭州思看科技有限公司 | Three-dimensional sensor system and three-dimensional data acquisition method
CN106529394A (en)* | 2016-09-19 | 2017-03-22 | 广东工业大学 | Indoor scene and object simultaneous recognition and modeling method
CN106529394B (en)* | 2016-09-19 | 2019-07-19 | 广东工业大学 | A Simultaneous Recognition and Modeling Method for Objects in Indoor Scenes
CN106504275A (en)* | 2016-10-12 | 2017-03-15 | 杭州深瞳科技有限公司 | A kind of inertial positioning and the real-time three-dimensional method for reconstructing of point cloud registering coupling and complementing
CN106504275B (en)* | 2016-10-12 | 2019-03-05 | 杭州深瞳科技有限公司 | A kind of real-time three-dimensional method for reconstructing of inertial positioning and point cloud registering coupling and complementing
CN106796728A (en)* | 2016-11-16 | 2017-05-31 | 深圳市大疆创新科技有限公司 | Generate method, device, computer system and the mobile device of three-dimensional point cloud
US11004261B2 (en) | 2016-11-16 | 2021-05-11 | SZ DJI Technology Co., Ltd. | Method, device, computer system, and mobile apparatus for generating three-dimensional point cloud
CN108280866A (en)* | 2016-12-30 | 2018-07-13 | 乐视汽车(北京)有限公司 | Road Processing Method of Point-clouds and system
CN108280866B (en)* | 2016-12-30 | 2021-07-27 | 法法汽车(中国)有限公司 | Road point cloud data processing method and system
CN106898022A (en)* | 2017-01-17 | 2017-06-27 | 徐渊 | A kind of hand-held quick three-dimensional scanning system and method
CN107194964A (en)* | 2017-05-24 | 2017-09-22 | 电子科技大学 | A kind of VR social intercourse systems and its method based on real-time body's three-dimensional reconstruction
CN107194964B (en)* | 2017-05-24 | 2020-10-09 | 电子科技大学 | VR social contact system based on real-time human body three-dimensional reconstruction and method thereof
WO2019033882A1 (en)* | 2017-08-16 | 2019-02-21 | 北京京东尚科信息技术有限公司 | Data processing method, apparatus and system, and computer readable storage medium
CN107610219B (en)* | 2017-08-29 | 2020-03-10 | 武汉大学 | Pixel-level point cloud densification method for sensing geometric clues in three-dimensional scene reconstruction
CN107610219A (en)* | 2017-08-29 | 2018-01-19 | 武汉大学 | The thick densification method of Pixel-level point cloud that geometry clue perceives in a kind of three-dimensional scenic reconstruct
CN107507177A (en)* | 2017-08-30 | 2017-12-22 | 广东工业大学 | Processing of robots object localization method and device based on 3-D scanning
US11818401B2 (en) | 2017-09-14 | 2023-11-14 | Apple Inc. | Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables
US11935272B2 (en) | 2017-09-14 | 2024-03-19 | Apple Inc. | Point cloud compression
CN111133476B (en)* | 2017-09-18 | 2023-11-10 | 苹果公司 | Systems, devices and methods for compressing and decompressing point clouds including multiple points
US11676309B2 (en) | 2017-09-18 | 2023-06-13 | Apple Inc | Point cloud compression using masks
CN111133476A (en)* | 2017-09-18 | 2020-05-08 | 苹果公司 | Point cloud compression
US11922665B2 (en) | 2017-09-18 | 2024-03-05 | Apple Inc. | Point cloud compression
CN107633518A (en)* | 2017-09-26 | 2018-01-26 | 南昌航空大学 | A kind of product design detection method based on Kinect
CN109931923A (en)* | 2017-12-15 | 2019-06-25 | 阿里巴巴集团控股有限公司 | A kind of navigation guide map generalization method and apparatus
CN108345005A (en)* | 2018-02-22 | 2018-07-31 | 重庆大学 | The real-time continuous autonomous positioning orientation system and navigation locating method of tunnelling machine
CN108345005B (en)* | 2018-02-22 | 2020-02-07 | 重庆大学 | Real-time continuous autonomous positioning and orienting system and navigation positioning method of tunnel boring machine
US12100183B2 (en) | 2018-04-10 | 2024-09-24 | Apple Inc. | Point cloud attribute transfer algorithm
US11663744B2 (en) | 2018-07-02 | 2023-05-30 | Apple Inc. | Point cloud compression with adaptive filtering
CN108847121A (en)* | 2018-07-04 | 2018-11-20 | 深圳地平线机器人科技有限公司 | The method and apparatus for constructing high-precision map
US11683525B2 (en) | 2018-07-05 | 2023-06-20 | Apple Inc. | Point cloud compression with multi-resolution video encoding
US12401822B2 (en) | 2018-07-12 | 2025-08-26 | Apple Inc. | Bit stream structure for compressed point cloud data
US11748916B2 (en) | 2018-10-02 | 2023-09-05 | Apple Inc. | Occupancy map block-to-patch information compression
US12094179B2 (en) | 2018-10-05 | 2024-09-17 | Apple Inc. | Quantized depths for projection point cloud compression
US12439083B2 (en) | 2019-07-02 | 2025-10-07 | Apple Inc. | Point cloud compression with supplemental information messages
CN110345936B (en)* | 2019-07-09 | 2021-02-09 | 上海有个机器人有限公司 | Track data processing method and processing system of motion device
CN110345936A (en)* | 2019-07-09 | 2019-10-18 | 上海有个机器人有限公司 | The track data processing method and its processing system of telecontrol equipment
US11895307B2 (en) | 2019-10-04 | 2024-02-06 | Apple Inc. | Block-based predictive coding for point cloud compression
CN111028247A (en)* | 2019-12-12 | 2020-04-17 | 长春工业大学 | Disc type element identification method and system based on point cloud loss
US11798196B2 (en) | 2020-01-08 | 2023-10-24 | Apple Inc. | Video-based point cloud compression with predicted patches
CN111784766B (en)* | 2020-06-08 | 2024-05-24 | 易思维(杭州)科技股份有限公司 | Method for calculating pose of threaded target object
CN111784766A (en)* | 2020-06-08 | 2020-10-16 | 易思维(杭州)科技有限公司 | Method for calculating pose of threaded target object
CN111784835A (en)* | 2020-06-28 | 2020-10-16 | 北京百度网讯科技有限公司 | Drawing method, apparatus, electronic device and readable storage medium
CN111784835B (en)* | 2020-06-28 | 2024-04-12 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium
US11948338B1 (en) | 2021-03-29 | 2024-04-02 | Apple Inc. | 3D volumetric content encoding using 2D videos and simplified 3D meshes
CN114565689B (en)* | 2022-02-28 | 2024-02-02 | 燕山大学 | Axisymmetric three-dimensional model data compression reconstruction method
CN114565689A (en)* | 2022-02-28 | 2022-05-31 | 燕山大学 | Axial symmetry three-dimensional model data compression reconstruction method
CN114820604A (en)* | 2022-06-27 | 2022-07-29 | 四川大学 | Method and device for splicing blade profile data based on distance loss of nearest neighbors
CN116342671B (en)* | 2023-05-23 | 2023-08-08 | 第六镜科技(成都)有限公司 | Point cloud and CAD model registration method, device, electronic equipment and storage medium
CN116342671A (en)* | 2023-05-23 | 2023-06-27 | 第六镜科技(成都)有限公司 | Point cloud and CAD model registration method, device, electronic equipment and storage medium

Similar Documents

Publication | Title
CN105261060A (en) | Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN110189399B (en) | Indoor three-dimensional layout reconstruction method and system
CN106446815B (en) | A Simultaneous Localization and Map Construction Method
CN112734765B (en) | Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN110176032B (en) | Three-dimensional reconstruction method and device
CN102848389B (en) | Realization method for mechanical arm calibrating and tracking system based on visual motion capture
CN100388319C | Multi-view pose estimation and self-calibration method for 3D active vision sensor
CN109186606B (en) | Robot composition and navigation method based on SLAM and image information
CN107240129A (en) | Object and indoor small scene based on RGB-D camera data recovery and modeling method
CN109272537A (en) | Panoramic point cloud registration method based on structured light
CN104077809B (en) | Visual SLAM method based on structural lines
JP6483832B2 (en) | Method and system for scanning an object using an RGB-D sensor
CN102693543B (en) | Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments
CN112767482B (en) | Indoor and outdoor positioning method and system with multi-sensor fusion
CN113960614A (en) | Elevation map construction method based on frame-map matching
CN112833892A (en) | Semantic mapping method based on track alignment
TW202238449A (en) | Indoor positioning system and indoor positioning method
CN114972668A (en) | Laser SLAM method and system based on height information
CN116563377A (en) | A Martian Rock Measurement Method Based on Hemispherical Projection Model
CN114529585A (en) | Mobile equipment autonomous positioning method based on depth vision and inertial measurement
CN114782639A (en) | Rapid differential latent AGV dense three-dimensional reconstruction method based on multi-sensor fusion
CN116958418A (en) | High-precision three-dimensional mapping method for indoor multi-layer building fused with BEV features
Ming et al. | Refined 3D modeling of complex models based on stereo vision
CN117095130A (en) | Three-dimensional modeling method and system thereof

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2016-01-20

