Technical Field
The present invention relates to the technical field of point cloud correction, and in particular to a digital point cloud correction processing method for the whole construction process.
Background Art
In modern construction, building tasks are currently completed mainly through manual calibration and automated mechanical filling: construction workers first calibrate the building skeleton offline, and filling equipment then moves to the designated position to complete the task. However, the target workpieces on industrial sites are mostly planar structures with weak geometric contours, that is, measured objects dominated by planar features, with simple contour geometry, limited variation of the surface normal vectors and little surface texture. At the same time, influencing factors such as the structural parameters of the filling equipment itself, deviations in the workpiece calibration results and open-loop system control make it impossible to complete construction tasks under strict tolerances, so that the building deviates slightly from the intended design. Although such slight deviations have little effect in low-rise buildings, they accumulate as the building height increases and therefore pose a certain safety hazard. To address this problem, the present invention proposes a digital point cloud correction processing method for the whole construction process, achieving precise error control and correction throughout construction.
Summary of the Invention
In view of this, the present invention provides a digital point cloud correction processing method for the whole construction process, whose purposes are: 1) based on the three-dimensional point cloud data of the current state of the target building, to establish the mapping between the pixel coordinates captured by the camera and the world coordinates, convert the three-dimensional point cloud data from the camera coordinate system to the world coordinate system, and obtain, by the same method, the transformation parameters between the three-dimensional point cloud data and the three-dimensional data of the architectural design target; the similarity between the two is computed by combining the polar angles of the three-dimensional point cloud coordinates and the polar angles of the design-target three-dimensional coordinates in a polar coordinate system, so as to characterize the directional difference of the three-dimensional data; if the similarity result is smaller than a specified threshold, the current building state differs considerably from the design target, so the three-dimensional point cloud data of the current building state is corrected based on the transformation parameters and the current building is corrected accordingly, realizing digital point cloud correction processing over the whole construction process; 2) by constructing a three-dimensional point cloud data enhancement model, to sample the point cloud data at different resolutions, encode the sampling results at each resolution to form multi-level encoding features, and then perceive and up-sample the multi-level encoding features to obtain the missing point cloud coordinates at the different resolutions; the missing point cloud coordinates at the different resolutions are combined with the original point cloud coordinates to form a completed and enhanced set of three-dimensional point cloud coordinates, realizing completion and enhancement of missing point cloud data.
To achieve the above objects, the present invention provides a digital point cloud correction processing method for the whole construction process, comprising the following steps:
S1: collecting three-dimensional point cloud data of the current state of a target building, and performing coordinate transformation on the collected data to obtain three-dimensional point cloud data in the world coordinate system;
S2: constructing a three-dimensional point cloud data enhancement model, which takes the three-dimensional point cloud data in the world coordinate system as input and outputs enhanced three-dimensional point cloud data;
S3: determining the optimization objective function of the three-dimensional point cloud data enhancement model and performing optimization training to obtain the optimal three-dimensional point cloud data enhancement model;
S4: enhancing the coordinate-transformed three-dimensional point cloud data with the optimal three-dimensional point cloud data enhancement model, computing the similarity between the enhanced three-dimensional point cloud data and the three-dimensional data of the architectural design target, and computing the transformation parameters;
S5: if the similarity calculation result is smaller than a specified threshold, correcting the three-dimensional point cloud data of the current building state based on the transformation parameters, and performing the corresponding correction processing on the current building according to the correction result.
As further improvements of the present invention:
Optionally, collecting the three-dimensional point cloud data of the current state of the target building in step S1 includes:
collecting the three-dimensional point cloud data of the current state of the target building with a camera, the camera consisting of an RGB color camera and a depth camera, the RGB color camera acquiring the color information of the target building and the depth camera acquiring its depth information; the image of the target building captured by the camera is I, and the formula for converting any pixel I(x, y) in image I into three-dimensional point cloud coordinates is expressed in terms of the following quantities:
g(x, y, z) denotes the three-dimensional point cloud coordinates corresponding to pixel I(x, y), where pixel I(x, y) is the pixel in row x and column y of the target building image I;
cX denotes the principal point of the camera in the horizontal direction, and cY the principal point in the vertical direction;
fX denotes the focal length of the camera in the horizontal direction, and fY the focal length in the vertical direction;
d_{x,y} denotes the depth information of pixel I(x, y) captured by the depth camera;
the RGB color camera is used to obtain the color values of each pixel in the RGB color channels, forming the three-dimensional point cloud data; the three-dimensional point cloud data of pixel I(x, y) is {g(x, y, z), R(x, y), G(x, y), B(x, y)}, where R(x, y), G(x, y) and B(x, y) denote the color values of pixel I(x, y) in the RGB color channels.
In an embodiment of the present invention, the principal point and the focal length of the camera are internal camera parameters, that is, known parameter information.
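The conversion formula itself is not written out in this excerpt. As a hedged illustration, the sketch below applies the standard pinhole back-projection that is consistent with the quantities defined above (the principal point cX, cY, the focal lengths fX, fY and the depth d_{x,y}); the exact row/column-to-axis convention and the use of NumPy are assumptions, not parts of the patent text.

```python
import numpy as np

def backproject_rgbd(depth, rgb, fx, fy, cx, cy):
    """Convert an aligned depth/colour image pair into an (N, 6) point cloud.

    depth : (H, W) array of depth values d_{x,y}.
    rgb   : (H, W, 3) array of colour values R, G, B.
    Returns rows of (X, Y, Z, R, G, B); pixels without a valid depth are dropped.
    """
    h, w = depth.shape
    # Pixel grid: u runs along columns (horizontal), v along rows (vertical).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # horizontal axis (assumed convention)
    y = (v - cy) * z / fy          # vertical axis (assumed convention)
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3).astype(np.float64)
    valid = pts[:, 2] > 0          # keep pixels with a measured depth only
    return np.hstack([pts[valid], cols[valid]])
```

Each returned row corresponds to one point cloud datum {g(x, y, z), R(x, y), G(x, y), B(x, y)} as described above.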
Optionally, performing coordinate transformation on the collected three-dimensional point cloud data in step S1 to obtain three-dimensional point cloud data in the world coordinate system includes:
performing coordinate transformation on the collected three-dimensional point cloud data, where the objects of the transformation are the three-dimensional point cloud coordinates in the data, and transforming the three-dimensional point cloud coordinates into the world coordinate system; the coordinate system transformation formula for a three-dimensional point cloud coordinate is:
g′(x, y, z) = Rg(x, y, z)ᵀ + H
where:
R denotes the rotation matrix between the current point cloud coordinate system and the world coordinate system, and H denotes the translation matrix;
the superscript T denotes transposition;
g′(x, y, z) denotes the coordinate transformation result of the three-dimensional point cloud coordinate g(x, y, z).
The calculation flow of the rotation matrix and the translation matrix is as follows:
S11: obtaining, with the camera, N three-dimensional point cloud coordinates whose coordinates in the world coordinate system are known:
{(p_i, q_i) | i ∈ [1, N]}
where:
p_i denotes the i-th three-dimensional point cloud coordinate captured by the camera, and q_i denotes the coordinate of p_i in the world coordinate system;
S12: standardizing the coordinates to obtain the standardized results of p_i and q_i respectively;
S13: arranging the standardized coordinates into a matrix P and a matrix Q respectively;
S14: computing the transformation matrices A and B:
A = PᵀQ
B = PQᵀ
S15: computing the N eigenvalues and eigenvectors of the transformation matrices A and B respectively, and sorting the eigenvectors in descending order of their eigenvalues, where the N eigenvectors of transformation matrix A are:
(α_1, α_2, ..., α_i, ..., α_N)
where:
α_i denotes the i-th eigenvector of transformation matrix A;
and the N eigenvectors of transformation matrix B are:
(β_1, β_2, ..., β_i, ..., β_N)
where:
β_i denotes the i-th eigenvector of transformation matrix B;
S16: arranging the eigenvectors into matrix forms α and β respectively:
α = [α_1, α_2, ..., α_i, ..., α_N]
β = [β_1, β_2, ..., β_i, ..., β_N]
S17: computing the rotation matrix R between the current point cloud coordinate system and the world coordinate system, and the translation matrix H:
R = αβᵀ
H = Q - RP
The three-dimensional point cloud coordinates in the three-dimensional point cloud data are transformed into the world coordinate system using the computed rotation matrix R and translation matrix H.
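The standardization formulas of S12 and the exact eigen-decomposition of S15 to S17 are not reproduced in this excerpt. As a hedged reference point only, the sketch below estimates a rotation R and translation H from the same matched pairs {(p_i, q_i)} using the SVD-based Kabsch procedure, a common alternative to the eigenvector construction described above; centroid subtraction stands in for the unspecified standardization step, and the function names are illustrative.

```python
import numpy as np

def estimate_rigid_transform(p, q):
    """Estimate R and H such that q_i is approximately R @ p_i + H.

    p, q : (N, 3) arrays of matched camera-frame and world-frame coordinates.
    """
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)   # stand-in for the S12 standardization
    P, Q = p - p_bar, q - q_bar                     # centred coordinate matrices (cf. S13)
    A = P.T @ Q                                     # 3x3 cross-covariance (cf. A in S14)
    U, _, Vt = np.linalg.svd(A)                     # SVD in place of the eigen step
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    H = q_bar - R @ p_bar                           # translation (cf. H = Q - RP)
    return R, H

def to_world(points_cam, R, H):
    """Apply g'(x, y, z) = R g(x, y, z) + H to every camera-frame point."""
    return points_cam @ R.T + H
```

The comments tie each line back to the step it approximates; if the eigenvector-based construction of S14 to S17 is required verbatim, the SVD call would need to be replaced accordingly.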
Optionally, constructing the three-dimensional point cloud data enhancement model in step S2 includes:
constructing a three-dimensional point cloud data enhancement model that takes the three-dimensional point cloud data in the world coordinate system as input and outputs the enhanced three-dimensional point cloud data;
the constructed three-dimensional point cloud data enhancement model comprises an input layer, a multi-level encoding layer and a decoding point cloud enhancement layer, where the input layer receives the three-dimensional point cloud data and performs multi-resolution down-sampling on it, the multi-level encoding layer encodes the sampling results at the different resolutions to form multi-level encoding features, and the decoding point cloud enhancement layer decodes the multi-level encoding features and completes the missing parts of the sampling results at the different resolutions, yielding the enhanced three-dimensional point cloud data.
The enhancement flow based on the three-dimensional point cloud data enhancement model is as follows (a code sketch of the sampling and encoding steps follows this procedure):
S21: taking the three-dimensional point cloud data Data in the world coordinate system as input, where Data contains D groups of three-dimensional point cloud coordinates in the world coordinate system together with the color values corresponding to those coordinates;
S22: the input layer samples the D groups of three-dimensional point cloud coordinates at resolutions of D/2, D/4 and D/8, obtaining three groups of sampling results, where the sampling flow at resolution D/2 is:
S221: selecting one coordinate from the D groups of three-dimensional point cloud coordinates as the starting point, constructing a sampling point set, and adding the starting point to it;
S222: computing the distance from each coordinate outside the sampling point set to the starting point, and adding the coordinate with the largest distance to the sampling point set;
S223: computing the nearest distance from each coordinate outside the sampling point set to any coordinate inside the sampling point set, and adding the coordinate with the largest nearest distance to the sampling point set;
S224: repeating steps S222 to S223 until the number of sampling points in the sampling point set reaches D/2;
S23: the multi-level encoding layer uses a multi-layer perceptron to raise the dimension of each group of sampling results until each sampled coordinate reaches 128 dimensions, and uses a max-pooling layer to extract the highest-dimensional feature map of each group of sampling results, each feature map being 128-dimensional; the feature map feature_{D/2} under D/2-resolution sampling is:
feature_{D/2} = Pooling(MLE(sample_{D/2}(1)) || ... || MLE(sample_{D/2}(D/2)))
where:
|| denotes concatenation of the multi-layer perceptron processing results;
MLE(sample_{D/2}(D/2)) denotes the dimension-raised result of the (D/2)-th sample under D/2-resolution sampling, MLE denoting the multi-layer perceptron;
Pooling(·) denotes the pooling operation: for the sampling results at resolution D/2, the concatenated dimension-raised result MLE(sample_{D/2}(1)) || ... || MLE(sample_{D/2}(D/2)), of size 128×3 after dimension raising, is pooled to 128 dimensions, yielding the feature map feature_{D/2};
S24: the multi-level encoding layer applies several independent convolution layers to the feature maps of the three groups of sampling results, assigns different weights to the three convolution results, and concatenates the weighted results to obtain the multi-level encoding features; in an embodiment of the present invention, the weight of the feature map under D/2-resolution sampling is 0.6, and the weights of the feature maps under D/4-resolution and D/8-resolution sampling are each 0.2;
S25: the decoding point cloud enhancement layer contains several multi-layer perceptrons with different parameters, which reconstruct and up-sample the multi-level encoding features to obtain the missing point cloud coordinates at the different resolutions; the missing point cloud coordinates at the different resolutions are combined with the original point cloud coordinates to form the completed and enhanced set of three-dimensional point cloud coordinates, the color value of each missing point cloud coordinate being the mean of the color values of its neighboring three-dimensional point cloud coordinates, yielding the enhanced three-dimensional point cloud data.
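As referenced above, the following sketch illustrates the farthest point sampling of S221 to S224 and the per-point dimension raising, max-pooling and weighted fusion of S23 and S24. The use of PyTorch, the hidden layer size of 64 and the class and function names are assumptions made for illustration; only the structure (multi-resolution sampling, lifting each sampled coordinate to 128 dimensions, max-pooling to one 128-dimensional feature map per resolution, and the 0.6/0.2/0.2 weighting) follows the description, and the decoding point cloud enhancement layer of S25 is not sketched here.

```python
import torch
import torch.nn as nn

def farthest_point_sample(xyz, n_samples):
    """S221-S224: repeatedly add the point farthest from the current sample set."""
    n = xyz.shape[0]
    chosen = torch.zeros(n_samples, dtype=torch.long)
    nearest = torch.full((n,), float("inf"))
    chosen[0] = 0                                   # starting point (S221)
    for k in range(1, n_samples):
        d = torch.norm(xyz - xyz[chosen[k - 1]], dim=1)
        nearest = torch.minimum(nearest, d)         # nearest distance to the sample set (S223)
        chosen[k] = torch.argmax(nearest)           # point with the largest nearest distance
    return xyz[chosen]

class MultiLevelEncoder(nn.Module):
    """S23-S24: per-point MLP to 128 dims, max-pool per resolution, weighted fusion."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.convs = nn.ModuleList(nn.Conv1d(128, 128, 1) for _ in range(3))
        self.weights = (0.6, 0.2, 0.2)              # D/2, D/4, D/8 weights from S24

    def forward(self, xyz):
        d = xyz.shape[0]
        feats = []
        for conv, w, ratio in zip(self.convs, self.weights, (2, 4, 8)):
            sample = farthest_point_sample(xyz, max(d // ratio, 1))
            lifted = self.mlp(sample)               # (D/ratio, 128) dimension-raised samples
            pooled = lifted.max(dim=0).values       # 128-dimensional feature map (S23)
            feats.append(w * conv(pooled.view(1, 128, 1)).flatten())
        return torch.cat(feats)                     # concatenated multi-level encoding feature
```

A decoder for S25 would map this fused feature back to missing point coordinates with further multi-layer perceptrons; its layer sizes are not specified in the excerpt and are therefore left out of the sketch.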
Optionally, determining the optimization objective function of the three-dimensional point cloud data enhancement model and performing optimization training in step S3 includes:
determining the optimization objective function of the three-dimensional point cloud data enhancement model, and optimizing and training the model based on the objective function to obtain the trained model parameters, where the parameters to be trained are the multi-layer perceptron and convolution layer parameters θ_1 of the multi-level encoding layer and the multi-layer perceptron parameters θ_2 of the decoding point cloud enhancement layer, θ_1 and θ_2 being parameter vectors;
obtaining a set of three-dimensional point cloud data with missing point cloud data, and obtaining the set of true three-dimensional point cloud coordinates of the missing point cloud data; the determined optimization objective function is expressed in terms of the following quantities:
Ω denotes the set of true three-dimensional point cloud coordinates of the missing point cloud data, num(Ω) denotes the number of missing point cloud data, and num denotes the number of missing point clouds produced by the three-dimensional point cloud data enhancement model;
loc_c denotes the true three-dimensional point cloud coordinate of the c-th missing point cloud datum;
the missing point cloud coordinate output by the three-dimensional point cloud data enhancement model that is nearest to loc_c also enters the objective function;
dis(·) denotes the distance calculation formula;
ε denotes a proportional weight, which is set to 0.2.
Based on the optimization objective function, the three-dimensional point cloud data enhancement model is trained with the Adam optimizer to obtain the trained model parameters, and the optimal three-dimensional point cloud data enhancement model is constructed based on the trained model parameters.
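The objective function itself is not written out in this excerpt. Based only on the symbols defined above, one plausible hedged reading is the mean nearest-neighbour distance between each true missing coordinate loc_c and the closest coordinate output by the model, plus an ε-weighted penalty on the mismatch between num and num(Ω). The sketch below implements that reading and should be adjusted wherever the original formula differs; Euclidean distance is assumed for dis(·).

```python
import numpy as np

def completion_loss(pred, gt_missing, eps=0.2):
    """Hedged reconstruction of the S3 objective.

    pred       : (num, 3) missing-point coordinates produced by the enhancement model.
    gt_missing : (num(Omega), 3) true coordinates of the missing points (the set Omega).
    """
    # dis(loc_c, nearest predicted coordinate), averaged over the true missing points.
    diff = gt_missing[:, None, :] - pred[None, :, :]       # (num(Omega), num, 3)
    nearest = np.linalg.norm(diff, axis=-1).min(axis=1)    # per-point nearest distance
    coverage_term = nearest.mean()
    # Assumed epsilon-weighted penalty on predicting too few or too many points.
    count_term = eps * abs(len(pred) - len(gt_missing)) / max(len(gt_missing), 1)
    return coverage_term + count_term
```

In training, this scalar would be minimized with the Adam optimizer over the parameter vectors θ_1 and θ_2, as stated above.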
Optionally, enhancing the coordinate-transformed three-dimensional point cloud data with the optimal three-dimensional point cloud data enhancement model in step S4 includes:
inputting the coordinate-transformed three-dimensional point cloud data obtained in step S1 into the optimal three-dimensional point cloud data enhancement model, which outputs the enhanced three-dimensional point cloud data; the enhanced three-dimensional point cloud data set is:
{r_m | m ∈ [1, M]}
where:
r_m denotes the m-th item of three-dimensional point cloud data output by the optimal point cloud enhancement model, each item comprising three-dimensional point cloud coordinates and three-dimensional point cloud color values, and M denotes the number of three-dimensional point cloud data items output by the optimal point cloud enhancement model.
Optionally, computing the similarity between the enhanced three-dimensional point cloud data and the three-dimensional data of the architectural design target and computing the transformation parameters in step S4 includes:
computing the similarity between the enhanced three-dimensional point cloud data and the three-dimensional data of the architectural design target, and computing the transformation parameters, where the similarity calculation flow is as follows (a code sketch of the polar-angle comparison follows this flow):
S41: converting the three-dimensional point cloud coordinates in the three-dimensional point cloud data set into a number of Delaunay triangulations using the Delaunay triangulation method, and converting the three-dimensional data of the architectural design target into a number of Delaunay triangulations in the same way;
S42: extracting the three-dimensional point cloud coordinates from the three-dimensional point cloud data, where the three-dimensional point cloud coordinate in r_m is L_m;
S43: taking any three-dimensional point cloud coordinate L_m as a vertex, obtaining the groups of points directly connected to L_m; then taking L_m as the pole and the direction of the ray from L_m to an arbitrary directly connected point as the polar axis, computing the polar angles of the rays formed by the pole L_m and the different directly connected points, and obtaining the set of polar angles corresponding to the pole L_m;
S44: selecting the three-dimensional coordinate point L′_m in the three-dimensional data of the architectural design target that is closest to the three-dimensional point cloud coordinate L_m; taking L′_m as the pole and the direction of the ray from L′_m to an arbitrary directly connected point as the polar axis, computing the polar angles of the rays formed by the pole L′_m and the different directly connected points, and obtaining the set of polar angles corresponding to the pole L′_m;
S45: computing the cosine similarity between the polar angle set corresponding to the pole L_m and the polar angle set corresponding to the pole L′_m;
S46: repeating steps S43 to S45 to obtain the cosine similarities between all three-dimensional point cloud coordinates in the three-dimensional point cloud data and the corresponding three-dimensional coordinates of the architectural design target, and taking the smallest cosine similarity as the similarity calculation result.
The coordinate matching set of three-dimensional point cloud coordinates and design-target three-dimensional coordinates is constructed:
{(L_m, L′_m) | m ∈ [1, M]}
and, following the calculation flow of the rotation matrix and translation matrix in step S1, the rotation matrix and translation matrix between the three-dimensional point cloud coordinates and the design-target three-dimensional coordinates are computed, the calculation results being taken as the transformation parameters.
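A minimal sketch of the polar-angle comparison of S43 to S45 follows. It assumes that the directly connected neighbours of each vertex have already been extracted from the Delaunay triangulation (for example, from the edges of the simplices returned by scipy.spatial.Delaunay), that the polar angle of each ray is measured against the ray to the first listed neighbour, and that the two angle sets are sorted and truncated to a common length so the cosine similarity is well defined; these choices are one reading of the description, not the patent's exact procedure.

```python
import numpy as np

def polar_angles(pole, neighbors):
    """Angles between the polar axis (pole -> first neighbour) and every connected ray."""
    rays = np.asarray(neighbors, dtype=float) - np.asarray(pole, dtype=float)
    rays = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    axis = rays[0]                                  # polar axis: ray to an arbitrary neighbour
    cosines = np.clip(rays @ axis, -1.0, 1.0)
    return np.sort(np.arccos(cosines))              # sorted so the two sets are comparable

def direction_similarity(pole_a, nbrs_a, pole_b, nbrs_b):
    """Cosine similarity of the two polar-angle sets (S45)."""
    a, b = polar_angles(pole_a, nbrs_a), polar_angles(pole_b, nbrs_b)
    k = min(len(a), len(b))                         # truncate to a common length (assumption)
    a, b = a[:k], b[:k]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 1.0
```

Repeating this over every matched pair (L_m, L′_m) and keeping the minimum value reproduces the S46 similarity result.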
Optionally, correcting the three-dimensional point cloud data of the current building state based on the transformation parameters when the similarity calculation result is smaller than the specified threshold in step S5 includes:
if the similarity calculation result is smaller than the specified threshold, correcting the three-dimensional point cloud data of the current building state based on the transformation parameters, and performing the corresponding correction processing on the current building according to the correction result.
To solve the above problems, the present invention further provides an electronic device, comprising:
a memory storing at least one instruction;
a communication interface for implementing communication of the electronic device; and
a processor executing the instructions stored in the memory to implement the digital point cloud correction processing method for the whole construction process described above.
To solve the above problems, the present invention further provides a computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the digital point cloud correction processing method for the whole construction process described above.
Compared with the prior art, the present invention proposes a digital point cloud correction processing method for the whole construction process, which has the following advantages:
First, this scheme proposes a three-dimensional coordinate mapping and similarity measurement method. Coordinate transformation is performed on the collected three-dimensional point cloud data, the objects of the transformation being the three-dimensional point cloud coordinates in the data, which are transformed into the world coordinate system; the coordinate system transformation formula for a three-dimensional point cloud coordinate is:
g′(x, y, z) = Rg(x, y, z)ᵀ + H
where R denotes the rotation matrix between the current point cloud coordinate system and the world coordinate system, H denotes the translation matrix, the superscript T denotes transposition, and g′(x, y, z) denotes the coordinate transformation result of the three-dimensional point cloud coordinate g(x, y, z). The calculation flow of the rotation matrix and the translation matrix is: N three-dimensional point cloud coordinates whose coordinates in the world coordinate system are known are obtained with the camera:
{(p_i, q_i) | i ∈ [1, N]}
where p_i denotes the i-th three-dimensional point cloud coordinate captured by the camera and q_i denotes the coordinate of p_i in the world coordinate system; the coordinates are standardized, and the standardized coordinates are arranged into a matrix P and a matrix Q respectively; the transformation matrices A and B are computed:
A = PᵀQ
B = PQᵀ
the N eigenvalues and eigenvectors of A and B are computed respectively and the eigenvectors are sorted in descending order of their eigenvalues, the N eigenvectors of A being (α_1, α_2, ..., α_i, ..., α_N), where α_i denotes the i-th eigenvector of A, and the N eigenvectors of B being (β_1, β_2, ..., β_i, ..., β_N), where β_i denotes the i-th eigenvector of B; the eigenvectors are arranged into matrix forms α and β respectively:
α = [α_1, α_2, ..., α_i, ..., α_N]
β = [β_1, β_2, ..., β_i, ..., β_N]
and the rotation matrix R between the current point cloud coordinate system and the world coordinate system and the translation matrix H are computed:
R = αβᵀ
H = Q - RP
The computed rotation matrix R and translation matrix H are used to transform the three-dimensional point cloud coordinates in the three-dimensional point cloud data into the world coordinate system. The three-dimensional point cloud coordinates in the three-dimensional point cloud data set are converted into a number of Delaunay triangulations using the Delaunay triangulation method, and the three-dimensional data of the architectural design target is converted into a number of Delaunay triangulations in the same way; the three-dimensional point cloud coordinates are extracted from the three-dimensional point cloud data, the coordinate in r_m being L_m; taking any three-dimensional point cloud coordinate L_m as a vertex, the groups of points directly connected to L_m are obtained, and with L_m as the pole and the direction of the ray from L_m to an arbitrary directly connected point as the polar axis, the polar angles of the rays formed by the pole L_m and the different directly connected points are computed to obtain the polar angle set corresponding to L_m; the three-dimensional coordinate point L′_m in the design-target data closest to L_m is selected, and with L′_m as the pole and the direction of the ray from L′_m to an arbitrary directly connected point as the polar axis, the polar angles of the rays formed by the pole L′_m and the different directly connected points are computed to obtain the polar angle set corresponding to L′_m; the cosine similarity between the polar angle set of L_m and that of L′_m is computed; the cosine similarities between all three-dimensional point cloud coordinates and the corresponding design-target coordinates are obtained, and the smallest cosine similarity is taken as the similarity calculation result. Based on the three-dimensional point cloud data of the current state of the target building, this scheme establishes the mapping between the pixel coordinates captured by the camera and the world coordinates, converts the three-dimensional point cloud data from the camera coordinate system to the world coordinate system, and obtains, by the same method, the transformation parameters between the three-dimensional point cloud data and the three-dimensional data of the architectural design target; the similarity between the two is computed by combining the polar angles of the three-dimensional point cloud coordinates and the polar angles of the design-target coordinates in a polar coordinate system, so as to characterize the directional difference of the three-dimensional data. If the similarity result is smaller than the specified threshold, the current building state differs considerably from the design target, so the three-dimensional point cloud data of the current building state is corrected based on the transformation parameters and the current building is corrected accordingly, realizing digital point cloud correction processing over the whole construction process.
At the same time, this scheme proposes a three-dimensional point cloud completion and enhancement method by constructing a three-dimensional point cloud data enhancement model that takes the three-dimensional point cloud data in the world coordinate system as input and outputs the enhanced three-dimensional point cloud data. The constructed model comprises an input layer, a multi-level encoding layer and a decoding point cloud enhancement layer, where the input layer receives the three-dimensional point cloud data and performs multi-resolution down-sampling on it, the multi-level encoding layer encodes the sampling results at the different resolutions to form multi-level encoding features, and the decoding point cloud enhancement layer decodes the multi-level encoding features and completes the missing parts of the sampling results at the different resolutions, yielding the enhanced three-dimensional point cloud data. By sampling the point cloud data at different resolutions, encoding the sampling results at each resolution to form multi-level encoding features, and then perceiving and up-sampling these features, the missing point cloud coordinates at the different resolutions are obtained; combining the missing point cloud coordinates at the different resolutions with the original point cloud coordinates forms the completed and enhanced set of three-dimensional point cloud coordinates, realizing completion and enhancement of missing point cloud data.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a digital point cloud correction processing method for the whole construction process provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an electronic device for implementing the digital point cloud correction processing method for the whole construction process provided by an embodiment of the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.
An embodiment of the present application provides a digital point cloud correction processing method for the whole construction process. The executing subject of the method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the method may be executed by software or hardware installed on a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Embodiment 1:
S1: collecting three-dimensional point cloud data of the current state of a target building, and performing coordinate transformation on the collected data to obtain three-dimensional point cloud data in the world coordinate system.
Collecting the three-dimensional point cloud data of the current state of the target building in step S1 includes:
collecting the three-dimensional point cloud data of the current state of the target building with a camera, the camera consisting of an RGB color camera and a depth camera, the RGB color camera acquiring the color information of the target building and the depth camera acquiring its depth information; the image of the target building captured by the camera is I, and the formula for converting any pixel I(x, y) in image I into three-dimensional point cloud coordinates is expressed in terms of the following quantities:
g(x, y, z) denotes the three-dimensional point cloud coordinates corresponding to pixel I(x, y), where pixel I(x, y) is the pixel in row x and column y of the target building image I;
cX denotes the principal point of the camera in the horizontal direction, and cY the principal point in the vertical direction;
fX denotes the focal length of the camera in the horizontal direction, and fY the focal length in the vertical direction;
d_{x,y} denotes the depth information of pixel I(x, y) captured by the depth camera;
the RGB color camera is used to obtain the color values of each pixel in the RGB color channels, forming the three-dimensional point cloud data; the three-dimensional point cloud data of pixel I(x, y) is {g(x, y, z), R(x, y), G(x, y), B(x, y)}, where R(x, y), G(x, y) and B(x, y) denote the color values of pixel I(x, y) in the RGB color channels.
In an embodiment of the present invention, the principal point and the focal length of the camera are internal camera parameters, that is, known parameter information.
Performing coordinate transformation on the collected three-dimensional point cloud data in step S1 to obtain three-dimensional point cloud data in the world coordinate system includes:
performing coordinate transformation on the collected three-dimensional point cloud data, where the objects of the transformation are the three-dimensional point cloud coordinates in the data, and transforming the three-dimensional point cloud coordinates into the world coordinate system; the coordinate system transformation formula for a three-dimensional point cloud coordinate is:
g′(x, y, z) = Rg(x, y, z)ᵀ + H
where:
R denotes the rotation matrix between the current point cloud coordinate system and the world coordinate system, and H denotes the translation matrix;
the superscript T denotes transposition;
g′(x, y, z) denotes the coordinate transformation result of the three-dimensional point cloud coordinate g(x, y, z).
The calculation flow of the rotation matrix and the translation matrix is as follows:
S11: obtaining, with the camera, N three-dimensional point cloud coordinates whose coordinates in the world coordinate system are known:
{(p_i, q_i) | i ∈ [1, N]}
where:
p_i denotes the i-th three-dimensional point cloud coordinate captured by the camera, and q_i denotes the coordinate of p_i in the world coordinate system;
S12: standardizing the coordinates to obtain the standardized results of p_i and q_i respectively;
S13: arranging the standardized coordinates into a matrix P and a matrix Q respectively;
S14: computing the transformation matrices A and B:
A = PᵀQ
B = PQᵀ
S15: computing the N eigenvalues and eigenvectors of the transformation matrices A and B respectively, and sorting the eigenvectors in descending order of their eigenvalues, where the N eigenvectors of transformation matrix A are:
(α_1, α_2, ..., α_i, ..., α_N)
where:
α_i denotes the i-th eigenvector of transformation matrix A;
and the N eigenvectors of transformation matrix B are:
(β_1, β_2, ..., β_i, ..., β_N)
where:
β_i denotes the i-th eigenvector of transformation matrix B;
S16: arranging the eigenvectors into matrix forms α and β respectively:
α = [α_1, α_2, ..., α_i, ..., α_N]
β = [β_1, β_2, ..., β_i, ..., β_N]
S17: computing the rotation matrix R between the current point cloud coordinate system and the world coordinate system, and the translation matrix H:
R = αβᵀ
H = Q - RP
The three-dimensional point cloud coordinates in the three-dimensional point cloud data are transformed into the world coordinate system using the computed rotation matrix R and translation matrix H.
S2: constructing a three-dimensional point cloud data enhancement model, which takes the three-dimensional point cloud data in the world coordinate system as input and outputs enhanced three-dimensional point cloud data.
Constructing the three-dimensional point cloud data enhancement model in step S2 includes:
constructing a three-dimensional point cloud data enhancement model that takes the three-dimensional point cloud data in the world coordinate system as input and outputs the enhanced three-dimensional point cloud data;
the constructed three-dimensional point cloud data enhancement model comprises an input layer, a multi-level encoding layer and a decoding point cloud enhancement layer, where the input layer receives the three-dimensional point cloud data and performs multi-resolution down-sampling on it, the multi-level encoding layer encodes the sampling results at the different resolutions to form multi-level encoding features, and the decoding point cloud enhancement layer decodes the multi-level encoding features and completes the missing parts of the sampling results at the different resolutions, yielding the enhanced three-dimensional point cloud data.
The enhancement flow based on the three-dimensional point cloud data enhancement model is as follows:
S21: taking the three-dimensional point cloud data Data in the world coordinate system as input, where Data contains D groups of three-dimensional point cloud coordinates in the world coordinate system together with the color values corresponding to those coordinates;
S22: the input layer samples the D groups of three-dimensional point cloud coordinates at resolutions of D/2, D/4 and D/8, obtaining three groups of sampling results, where the sampling flow at resolution D/2 is:
S221: selecting one coordinate from the D groups of three-dimensional point cloud coordinates as the starting point, constructing a sampling point set, and adding the starting point to it;
S222: computing the distance from each coordinate outside the sampling point set to the starting point, and adding the coordinate with the largest distance to the sampling point set;
S223: computing the nearest distance from each coordinate outside the sampling point set to any coordinate inside the sampling point set, and adding the coordinate with the largest nearest distance to the sampling point set;
S224: repeating steps S222 to S223 until the number of sampling points in the sampling point set reaches D/2;
S23: the multi-level encoding layer uses a multi-layer perceptron to raise the dimension of each group of sampling results until each sampled coordinate reaches 128 dimensions, and uses a max-pooling layer to extract the highest-dimensional feature map of each group of sampling results, each feature map being 128-dimensional; the feature map feature_{D/2} under D/2-resolution sampling is:
feature_{D/2} = Pooling(MLE(sample_{D/2}(1)) || ... || MLE(sample_{D/2}(D/2)))
where:
|| denotes concatenation of the multi-layer perceptron processing results;
MLE(sample_{D/2}(D/2)) denotes the dimension-raised result of the (D/2)-th sample under D/2-resolution sampling, MLE denoting the multi-layer perceptron;
Pooling(·) denotes the pooling operation: for the sampling results at resolution D/2, the concatenated dimension-raised result MLE(sample_{D/2}(1)) || ... || MLE(sample_{D/2}(D/2)), of size 128×3 after dimension raising, is pooled to 128 dimensions, yielding the feature map feature_{D/2};
S24: the multi-level encoding layer applies several independent convolution layers to the feature maps of the three groups of sampling results, assigns different weights to the three convolution results, and concatenates the weighted results to obtain the multi-level encoding features; in an embodiment of the present invention, the weight of the feature map under D/2-resolution sampling is 0.6, and the weights of the feature maps under D/4-resolution and D/8-resolution sampling are each 0.2;
S25: the decoding point cloud enhancement layer contains several multi-layer perceptrons with different parameters, which reconstruct and up-sample the multi-level encoding features to obtain the missing point cloud coordinates at the different resolutions; the missing point cloud coordinates at the different resolutions are combined with the original point cloud coordinates to form the completed and enhanced set of three-dimensional point cloud coordinates, the color value of each missing point cloud coordinate being the mean of the color values of its neighboring three-dimensional point cloud coordinates, yielding the enhanced three-dimensional point cloud data.
S3: determining the optimization objective function of the three-dimensional point cloud data enhancement model and performing optimization training to obtain the optimal three-dimensional point cloud data enhancement model.
Determining the optimization objective function of the three-dimensional point cloud data enhancement model and performing optimization training in step S3 includes:
determining the optimization objective function of the three-dimensional point cloud data enhancement model, and optimizing and training the model based on the objective function to obtain the trained model parameters, where the parameters to be trained are the multi-layer perceptron and convolution layer parameters θ_1 of the multi-level encoding layer and the multi-layer perceptron parameters θ_2 of the decoding point cloud enhancement layer, θ_1 and θ_2 being parameter vectors;
obtaining a set of three-dimensional point cloud data with missing point cloud data, and obtaining the set of true three-dimensional point cloud coordinates of the missing point cloud data; the determined optimization objective function is expressed in terms of the following quantities:
Ω denotes the set of true three-dimensional point cloud coordinates of the missing point cloud data, num(Ω) denotes the number of missing point cloud data, and num denotes the number of missing point clouds produced by the three-dimensional point cloud data enhancement model;
loc_c denotes the true three-dimensional point cloud coordinate of the c-th missing point cloud datum;
the missing point cloud coordinate output by the three-dimensional point cloud data enhancement model that is nearest to loc_c also enters the objective function;
dis(·) denotes the distance calculation formula;
ε denotes a proportional weight, which is set to 0.2.
Based on the optimization objective function, the three-dimensional point cloud data enhancement model is trained with the Adam optimizer to obtain the trained model parameters, and the optimal three-dimensional point cloud data enhancement model is constructed based on the trained model parameters.
S4: enhancing the coordinate-transformed three-dimensional point cloud data with the optimal three-dimensional point cloud data enhancement model, computing the similarity between the enhanced three-dimensional point cloud data and the three-dimensional data of the architectural design target, and computing the transformation parameters.
Enhancing the coordinate-transformed three-dimensional point cloud data with the optimal three-dimensional point cloud data enhancement model in step S4 includes:
inputting the coordinate-transformed three-dimensional point cloud data obtained in step S1 into the optimal three-dimensional point cloud data enhancement model, which outputs the enhanced three-dimensional point cloud data; the enhanced three-dimensional point cloud data set is:
{r_m | m ∈ [1, M]}
where:
r_m denotes the m-th item of three-dimensional point cloud data output by the optimal point cloud enhancement model, each item comprising three-dimensional point cloud coordinates and three-dimensional point cloud color values, and M denotes the number of three-dimensional point cloud data items output by the optimal point cloud enhancement model.
Computing the similarity between the enhanced three-dimensional point cloud data and the three-dimensional data of the architectural design target and computing the transformation parameters in step S4 includes:
computing the similarity between the enhanced three-dimensional point cloud data and the three-dimensional data of the architectural design target, and computing the transformation parameters, where the similarity calculation flow is:
S41: converting the three-dimensional point cloud coordinates in the three-dimensional point cloud data set into a number of Delaunay triangulations using the Delaunay triangulation method, and converting the three-dimensional data of the architectural design target into a number of Delaunay triangulations in the same way;
S42: extracting the three-dimensional point cloud coordinates from the three-dimensional point cloud data, where the three-dimensional point cloud coordinate in r_m is L_m;
S43: taking any three-dimensional point cloud coordinate L_m as a vertex, obtaining the groups of points directly connected to L_m; then taking L_m as the pole and the direction of the ray from L_m to an arbitrary directly connected point as the polar axis, computing the polar angles of the rays formed by the pole L_m and the different directly connected points, and obtaining the set of polar angles corresponding to the pole L_m;
S44: selecting the three-dimensional coordinate point L′_m in the three-dimensional data of the architectural design target that is closest to the three-dimensional point cloud coordinate L_m; taking L′_m as the pole and the direction of the ray from L′_m to an arbitrary directly connected point as the polar axis, computing the polar angles of the rays formed by the pole L′_m and the different directly connected points, and obtaining the set of polar angles corresponding to the pole L′_m;
S45: computing the cosine similarity between the polar angle set corresponding to the pole L_m and the polar angle set corresponding to the pole L′_m;
S46: repeating steps S43 to S45 to obtain the cosine similarities between all three-dimensional point cloud coordinates in the three-dimensional point cloud data and the corresponding three-dimensional coordinates of the architectural design target, and taking the smallest cosine similarity as the similarity calculation result.
The coordinate matching set of three-dimensional point cloud coordinates and design-target three-dimensional coordinates is constructed:
{(L_m, L′_m) | m ∈ [1, M]}
and, following the calculation flow of the rotation matrix and translation matrix in step S1, the rotation matrix and translation matrix between the three-dimensional point cloud coordinates and the design-target three-dimensional coordinates are computed, the calculation results being taken as the transformation parameters.
S5: if the similarity calculation result is smaller than the specified threshold, correcting the three-dimensional point cloud data of the current building state based on the transformation parameters, and performing the corresponding correction processing on the current building according to the correction result.
Correcting the three-dimensional point cloud data of the current building state based on the transformation parameters when the similarity calculation result is smaller than the specified threshold in step S5 includes:
if the similarity calculation result is smaller than the specified threshold, correcting the three-dimensional point cloud data of the current building state based on the transformation parameters, and performing the corresponding correction processing on the current building according to the correction result.
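To tie Embodiment 1 together, the sketch below strings steps S1 to S5 into a single routine. Everything in it is a placeholder: the helper names (acquire_rgbd, backproject, estimate_rigid_transform, calibration_pairs, enhance, direction_similarity, issue_correction), the matched-pair handling and the threshold value 0.9 are hypothetical and are only meant to show the order of operations described above.

```python
SIMILARITY_THRESHOLD = 0.9   # the "specified threshold" of S5; the value is an assumption


def correction_pipeline(design_target_points, camera_params, helpers):
    """Hedged end-to-end sketch of S1-S5 using caller-supplied helper callables."""
    # S1: acquire an RGB-D frame, back-project it, and move it into the world frame.
    depth, rgb = helpers["acquire_rgbd"]()
    cloud = helpers["backproject"](depth, rgb, **camera_params)            # (N, 6) array
    R_cw, H_cw = helpers["estimate_rigid_transform"](*helpers["calibration_pairs"]())
    cloud[:, :3] = cloud[:, :3] @ R_cw.T + H_cw
    # S2-S4: complete the cloud with the trained enhancement model, then compare it
    # with the design target (assumed here to be supplied as matched coordinates).
    enhanced = helpers["enhance"](cloud)
    similarity = helpers["direction_similarity"](enhanced[:, :3], design_target_points)
    R, H = helpers["estimate_rigid_transform"](enhanced[:, :3], design_target_points)
    # S5: correct only when the built state deviates noticeably from the design.
    if similarity < SIMILARITY_THRESHOLD:
        corrected = enhanced[:, :3] @ R.T + H
        helpers["issue_correction"](corrected)
        return corrected
    return enhanced[:, :3]
```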
Example 2:
FIG. 2 is a schematic structural diagram of an electronic device for implementing the digital point cloud correction processing method for the whole construction process provided by an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11, a communication interface 13 and a bus, and may further include a computer program, such as a program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, and the readable storage medium includes a flash memory, a portable hard disk, a multimedia card, a card-type memory (for example, an SD or DX memory), a magnetic memory, a magnetic disk, an optical disc, and the like. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, for example, a hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, for example, a plug-in portable hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed on the electronic device 1 and various kinds of data, such as the code of the program 12, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 10 may be composed of integrated circuits, for example, a single packaged integrated circuit, or a plurality of packaged integrated circuits with the same or different functions, including combinations of one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, various control chips, and the like. The processor 10 is the control unit of the electronic device; it connects the components of the entire electronic device through various interfaces and lines, and executes the various functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (for example, the program 12 for implementing the digital point cloud correction processing for the whole construction process) and by invoking the data stored in the memory 11.
The communication interface 13 may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices and to implement connection and communication between internal components of the electronic device.
The bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. The bus is configured to implement connection and communication between the memory 11, the at least one processor 10, and the like.
FIG. 2 shows only an electronic device with some of its components; those skilled in the art will understand that the structure shown in FIG. 2 does not constitute a limitation on the electronic device 1, which may include fewer or more components than illustrated, a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management and power consumption management are implemented through the power management device. The power supply may further include any components such as one or more DC or AC power supplies, a recharging device, a power failure detection circuit, a power converter or inverter, and a power status indicator. The electronic device 1 may also include various sensors, a Bluetooth module, a Wi-Fi module, and the like, which are not described in detail here.
Optionally, the electronic device 1 may further include a user interface, which may be a display or an input unit (such as a keyboard); optionally, the user interface may also be a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an organic light-emitting diode (OLED) touch display, or the like. The display may also be appropriately referred to as a display screen or a display unit, and is used to display the information processed in the electronic device 1 and to present a visualized user interface.
It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
The program 12 stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions which, when run on the processor 10, can implement:
collecting the three-dimensional point cloud data of the current state of the target building, and performing coordinate transformation on the collected three-dimensional point cloud data to obtain the three-dimensional point cloud data in the world coordinate system;
constructing a three-dimensional point cloud data enhancement model;
determining the optimization objective function of the three-dimensional point cloud data enhancement model and performing optimization training to obtain the optimal three-dimensional point cloud data enhancement model;
enhancing the coordinate-transformed three-dimensional point cloud data with the optimal three-dimensional point cloud data enhancement model, performing similarity calculation on the enhanced three-dimensional point cloud data combined with the three-dimensional data of the architectural design target, and calculating the transformation parameters;
if the similarity calculation result is less than the specified threshold, correcting the three-dimensional point cloud data of the current building state based on the transformation parameters, and performing the corresponding correction processing on the current building according to the correction result.
Specifically, for the implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiments corresponding to FIG. 1 and FIG. 2, and details are not repeated here.
It should be noted that the serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments. Moreover, the terms "comprise", "include" or any other variants thereof herein are intended to cover a non-exclusive inclusion, so that a process, apparatus, article or method that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, apparatus, article or method that includes the element.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made by using the contents of the description and the accompanying drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.