

Technical Field
The present invention relates to a point cloud quality computation method based on a point structured information network, and belongs to the technical field of no-reference quality assessment of 3D point clouds.
Background Art
A point cloud is defined as a set of 3D points, each represented by 3D coordinates and specific attributes (e.g., color). With the development of 3D information capture technology, point clouds have been widely used in applications such as virtual reality, immersive telepresence, mobile mapping, and 3D printing. A typical use of point clouds is representing holographic images of humans in virtual reality and immersive telepresence. However, to render visual information realistically, a model may consist of millions or even hundreds of millions of points. During transmission, lossy compression is usually adopted; compared with lossless compression, it saves considerable transmission resources and increases the transmission rate, but at the cost of compression-induced perceptual distortion. In addition, interference during acquisition and transmission may introduce down-sampling distortion, Gaussian filtering distortion, and the like, degrading perceived quality. To better manage and control the subjective quality of point clouds, it is therefore essential to develop high-performance quality assessment of colored point clouds that is consistent with human visual perception.
To quantify this visual perception mechanism, research is commonly conducted from two perspectives: subjective and objective quality assessment. Subjective quality assessment relies on human scoring to provide ground-truth perceptual scores for different degrees of distortion; although accurate, it is expensive in both time and labor. Objective quality assessment uses models to evaluate the visual quality of point clouds. Existing objective models can be roughly divided into three types: full-reference, reduced-reference, and no-reference point cloud quality assessment. However, in most scenarios it is difficult to obtain the original point cloud, and the amount of data required for storage and transmission is too large, which makes full-reference methods impractical; no-reference point cloud quality assessment has therefore gradually become the focus of research. Existing no-reference metrics mainly fall into two categories: handcrafted-feature-based metrics and deep-learning-based metrics. Among handcrafted-feature-based methods, one approach projects the 3D point cloud into geometry and color feature domains, extracts quality-aware features using 3D natural scene statistics and entropy, and regresses the features to visual quality scores with a support vector machine (3D-NSS). However, handcrafted features depend heavily on domain knowledge, so the performance of such feature representations is often limited. Among deep-learning-based methods, one rotates a camera around the point cloud along three specific orbits to obtain three video sequences and uses ResNet3D as a feature extractor to learn the correlation between the captured videos and the corresponding subjective quality scores (VS-ResNET); another extracts hierarchical features from the 3D point cloud, considering both geometry and texture, represents them as sparse tensors, and feeds the tensors into a CNN to predict the quality score (ResSCNN); another treats natural images as the source domain and point clouds as the target domain, applying unsupervised adversarial adaptation to infer point cloud quality (IT-PCQA); yet another predicts the final score through a multi-view joint feature extraction and fusion module, a distortion type identification module, and a quality vector prediction module (PQA-NET). However, because the unstructured nature of point clouds hinders the direct application of convolution operations, the above deep-learning-based methods project 3D point clouds into images or videos; they consider only color information and ignore geometric information, which is highly relevant to human perception.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art and to provide a point cloud quality computation method based on a point structured information network.
To achieve the above object, the present invention provides a point cloud quality computation method based on a point structured information network, comprising:
jointly inputting the pre-acquired position vector features, distance features, luminance features, and luminance difference features of a point cloud patch into a point structured information network model to extract the structural information features of the patch;
inputting the structural information features into an iteratively trained distortion-aware stream network to obtain distortion classification features;
inputting the structural information features into a basic quality-aware stream network to obtain the basic quality features of the patch.
Preferably, the basic quality features of the patch are fused with the distortion classification features and fed into two third fully connected layers to obtain a predicted quality score;
the predicted quality scores of the multiple patches belonging to the same overall point cloud are averaged to obtain the final score of that overall point cloud.
Preferably, jointly inputting the pre-acquired position vector features, distance features, luminance features, and luminance difference features into the point structured information network model to extract the structural information features of the patch is implemented by the following steps:
the point structured information network model comprises a first convolutional layer, a second convolutional layer, a third convolutional layer, and a fourth convolutional layer, connected in sequence;
the position vector features are processed by the first, second, third, and fourth convolutional layers to obtain structural feature weights;
the position vector features, distance features, luminance features, and luminance difference features are weighted by the structural feature weights to obtain the structural information features of the patch.
Preferably, inputting the structural information features into the iteratively trained distortion-aware stream network to obtain the distortion classification features is implemented by the following steps:
before iterative training, the distortion-aware stream network comprises a fifth convolutional layer, a sixth convolutional layer, a first max pooling layer, a first global average pooling layer, a first fully connected layer, and a linear regression layer, connected in sequence;
after iterative training, the entire distortion-aware stream network is frozen and the linear regression layer is removed;
the structural information features are input into the iteratively trained distortion-aware stream network with its linear regression layer removed to obtain the distortion classification features.
Preferably, the basic quality-aware stream network comprises multiple seventh convolutional layers, a second max pooling layer, a second global average pooling layer, and a second fully connected layer, connected in sequence.
Preferably, the position vector features, distance features, luminance features, and luminance difference features are pre-acquired by the following steps: sampling the original point cloud according to the farthest point sampling (FPS) algorithm to obtain sampling points;
using the K-nearest-neighbor (KNN) algorithm, selecting the 1024 points closest to each sampling point to form a point cloud patch;
computing the position vector feature and distance feature of each patch;
computing the luminance feature and luminance difference feature of each patch.
Preferably, the position vector feature and distance feature of each patch are computed by the following steps:
computing the position vector feature {Δxj, Δyj, Δzj} of each point in the patch:
{Δxj, Δyj, Δzj} = {xj − x0, yj − y0, zj − z0},
where pj = {xj, yj, zj} denotes the 3D coordinates of each point, j = 1, 2, …, K, and p0 = {x0, y0, z0} denotes the 3D coordinates of the centroid point;
computing the distance feature of each point in the patch:
dj = √(Δxj² + Δyj² + Δzj²),
where dj denotes the distance from p0 to pj.
Preferably, the luminance feature and luminance difference feature of the patch are computed by the following steps:
computing the luminance feature lj of each point in the patch:
lj = rj × 0.299 + gj × 0.587 + bj × 0.114,
where cj = {rj, gj, bj} denotes the color of the point with 3D coordinates pj, and rj, gj, and bj denote its red, green, and blue RGB values, respectively;
computing the luminance difference feature Δlj of each point in the patch:
Δlj = l0 − lj, where l0 denotes the luminance value of the centroid point.
Beneficial effects achieved by the present invention:
The present invention uses the KNN algorithm to obtain the K nearest neighbors of each sampling point to form a point cloud patch, and computes four kinds of features from these neighbors: position vector, distance, luminance, and luminance difference features;
the present invention jointly inputs the position vector, distance, luminance, and luminance difference features into the point structured information network model, maps the position vectors of neighboring points to weights, and then multiplies the weights with the four features by matrix multiplication to obtain the structural information of the point cloud;
the present invention inputs the extracted structural information features into the distortion-aware stream network, which is pre-trained to yield the distortion classification features;
the present invention inputs the structural information features into the basic quality-aware stream network to obtain the basic quality features of the patch;
the present invention fuses the basic quality features with the distortion classification features and feeds them into two third fully connected layers to obtain the predicted quality score;
the present invention combines multi-dimensional features with the point structured information network model, introducing position vector, distance, luminance, and luminance difference features into a two-stream quality assessment network;
the present invention considers not only the influence of luminance, distance, relative position, and other information on point cloud quality, but also introduces luminance difference features and structural information features for a comprehensive assessment of subjective point cloud quality; in terms of model construction, a two-stream branch network is built in which a pre-trained distortion classification model assists the quality assessment model, regressing point cloud quality scores more accurately, which is of great significance for improving the accuracy of point cloud quality assessment.
Brief Description of the Drawings
Fig. 1 is an architecture diagram of the point structured information network model of the present invention;
Fig. 2 is a framework diagram of the present invention.
Detailed Description of the Embodiments
The following embodiments are only intended to illustrate the technical solutions of the present invention more clearly, and are not intended to limit the protection scope of the present invention.
Embodiment 1
The present invention provides a point cloud quality computation method based on a point structured information network, comprising:
in the model application stage, jointly inputting the pre-acquired position vector features, distance features, luminance features, and luminance difference features of a point cloud patch into the point structured information network model to extract the structural information features of the patch;
inputting the structural information features into the iteratively trained distortion-aware stream network to obtain distortion classification features;
inputting the structural information features into the basic quality-aware stream network to obtain the basic quality features of the patch.
Further, in the model application stage of this embodiment, the basic quality features of the patch are fused with the distortion classification features and fed into two third fully connected layers to obtain a predicted quality score;
the predicted quality scores of the multiple patches belonging to the same overall point cloud are averaged to obtain the final score of that overall point cloud.
Further, in this embodiment, jointly inputting the pre-acquired position vector features, distance features, luminance features, and luminance difference features into the point structured information network model to extract the structural information features of the patch is implemented by the following steps: the point structured information network model comprises a first convolutional layer, a second convolutional layer, a third convolutional layer, and a fourth convolutional layer, connected in sequence;
the position vector features are processed by the first, second, third, and fourth convolutional layers to obtain structural feature weights;
the position vector features, distance features, luminance features, and luminance difference features are weighted by the structural feature weights to obtain the structural information features of the patch.
Further, in this embodiment, inputting the structural information features into the iteratively trained distortion-aware stream network to obtain the distortion classification features is implemented by the following steps:
before iterative training, the distortion-aware stream network comprises a fifth convolutional layer, a sixth convolutional layer, a first max pooling layer, a first global average pooling layer, a first fully connected layer, and a linear regression layer, connected in sequence;
after iterative training, the entire distortion-aware stream network is frozen and the linear regression layer is removed;
the structural information features are input into the iteratively trained distortion-aware stream network with its linear regression layer removed to obtain the distortion classification features.
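As an illustration, the freeze-and-truncate step above can be sketched in plain Python; the layer list and layer names below are hypothetical stand-ins, not the actual network:

```python
# Minimal sketch: after iterative training, freeze every layer of the
# distortion-aware stream and drop its linear regression head, leaving a
# fixed feature extractor. Layer names are illustrative placeholders.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True  # True while the network is still being trained

def freeze_and_truncate(layers):
    """Freeze all weights, then remove the trailing linear regression layer."""
    for layer in layers:
        layer.trainable = False
    return layers[:-1]  # the last fully connected layer now emits the features

dps = [Layer("conv5"), Layer("conv6"), Layer("maxpool1"),
       Layer("gap1"), Layer("fc1"), Layer("linear_regression")]
feature_extractor = freeze_and_truncate(dps)
```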
Further, in this embodiment, the basic quality-aware stream network comprises multiple seventh convolutional layers, a second max pooling layer, a second global average pooling layer, and a second fully connected layer, connected in sequence.
Further, in this embodiment, the position vector features, distance features, luminance features, and luminance difference features are pre-acquired by the following steps:
sampling the original point cloud according to the farthest point sampling (FPS) algorithm to obtain sampling points;
using the K-nearest-neighbor (KNN) algorithm, selecting the 1024 points closest to each sampling point to form a point cloud patch;
computing the position vector feature and distance feature of each patch;
computing the luminance feature and luminance difference feature of each patch.
Further, in this embodiment, the position vector feature and distance feature of each patch are computed by the following steps: computing the position vector feature {Δxj, Δyj, Δzj} of each point in the patch:
{Δxj, Δyj, Δzj} = {xj − x0, yj − y0, zj − z0},
where pj = {xj, yj, zj} denotes the 3D coordinates of each point, j = 1, 2, …, K, and p0 = {x0, y0, z0} denotes the 3D coordinates of the centroid point;
computing the distance feature of each point in the patch:
dj = √(Δxj² + Δyj² + Δzj²),
where dj denotes the distance from p0 to pj.
Further, in this embodiment, the luminance feature and luminance difference feature of the patch are computed by the following steps:
computing the luminance feature lj of each point in the patch:
lj = rj × 0.299 + gj × 0.587 + bj × 0.114,
where cj = {rj, gj, bj} denotes the color of the point with 3D coordinates pj, and rj, gj, and bj denote its red, green, and blue RGB values, respectively;
computing the luminance difference feature Δlj of each point in the patch:
Δlj = l0 − lj,
where l0 denotes the luminance value of the centroid point.
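As an illustration, the four per-point features above can be computed with a short numpy sketch; the function name and array layout are assumptions, and the standard BT.601 luminance coefficients (0.299, 0.587, 0.114) are used:

```python
import numpy as np

def patch_features(points, colors):
    """points: (K+1, 3) xyz rows, row 0 being the centroid p0;
    colors: (K+1, 3) RGB rows. Returns a (K, 6) matrix whose rows are
    {Δx, Δy, Δz, d, l, Δl} for each neighboring point."""
    delta = points[1:] - points[0]                   # position vector features
    dist = np.linalg.norm(delta, axis=1)             # distance features
    luma = colors @ np.array([0.299, 0.587, 0.114])  # luminance of all points
    dluma = luma[0] - luma[1:]                       # luminance differences
    return np.hstack([delta, dist[:, None], luma[1:, None], dluma[:, None]])
```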
Embodiment 2
The technical solution of the present invention comprises the following parts:
1) Local sampling to form point cloud patches
On the one hand, considering the large number of points in a point cloud sample (usually tens of thousands to millions of points), it is difficult for existing computing resources (such as memory and GPU memory) to analyze and process an entire point cloud at once; on the other hand, the number of samples in existing point cloud quality assessment databases is small and cannot meet the requirements of machine-learning-based methods for large-scale and diverse training samples. The present invention therefore borrows the idea of local sampling from image quality assessment, that is, letting local regions represent the whole. Specifically, the farthest point sampling method is used to select N points from each distorted point cloud, and the K nearest neighbors of each sampling point form a point cloud patch. In the model training stage, the quality score of the overall point cloud serves as the ground-truth score of its training patches; when the model is applied to quality assessment, the final quality score of the overall point cloud is the average of the scores of all local patches.
The local patches are sampled as follows: (1) first, sampling points are selected by farthest point sampling, iteratively choosing the point farthest from the set of already selected points so as to cover the entire point cloud as much as possible; (2) then, local patches are constructed by K-nearest-neighbor (KNN) search: the distances from each sampling point to its neighboring points are computed and sorted in ascending order, and the K points closest to the sampling point form a patch.
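The two-step sampling procedure above can be sketched with numpy as follows; this is an illustrative re-implementation (with K = 1024 and a small placeholder N), not the patented code:

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Iteratively pick the point farthest from the already-selected set."""
    chosen = [0]                                   # arbitrary starting point
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_samples - 1):
        chosen.append(int(np.argmax(dists)))       # farthest from chosen set
        dists = np.minimum(dists,
                           np.linalg.norm(points - points[chosen[-1]], axis=1))
    return np.array(chosen)

def knn_patch(points, center_idx, k):
    """Indices of the k points closest to the chosen sampling point."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
pts = rng.random((5000, 3))
seeds = farthest_point_sampling(pts, 16)   # N sampling points (N = 16 here)
patch = knn_patch(pts, seeds[0], 1024)     # one local patch of K = 1024 points
```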
2) Point structural information extraction (PSI) module
Previous image quality assessment methods have demonstrated that the human visual system is highly adapted to extracting structural information, such as gradients and contrast, from the viewing field. For point clouds, structural information refers to the variation within local 3D patches, including variations in color and geometry. However, the irregularity of point clouds makes it challenging to effectively extract structural information in 3D space. In the present invention, by fitting feature weights as a nonlinear function of the 3D relative coordinates, the point structural information extraction (PSI) module extracts geometric and color structural information from the point cloud simultaneously. The framework of the PSI module is shown in Fig. 2.
Feature preprocessing
As shown in Fig. 2, a sampled patch together with its neighboring points is input into the PSI module. To better extract the structural information of irregular points, the input patch must be preprocessed. Specifically, we denote the patch as P = {p0, p1, p2, …, pK}, where p0 = {x0, y0, z0} is the sampling point (or centroid point) and pj = {xj, yj, zj}, j = 1, 2, …, K, are the neighbors of p0. The vector cj = {rj, gj, bj} denotes the color of pj.
Since structural information refers to the variation of the color and geometric information of a local 3D patch, in this work it is computed as the geometric and color changes from the neighboring points to the centroid, namely position vectors, distances, and luminance differences.
The position vector feature of each point pj = {xj, yj, zj}, j = 1, 2, …, K, is computed as the vector from the centroid p0 to pj:
{Δxj, Δyj, Δzj} = {xj − x0, yj − y0, zj − z0}, (1)
The corresponding distance feature is computed as:
dj = √(Δxj² + Δyj² + Δzj²), (2)
where dj denotes the distance from p0 to pj.
Since the human visual system is sensitive to relative changes in luminance, for each pj = {xj, yj, zj}, j = 1, 2, …, K, the color is converted into a luminance value, yielding the luminance feature of each point:
lj = rj × 0.299 + gj × 0.587 + bj × 0.114, (3)
where lj denotes the luminance feature of pj. The luminance difference between pj and p0 is computed as:
Δlj = l0 − lj. (4)
Therefore, for each neighboring point pj = {xj, yj, zj}, j = 1, 2, …, K, we obtain a preprocessed input feature ij = {Δxj, Δyj, Δzj, dj, lj, Δlj}T; the input of the PSI module is thus I = {i1, i2, …, iK}T.
Weighting based on position vectors
Neighboring points have different relative positions with respect to the same centroid, so the local variations between them carry different weights in the computation of the structural information of a 3D patch. CNNs have shown a strong capability for learned representations of image features, where the convolution weights are treated as a discrete function of relative position. On a 3D point cloud, the convolution weights are instead treated as a nonlinear continuous function of the position vectors, i.e.,
W = f(Δx, Δy, Δz), (5)
where Δx = {Δx1, Δx2, …, ΔxK}T, Δy = {Δy1, Δy2, …, ΔyK}T, and Δz = {Δz1, Δz2, …, ΔzK}T. As shown in Fig. 2, the nonlinear function f(·) is implemented as four convolutional layers. The structural information feature of the 3D patch is then computed by matrix multiplication of the weights W with the input I, i.e.,
Fψ = W · I, (6)
where Fψ denotes the structural information feature of the 3D point cloud patch, also called the ψ feature.
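This weighting step can be sketched in numpy, emulating f(·) as four stacked per-point layers with random placeholder weights; the layer widths and the resulting feature dimensions are assumptions, since in the actual network these weights are learned:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(pos_vectors, widths=(16, 32, 32, 16)):
    """W = f(Δx, Δy, Δz): four stacked per-point (1x1-convolution-like)
    layers with ReLU map each 3-dim position vector to a weight vector."""
    h = pos_vectors                                   # (K, 3)
    for w in widths:
        h = np.maximum(h @ rng.standard_normal((h.shape[1], w)), 0.0)
    return h                                          # (K, 16) weights

K = 1024
pos = rng.standard_normal((K, 3))   # relative positions {Δx_j, Δy_j, Δz_j}
I = rng.standard_normal((K, 6))     # preprocessed inputs i_j
W = f(pos)                          # structural feature weights
F_psi = W.T @ I                     # matrix multiplication -> (16, 6) feature
```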
3) Point structured information network structure
As shown in Fig. 1, in the point structured information network, the structural information feature Fψ output by the PSI module is fed into two computation streams, DPS and EQPS.
First, the DPS is pre-trained on a point cloud distortion classification task. Specifically, after being reshaped, Fψ is fed into the Conv1 module, which has two convolutional layers and one max pooling layer.
To prevent overfitting, the output of the Conv1 module is then passed through a 2×2 global average pooling (GAP) layer and mapped to distortion-related features by two fully connected (FC) layers, denoted as fdps. Second, compared with distortion classification, quality regression requires more complex features, so the EQPS uses two more convolutional modules than the DPS, namely Conv2 and Conv3. Specifically, the Conv2 and Conv3 modules each consist of three convolutional layers and one max pooling layer, with 256 and 512 kernels per convolutional layer, respectively; more convolutional layers and kernels enlarge the receptive field and extract richer feature information.
Overall, the structural information feature from the PSI module passes in sequence through the Conv1, Conv2, and Conv3 modules, a 1×1 GAP layer, and two FC layers, yielding a feature denoted as feqps. Finally, the two features fdps and feqps are fused by dot product. Then, based on the fused feature, two FC layers regress the predicted quality score of the 3D patch.
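A minimal sketch of this fusion head, reading the dot-product fusion as an element-wise product of the two stream features followed by two FC layers; the feature dimension and the FC weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse_and_regress(f_dps, f_eqps, hidden=64):
    """Fuse the two stream features element-wise, then regress one score
    through two fully connected layers (random placeholder weights)."""
    fused = f_dps * f_eqps                   # element-wise fusion
    w1 = rng.standard_normal((fused.size, hidden))
    w2 = rng.standard_normal((hidden, 1))
    h = np.maximum(fused @ w1, 0.0)          # first FC layer + ReLU
    return float((h @ w2)[0])                # second FC layer -> patch score

score = fuse_and_regress(rng.standard_normal(128), rng.standard_normal(128))
```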
3)点结构化信息网络训练由于DPS是通过失真分类任务进行预训练的,因此本发明提出的点结构化信息网络分两个阶段进行训练。预测的失真类型和真实的失真类型之间的交叉熵损失被用来预训练DPS。注意,每个三维点云块的真实失真类型与它所属的整个点云的失真类型相同。然后,通过冻结DPS的权重来训练整个网络。3) Point structured information network training Since the DPS is pre-trained through the distortion classification task, the point structured information network proposed in the present invention is trained in two stages. The cross-entropy loss between predicted distortion types and ground-truth distortion types is used to pre-train DPS. Note that the true distortion type of each 3D point cloud patch is the same as that of the entire point cloud it belongs to. Then, the whole network is trained by freezing the weights of DPS.
The mean squared error (MSE) loss between the predicted and ground-truth quality scores is used to supervise the training of the whole network:

LOSS_MSE = (1/M) · Σ_{i=1}^{M} (Q_i − Q̂_i)²

where M is the mini-batch size, and Q_i and Q̂_i denote the predicted and ground-truth scores, respectively.
Likewise, the ground-truth quality score of each patch equals the quality score of the whole point cloud.
4) Applying the point structured information network to point cloud quality assessment
The predicted scores of the point cloud patches belonging to the same overall point cloud are averaged, and the resulting value serves as the final score of that point cloud. Specifically:

Q = (1/N) · Σ_{i=1}^{N} Q_i

where N is the number of patches sampled from the point cloud and Q_i is the predicted score of the i-th patch.
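The aggregation above amounts to a simple mean over patch scores; a minimal sketch with hypothetical patch scores:

```python
import numpy as np

# Hypothetical predicted scores of the patches sampled from one point cloud.
patch_scores = np.array([3.2, 3.5, 3.1, 3.4])

# The final score of the whole point cloud is the mean of its patch scores.
final_score = float(np.mean(patch_scores))
```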
The iterative training stage of the present invention comprises the following steps:
Step 1: During iterative training, the original point cloud is sampled with the farthest point sampling (FPS) algorithm to obtain sampling points, which form the training data set. Each historical sampling point comprises historical 3D coordinates and historical specific attributes (the red, green, and blue color channels).
For each sampling point, the K-nearest-neighbor (KNN) algorithm selects its 1024 nearest points to form a point cloud patch; proceed to Step 2. Step 2: Compute the historical position vector feature and the historical distance feature of each point cloud patch according to formulas (1) and (2); proceed to Step 3.
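The sampling pipeline of Step 1, FPS centers followed by 1024-point KNN patches, can be sketched as follows; the toy cloud, its size, and the number of centers are assumptions for illustration:

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedy FPS: iteratively pick the point farthest from those already chosen."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_samples - 1):
        idx = int(np.argmax(dist))
        chosen.append(idx)
        # Each point keeps its distance to the nearest chosen center.
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

def knn_patch(points, center_idx, k):
    """Return the indices of the k points nearest to the chosen center."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
cloud = rng.random((5000, 3))                # toy point cloud (XYZ only)
centers = farthest_point_sampling(cloud, 8)  # 8 sampling points (assumed count)
patch = knn_patch(cloud, centers[0], 1024)   # one 1024-point patch
```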
Step 3: Compute the historical luminance feature and the historical luminance difference feature of each point cloud patch according to formulas (3) and (4); proceed to Step 4.
Step 4: Jointly input the pre-acquired position vector, distance, luminance, and luminance difference features into the PSI Module of the point structured information network model to extract the structured information features of the historical point cloud patches; proceed to Step 5.
The PSI Module comprises a first, a second, a third, and a fourth convolutional layer, connected in sequence. The first three are convolutional layers with a kernel size of 3 and 8 output channels; the fourth is a convolutional layer with a kernel size of 3 and 16 output channels.
Specifically, the historical position vector feature (of dimension K×3) is first input into the point structured information network model and processed by the four convolutional layers to obtain the historical structured feature weights (of dimension K×16). The four convolutional layers are treated as a nonlinear function that maps the position vector feature to the feature weights.
The historical position vector, distance, luminance, and luminance difference features (of dimension K×6) are weighted by the historical structured feature weights, i.e., the two are matrix-multiplied to obtain the structured information features of the historical point cloud patch. The weights are given by W = f(Δx, Δy, Δz), where f is the nonlinear function formed by the four convolutional layers described above.
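A minimal numpy sketch of this feature weighting. The per-point MLP standing in for the four convolutional layers uses random (untrained) weights, and the per-point outer product is one plausible reading of the "matrix multiplication", chosen because 6×16 = 96 matches the K×96 feature size mentioned for Fig. 2; both should be read as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1024

# Per-point inputs: position offsets (K×3) and the stacked handcrafted
# features (K×6: position vector, distance, luminance, luminance difference).
deltas = rng.standard_normal((K, 3))
features = rng.standard_normal((K, 6))

# Stand-in for the four convolutional layers: a small per-point MLP
# W = f(Δx, Δy, Δz) mapping K×3 -> K×16 (weights here are random, hypothetical).
W1 = rng.standard_normal((3, 8)) * 0.1
W2 = rng.standard_normal((8, 8)) * 0.1
W3 = rng.standard_normal((8, 8)) * 0.1
W4 = rng.standard_normal((8, 16)) * 0.1
h = np.maximum(deltas @ W1, 0)
h = np.maximum(h @ W2, 0)
h = np.maximum(h @ W3, 0)
weights = h @ W4                      # K×16 structured feature weights

# Weight the features: a per-point outer product (6×16 = 96) reproduces the
# K×96 structured information feature described for Fig. 2 (an assumed
# interpretation of how the matrix multiplication is applied per point).
structured = np.einsum('kf,kw->kfw', features, weights).reshape(K, 96)
```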
Step 5: Input the historical point structured information features into the distortion-aware stream network and pre-train it with the cross-entropy loss to obtain the trained distortion-aware stream network.
The trained distortion-aware stream network is then frozen, and its linear regression layer is removed to obtain the distortion classification features; proceed to Step 6.
The distortion-aware stream network comprises a fifth convolutional layer, a sixth convolutional layer, a first max-pooling layer, a first global average pooling layer, a first fully connected layer, and a linear regression layer, connected in sequence. The fifth and sixth convolutional layers have a kernel size of 3 and 128 output channels; the first max-pooling layer has a stride of 2 and a sliding window of size 2.
The historical point structured information features are regressed into one of the 4 distortion types through the linear regression layer of the distortion-aware stream network. Step 6: Input the historical structured information features into the basic quality-aware stream network to obtain the basic quality features of the historical point cloud patches.
The basic quality-aware stream network comprises multiple seventh convolutional layers, a second max-pooling layer, a second global average pooling layer, and a second fully connected layer, connected in sequence. Step 7: Fuse the basic quality features with the distortion classification features and input the result into two third fully connected layers to obtain the predicted quality score.
Step 8: Iteratively train the point structured network by minimizing the sum of the mean squared errors between the predicted and ground-truth quality scores. The input of the point structured network is the position vector, distance, luminance, and luminance difference features of the K points of each point cloud patch over M batches; its output is the quality scores of the M batches.
The historical position vector, distance, luminance, and luminance difference features are input into the point structured information network model, the distortion-aware stream network, and the basic quality-aware stream network for processing, and all three are iteratively trained with the mean squared error, whose expression is:

LOSS_MSE = (1/M) · Σ_{i=1}^{M} (Q_i − Q̂_i)²

where M is the total number of historical sampling points in the training data set, and Q_i and Q̂_i are the predicted and ground-truth quality scores of the i-th sample.
The distortion-aware stream network is iteratively updated before training with the cross-entropy loss function, whose expression is:

LOSS = −(1/M) · Σ_{i=1}^{M} Σ_{c=1}^{Class} y_{i,c} · log(ŷ_{i,c})

where LOSS is the loss value, M is the number of all sampling points in the training data set, Class is the number of distortion classes, y_{i,c} is the ground-truth label of a sampling point, and ŷ_{i,c} is the predicted label output by the distortion-aware stream network.
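The two losses above can be sketched directly; the toy batch of 2 patches and 4 distortion classes is illustrative:

```python
import numpy as np

def mse_loss(pred, target):
    """LOSS_MSE = (1/M) * sum_i (Q_i - Q_hat_i)^2 over a batch of M scores."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean((pred - target) ** 2))

def cross_entropy_loss(true_onehot, pred_prob):
    """LOSS = -(1/M) * sum_i sum_c y_ic * log(y_hat_ic) over the distortion classes."""
    true_onehot = np.asarray(true_onehot, float)
    pred_prob = np.asarray(pred_prob, float)
    return float(-np.mean(np.sum(true_onehot * np.log(pred_prob + 1e-12), axis=1)))

# Toy example: 2 patches, 4 distortion classes (values are illustrative).
q_pred, q_true = [3.0, 4.0], [3.5, 4.5]
y_true = [[1, 0, 0, 0], [0, 1, 0, 0]]
y_prob = [[0.7, 0.1, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1]]
```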
In Fig. 2, the PSI Module is the point structured information network model. The reshape operation transforms the output features into a shape suitable for convolutional learning, converting the K×96 features into √K×√K×96 features. Pool-2 denotes a max-pooling layer with a stride of 2 and a sliding window of 2, used to reduce dimensionality, parameter count, and redundant information. GAP is the global average pooling layer, used for dimensionality reduction and to prevent overfitting. Flatten flattens the input tensor so that the fully connected layers can process it. FC denotes a fully connected layer, which integrates the previously extracted features. The · symbol denotes the dot product, which fuses the features of the two branches. Quality Score is the predicted quality score output by the network.
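The reshape described for Fig. 2 is a plain tensor reshape, assuming K is a perfect square (e.g. K = 1024 gives a 32×32 spatial grid):

```python
import numpy as np

K = 1024                                       # points per patch, a perfect square
features = np.zeros((K, 96))                   # PSI Module output (K×96)
side = int(np.sqrt(K))                         # √K = 32
image_like = features.reshape(side, side, 96)  # 32×32×96, ready for 2D convolutions
```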
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on it to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the technical principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211589728.3ACN115937150B (en) | 2022-12-12 | 2022-12-12 | Point cloud quality calculation method based on point structured information network |
| Publication Number | Publication Date |
|---|---|
| CN115937150Atrue CN115937150A (en) | 2023-04-07 |
| CN115937150B CN115937150B (en) | 2025-08-29 |