Technical Field
The invention relates to the technical field of image processing, and in particular to single-image super-resolution reconstruction using a fully convolutional network.
Technical Background
In fields such as video surveillance, satellite imagery, and medical imaging, the resolution of collected images is often too low for the images to be usable. Super-resolution reconstruction refers to the family of techniques that reconstruct a corresponding high-resolution image from a collected low-resolution image.

Because a low-resolution image retains only part of the information of its high-resolution counterpart and loses a great deal of high-frequency and contextual detail, reconstructing a high-resolution image from the limited information in a single low-resolution image has long been a difficult research problem.

In recent years, deep learning has made great progress in image processing and speech analysis. Convolutional neural networks, with their weight sharing and sparse connectivity, greatly reduce model complexity; the advent of residual networks has made it possible to build much deeper models; and the advent of fully convolutional networks allows a model to accept inputs of different sizes, making it possible for one network to process images of different dimensions.

At present, the mainstream deep-learning algorithms for single-image super-resolution mostly take the result of bicubic interpolation as input and enlarge the receptive field by continually deepening the network, so as to better exploit the contextual information in the input image. As the network grows deeper, however, the contribution of the features produced by the shallow convolutional layers to the reconstruction gradually weakens, and these shallow features are under-utilized in the model.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the above shortcomings of the prior art and to provide a single-image super-resolution reconstruction method based on a fully convolutional neural network that has a simple pipeline and a good reconstruction effect.

The technical solution adopted to solve the above problem consists of the following steps:
(1) Segment the collected images

Obtain no fewer than 200 color images and split them into a training dataset and a test set at a ratio of 3:1.
(2) Build the network training set and preprocess the test-set images

Expand the training dataset with data augmentation to build the network training set, and preprocess the test-set images.
In this step, the data augmentation used to expand the training dataset and build the network training set proceeds as follows (a code sketch follows the list):

1) Rotate every image in the training dataset clockwise by 90°, 180°, and 270° in turn, adding each rotated image to the training dataset.

2) Horizontally flip every image in the training dataset.

3) Convert every image in the training dataset from the red/green/blue (RGB) color space to the luminance/blue-chroma/red-chroma (YCbCr) space, extract the luminance channel, and crop pixel blocks ranging from 16×16 to 64×64 from it.

4) Downsample each pixel block by factors of 2, 3, and 4 using bicubic interpolation, then upsample each result by the same factor back to the original size. The upsampled result serves as the network input and the original pixel block as the network output, yielding the training set.
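A minimal sketch of this pipeline using Pillow and NumPy is given below. `augment` and `make_pairs` are illustrative helper names, the default 32×32 patch size follows Embodiment 1, and the non-overlapping patch grid is an assumption; the text does not specify the cropping stride.

```python
import numpy as np
from PIL import Image

def augment(img):
    """Yield the four clockwise rotations (0, 90, 180, 270 degrees) of an
    image together with the horizontal flip of each rotation."""
    for k in range(4):
        rotated = img.rotate(-90 * k, expand=True)  # negative angle = clockwise
        yield rotated
        yield rotated.transpose(Image.FLIP_LEFT_RIGHT)

def make_pairs(img, patch=32, scales=(2, 3, 4)):
    """Crop luminance patches and build (bicubic-degraded, original) pairs."""
    y = np.asarray(img.convert("YCbCr"))[:, :, 0]  # luminance channel only
    pairs = []
    for top in range(0, y.shape[0] - patch + 1, patch):
        for left in range(0, y.shape[1] - patch + 1, patch):
            block = np.ascontiguousarray(y[top:top + patch, left:left + patch])
            hr = Image.fromarray(block)
            for s in scales:
                lr = hr.resize((patch // s, patch // s), Image.BICUBIC)
                lr = lr.resize((patch, patch), Image.BICUBIC)  # back to original size
                pairs.append((np.asarray(lr), np.asarray(hr)))
    return pairs
```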
In this step, the test-set images are preprocessed as follows:

Convert all test-set images from the RGB color space to the YCbCr color space and extract the luminance channel as the test set.
(3) Build the fully convolutional neural network

The fully convolutional neural network comprises an original-image feature extraction module, a high-dimensional feature mapping module, and a residual extraction module. The output of the feature extraction module is connected to the input of the high-dimensional feature mapping module, and the output of the high-dimensional feature mapping module is connected to the input of the residual extraction module, forming the fully convolutional neural network.
(4) Train the fully convolutional neural network

Feed the training set obtained in step (2) into the fully convolutional neural network built in step (3), and train it with a dynamically adjusted learning rate to obtain the trained network.

In this step, training with a dynamically adjusted learning rate means: the mean squared error is used as the loss function, one pass over 10,000 samples counts as one epoch, the learning rate is multiplied by 0.1 every 10 epochs, and training runs for 100 epochs.
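A minimal PyTorch sketch of this schedule, assuming `loader` yields batches of (input, target) tensors totaling 10,000 samples per pass; the patent used the Caffe framework and does not name an optimizer, so Adam here is a placeholder.

```python
import torch

def train(model, loader, base_lr=1e-3, epochs=100):
    criterion = torch.nn.MSELoss()                     # mean squared error loss
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
    for epoch in range(epochs):                        # 100 epochs ("generations")
        for lr_img, hr_img in loader:                  # one pass = 10,000 samples
            optimizer.zero_grad()
            loss = criterion(model(lr_img), hr_img)
            loss.backward()
            optimizer.step()
        scheduler.step()                               # lr *= 0.1 every 10 epochs
    return model
```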
(5) Reconstruct super-resolution images for the test set

Feed the test set obtained in step (2) into the fully convolutional neural network trained in step (4) to obtain the network output, and reconstruct the super-resolution image of each test-set image from that output.

In this step, the reconstruction from the network output proceeds as follows: convert the color space of the test-set image from RGB to YCbCr, replace the luminance layer of the original test-set image with the corresponding network output, and convert the image back from YCbCr to RGB to obtain the single-image super-resolution result of the fully convolutional network.
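A sketch of this luminance splice, assuming the model consumes and produces luminance values scaled to [0, 1]; `reconstruct` is an illustrative name.

```python
import numpy as np
import torch
from PIL import Image

def reconstruct(model, rgb_img):
    """Replace the luminance layer of an RGB test image with the network
    output, then convert YCbCr back to RGB."""
    ycbcr = np.asarray(rgb_img.convert("YCbCr")).astype(np.float32)
    y = torch.from_numpy(ycbcr[:, :, 0] / 255.0)[None, None]  # 1 x 1 x H x W
    with torch.no_grad():
        sr_y = model(y).squeeze().numpy()
    ycbcr[:, :, 0] = np.clip(sr_y * 255.0, 0.0, 255.0)        # new luminance layer
    return Image.fromarray(ycbcr.astype(np.uint8), mode="YCbCr").convert("RGB")
```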
In step (3) of building the fully convolutional neural network, the original-image feature extraction module of the invention is built as follows: the output of the first concatenated linear rectification unit is connected to the input of the first squeeze-excitation unit, the output of the first squeeze-excitation unit is connected to the input of the second concatenated linear rectification unit, and the output of the second concatenated linear rectification unit is connected to the input of the second squeeze-excitation unit.
The high-dimensional feature mapping module of the invention is given by

$x_n = [x_{n-1}, C(x_{n-1})]$

where C denotes a capsule module, $x_{n-1}$ and $C(x_{n-1})$ are the input and output of the n-th capsule module, $x_0$ is the output of the original-image feature extraction module, $x_n$ is the output of the high-dimensional feature mapping module, and n is the number of capsule modules, a finite positive integer.
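A sketch of this mapping: each capsule module's output is concatenated with its input along the channel axis, so shallow features remain visible to every later stage.

```python
import torch

def dense_mapping(x0, capsules):
    """Apply x_n = [x_{n-1}, C(x_{n-1})] over a list of n capsule modules."""
    x = x0
    for capsule in capsules:
        x = torch.cat([x, capsule(x)], dim=1)  # concatenate on the channel axis
    return x
```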
The residual extraction module of the invention is built as follows (see the sketch after this list):

1) Extract the original-feature residual

The output of the high-dimensional feature mapping module is reduced by a convolutional layer with a 128×1×1 kernel to the same dimensionality as the output of the original-image feature extraction module, yielding the original-feature residual.

2) Extract the global residual

The original-feature residual is added to the output of the original-image feature extraction module, and a convolutional layer with a 1×3×3 kernel reduces the result to the same dimensionality as the input of the fully convolutional neural network, yielding the global residual.

3) Build the residual extraction module

The global residual is added to the input of the fully convolutional neural network, completing the residual extraction module.
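A sketch of this head, reading the kernel notation C×k×k as C output channels with a k×k spatial kernel (an assumption); `dense_channels` is the channel count produced by the dense concatenation.

```python
import torch.nn as nn

class ResidualExtraction(nn.Module):
    """1x1 reduction of the dense features, local skip to the feature-extraction
    output, 3x3 reduction to one channel, global skip to the network input."""
    def __init__(self, dense_channels, feat_channels=128):
        super().__init__()
        self.reduce = nn.Conv2d(dense_channels, feat_channels, kernel_size=1)
        self.to_image = nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1)

    def forward(self, dense_out, feat_out, net_input):
        local = self.reduce(dense_out) + feat_out  # original-feature residual
        return self.to_image(local) + net_input    # global residual plus input
```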
In step (3) of building the fully convolutional neural network, the capsule module of the invention is built as follows: the output of the first convolution unit is connected to the input of the second convolution unit, the output of the second convolution unit is connected to the input of the third convolution unit, and the output of the third convolution unit is connected to the input of the third squeeze-excitation unit.
In step (3) of building the fully convolutional neural network, the first convolution unit of the invention consists of a batch normalization layer, a linear rectification (ReLU) layer, and a convolutional layer; the output of the batch normalization layer is connected to the input of the ReLU layer, and the output of the ReLU layer is connected to the input of the convolutional layer. The second and third convolution units have the same structure as the first.
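A sketch of this pre-activation ordering (batch norm, then ReLU, then convolution):

```python
import torch.nn as nn

class ConvUnit(nn.Module):
    """Pre-activation convolution unit: BatchNorm -> ReLU -> Conv."""
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size,
                      padding=kernel_size // 2),  # preserve spatial size
        )

    def forward(self, x):
        return self.body(x)
```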
In step (3) of building the fully convolutional neural network, the convolution kernel in the first concatenated linear rectification unit of the invention has size 64×3×3 with an offset of 1; the second concatenated linear rectification unit is identical to the first.
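The "concatenated linear rectification unit" is read here as a CReLU block in the sense of the ICML 2016 work cited in Embodiment 1 (Shang et al.): a convolution followed by concatenating ReLU(x) and ReLU(-x), which doubles the 64 convolution channels to the 128 channels the squeeze-excitation units expect. This reading, and interpreting the "offset of 1" as padding of 1, are assumptions.

```python
import torch
import torch.nn as nn

class CReLUUnit(nn.Module):
    """Conv (64 filters, 3x3) followed by concatenated ReLU: [relu(x), relu(-x)]."""
    def __init__(self, in_channels=1, out_channels=64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv(x)
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=1)  # 2*out_channels
```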
In step (3) of building the fully convolutional neural network, the first squeeze-excitation unit of the invention consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a sigmoid activation layer, whose output dimensions are 128×1×1, 8×1×1, 128×1×1, and 128×1×1, respectively. The output of the global average pooling layer is connected to the input of the first fully connected layer, the output of the first fully connected layer to the input of the second fully connected layer, and the output of the second fully connected layer to the input of the sigmoid activation layer. The second squeeze-excitation unit is identical to the first.
In step (3) of building the fully convolutional neural network, the third squeeze-excitation unit of the invention has the same structure and layer dimensions as the first squeeze-excitation unit described above.
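A sketch of the squeeze-excitation unit; the text lists only the four layers, so the final channel-wise rescaling follows the standard squeeze-and-excitation design and is an assumption (the ReLU that the standard block places between the two fully connected layers is not mentioned in the text and is omitted here).

```python
import torch.nn as nn

class SEUnit(nn.Module):
    """Global average pooling -> FC (128 -> 8) -> FC (8 -> 128) -> sigmoid,
    then channel-wise rescaling of the input feature maps."""
    def __init__(self, channels=128, reduced=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # output: channels x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, reduced),     # output: 8 x 1 x 1
            nn.Linear(reduced, channels),     # output: 128 x 1 x 1
            nn.Sigmoid(),                     # gate values in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        gate = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gate                       # reweight each channel
```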
In step (3) of building the fully convolutional neural network, the convolution kernels of the first, second, and third convolution units of the capsule module of the invention have sizes 128×1×1, 128×3×3, and 128×3×3, respectively.
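Assembling the pieces above into a capsule module; the input channel count is a parameter here (an assumption needed because the dense concatenation grows the channel count at each stage, while the text fixes the first kernel at 128×1×1).

```python
import torch.nn as nn

class CapsuleModule(nn.Module):
    """Three pre-activation conv units (1x1, 3x3, 3x3; 128 output channels each)
    followed by a squeeze-excitation unit. Reuses ConvUnit and SEUnit from the
    sketches above."""
    def __init__(self, in_channels, channels=128):
        super().__init__()
        self.body = nn.Sequential(
            ConvUnit(in_channels, channels, kernel_size=1),  # 128x1x1
            ConvUnit(channels, channels, kernel_size=3),     # 128x3x3
            ConvUnit(channels, channels, kernel_size=3),     # 128x3x3
            SEUnit(channels),
        )

    def forward(self, x):
        return self.body(x)
```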
Compared with the prior art, the present invention has the following advantages:

Because the invention builds a fully convolutional neural network model composed of an original-image feature extraction module, a high-dimensional feature mapping module, and a residual extraction module, it improves the quality of the reconstructed image and enriches its detail. Given only a low-resolution image, the invention can reconstruct a high-quality super-resolution image with this model.
Brief Description of the Drawings

Figure 1 is a flowchart of Embodiment 1 of the present invention.

Figure 2 is a flowchart of the capsule module used in building the fully convolutional neural network of Figure 1.

Figure 3 shows a test-set image processed by bicubic interpolation.

Figure 4 shows a test-set image processed by the proposed single-image super-resolution reconstruction method based on a fully convolutional neural network.
Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the accompanying drawings and embodiments, but the invention is not limited to the embodiments described.
Embodiment 1

Taking 292 color images selected from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method of this embodiment, shown in Figure 1, consists of the following steps:
(1) Segment the collected images

Select 292 color images from the VOC2012 image dataset and split them into a training dataset and a test set at a ratio of 3:1, i.e., 219 training images and 73 test images.
(2) Build the network training set and preprocess the test-set images

Expand the training dataset with data augmentation to build the network training set, and preprocess the test-set images.

The data augmentation proceeds as follows:

1) Rotate all 219 training images clockwise by 90°, 180°, and 270° in turn, adding each rotated image to the training dataset.

2) Horizontally flip every image in the training dataset.

3) Convert every image in the training dataset from RGB to YCbCr, extract the luminance channel, and crop 32×32 pixel blocks from it.

4) Downsample each pixel block by factors of 2, 3, and 4 using bicubic interpolation, then upsample each result by the same factor back to the original size. The upsampled result serves as the network input and the original pixel block as the network output, yielding the training set.

The test-set images are preprocessed by converting them from RGB to YCbCr and extracting the luminance channel as the test set.
(3) Build the fully convolutional neural network

The fully convolutional neural network comprises an original-image feature extraction module, a high-dimensional feature mapping module, and a residual extraction module; the output of the feature extraction module is connected to the input of the high-dimensional feature mapping module, and the output of the high-dimensional feature mapping module is connected to the input of the residual extraction module.

The original-image feature extraction module is built as follows: the output of the first concatenated linear rectification unit is connected to the input of the first squeeze-excitation unit, the output of the first squeeze-excitation unit is connected to the input of the second concatenated linear rectification unit, and the output of the second concatenated linear rectification unit is connected to the input of the second squeeze-excitation unit. The concatenated linear rectification unit was disclosed at the ICML 2016 conference; its convolution kernel has size 64×3×3 with an offset of 1, and the second unit is identical to the first. The first squeeze-excitation unit consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a sigmoid activation layer, with output dimensions 128×1×1, 8×1×1, 128×1×1, and 128×1×1, respectively; the output of each layer is connected to the input of the next, and the second squeeze-excitation unit is identical to the first.
The high-dimensional feature mapping module is given by

$x_n = [x_{n-1}, C(x_{n-1})]$

where C denotes a capsule module, $x_{n-1}$ and $C(x_{n-1})$ are the input and output of the n-th capsule module, $x_0$ is the output of the original-image feature extraction module, $x_n$ is the output of the high-dimensional feature mapping module, and n, the number of capsule modules, is 25.

As shown in Figure 2, the capsule module of this embodiment is built as follows: the output of the first convolution unit is connected to the input of the second convolution unit, the output of the second to the input of the third, and the output of the third convolution unit to the input of the third squeeze-excitation unit. The first convolution unit consists of a batch normalization layer, a ReLU layer, and a convolutional layer connected in that order; the second and third convolution units share this structure. The convolution kernels of the first, second, and third convolution units have sizes 128×1×1, 128×3×3, and 128×3×3, respectively. The third squeeze-excitation unit of this embodiment has the same structure and layer dimensions as the first squeeze-excitation unit described above.
The residual extraction module is built as follows (a sketch of the full network assembled from all modules follows this list):

1) Extract the original-feature residual

The output of the high-dimensional feature mapping module is reduced by a convolutional layer with a 128×1×1 kernel to the same dimensionality as the output of the original-image feature extraction module, yielding the original-feature residual.

2) Extract the global residual

The original-feature residual is added to the output of the original-image feature extraction module, and a convolutional layer with a 1×3×3 kernel reduces the result to the same dimensionality as the input of the fully convolutional neural network, yielding the global residual.

3) Build the residual extraction module

The global residual is added to the input of the fully convolutional neural network, completing the residual extraction module.
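Putting the sketches above together for this embodiment (n = 25 capsule modules): the feature extraction yields 128 channels, each capsule adds 128 concatenated channels, and the residual-extraction head maps the resulting 128 + 25×128 dense channels back to a one-channel residual image. The assembly is an illustrative reading under the assumptions already noted, not the patented Caffe implementation.

```python
import torch
import torch.nn as nn

class FullSRNet(nn.Module):
    """Feature extraction (two CReLU + SE pairs), 25 densely concatenated
    capsule modules, and the residual-extraction head; reuses CReLUUnit,
    SEUnit, CapsuleModule, and ResidualExtraction from the sketches above."""
    def __init__(self, n_capsules=25, ch=128):
        super().__init__()
        self.extract = nn.Sequential(
            CReLUUnit(1, ch // 2), SEUnit(ch),   # 1 -> 128 channels
            CReLUUnit(ch, ch // 2), SEUnit(ch),  # 128 -> 128 channels
        )
        self.capsules = nn.ModuleList(
            CapsuleModule(ch * (i + 1), ch) for i in range(n_capsules)
        )
        self.head = ResidualExtraction(ch * (n_capsules + 1), ch)

    def forward(self, x):                        # x: B x 1 x H x W luminance
        feat = self.extract(x)
        dense = feat
        for capsule in self.capsules:
            dense = torch.cat([dense, capsule(dense)], dim=1)
        return self.head(dense, feat, x)
```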
(4) Train the fully convolutional neural network

Feed the training set obtained in step (2) into the fully convolutional neural network built in step (3), and train it with a dynamically adjusted learning rate to obtain the trained network.

Training with a dynamically adjusted learning rate means: the mean squared error is used as the loss function, one pass over 10,000 samples counts as one epoch, the learning rate is multiplied by 0.1 every 10 epochs, and training runs for 100 epochs.
(5) Reconstruct super-resolution images for the test set

Feed the test set obtained in step (2) into the fully convolutional neural network trained in step (4) to obtain the network output, and reconstruct the super-resolution image of each test-set image from that output.

The reconstruction from the network output proceeds as follows: convert the color space of the test-set image from RGB to YCbCr, replace the luminance layer of the original test-set image with the corresponding network output, and convert the image back from YCbCr to RGB to obtain the single-image super-resolution result of the fully convolutional network. One super-resolution image was selected and compared with the traditional bicubic-interpolation result; see Figures 3 and 4. As the figures show, the single super-resolution image produced by the fully convolutional network presents clearer texture details.
Embodiment 2

Taking 292 color images selected from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network consists of the following steps:
(1) Segment the collected images

This step is the same as in Embodiment 1.

(2) Build the network training set and preprocess the test-set images

Expand the training dataset with data augmentation to build the network training set, and preprocess the test-set images.

The data augmentation proceeds as follows:

1) Rotate all 219 training images clockwise by 90°, 180°, and 270° in turn, adding each rotated image to the training dataset.

2) Horizontally flip every image in the training dataset.

3) Convert every image in the training dataset from RGB to YCbCr, extract the luminance channel, and crop 16×16 pixel blocks from it.

4) Downsample each pixel block by factors of 2, 3, and 4 using bicubic interpolation, then upsample each result by the same factor back to the original size. The upsampled result serves as the network input and the original pixel block as the network output, yielding the training set.

The test-set images are preprocessed by converting them from RGB to YCbCr and extracting the luminance channel as the test set.

The other steps are the same as in Embodiment 1, yielding the single-image super-resolution result of the fully convolutional network.
Embodiment 3

Taking 292 color images selected from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network consists of the following steps:

(1) Segment the collected images

This step is the same as in Embodiment 1.

(2) Build the network training set and preprocess the test-set images

Expand the training dataset with data augmentation to build the network training set, and preprocess the test-set images.

The data augmentation proceeds as follows:

1) Rotate all 219 training images clockwise by 90°, 180°, and 270° in turn, adding each rotated image to the training dataset.

2) Horizontally flip every image in the training dataset.

3) Convert every image in the training dataset from RGB to YCbCr, extract the luminance channel, and crop 64×64 pixel blocks from it.

4) Downsample each pixel block by factors of 2, 3, and 4 using bicubic interpolation, then upsample each result by the same factor back to the original size. The upsampled result serves as the network input and the original pixel block as the network output, yielding the training set.

The test-set images are preprocessed by converting them from RGB to YCbCr and extracting the luminance channel as the test set.

The other steps are the same as in Embodiment 1, yielding the single-image super-resolution result of the fully convolutional network.
Embodiment 4

Taking 200 color images selected from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network consists of the following steps:

(1) Segment the collected images

Select 200 color images from the VOC2012 image dataset and split them into a training dataset and a test set at a ratio of 3:1, i.e., 150 training images and 50 test images.

The other steps are the same as in Embodiment 1, yielding the single-image super-resolution result of the fully convolutional network.
To verify the beneficial effects of the present invention, the inventors carried out a simulation experiment using the method of Embodiment 1, as follows:

1. Simulation conditions

Hardware: four Nvidia 1080Ti graphics cards and 128 GB of memory.

Software platform: the Caffe framework.

2. Simulation content and results

Experiments were carried out with the method of the present invention under the above simulation conditions. Figure 3 shows the bicubic-interpolation result of a randomly selected test-set image after 2× downsampling, and Figure 4 shows the network output obtained with the image of Figure 3 as the model input. As Figure 4 shows, the method of the present invention reconstructs well, with a high degree of detail restoration.

Both the overall quality and the details of the images reconstructed by the present invention are markedly improved.