CN115564655A - Video super-resolution reconstruction method, system and medium based on deep learning - Google Patents

Video super-resolution reconstruction method, system and medium based on deep learning

Info

Publication number
CN115564655A
Authority
CN
China
Prior art keywords
module, video, super-resolution, branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211392882.1A
Other languages
Chinese (zh)
Other versions
CN115564655B (en)
Inventor
季栋浩
潘金山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202211392882.1A
Priority claimed from CN202211392882.1A
Publication of CN115564655A
Application granted
Publication of CN115564655B
Legal status: Active (Current)
Anticipated expiration


Abstract


The present invention relates to a deep-learning-based video super-resolution reconstruction method, system and medium, and in particular to the technical field of video processing. The method includes: inputting each frame of a video to be processed into a super-resolution model to obtain the super-resolution image corresponding to each frame; and obtaining the super-resolution video corresponding to the video to be processed from the super-resolution images of its frames. The super-resolution model is obtained by training a BasicVSR model that takes the video to be trained as input and the corresponding super-resolution video as output, with minimization of a frequency loss function as the training objective; both the forward branch and the backward branch of the BasicVSR model include a GDFN module. The invention improves the quality of high-resolution video images.


Description

Method, system and medium for video super-resolution reconstruction based on deep learning

Technical Field

The present invention relates to the technical field of video processing, and in particular to a deep-learning-based video super-resolution reconstruction method, system and medium.

Background Art

Resolution is a set of performance parameters used to evaluate how much detail an image contains, including temporal resolution, spatial resolution and color-scale resolution; it reflects the ability of an imaging system to capture the detail of a scene. Compared with low-resolution images, high-resolution images usually offer higher pixel density, richer texture detail and greater reliability. In practice, however, constrained by factors such as the acquisition equipment and environment, the network transmission medium and bandwidth, and the video degradation model itself, ideal high-resolution images with sharp edges and no block blur usually cannot be obtained directly. The most direct way to raise image resolution is to improve the optical hardware of the acquisition system, but because the manufacturing process is hard to improve substantially and manufacturing costs are very high, solving the low-resolution problem physically is usually too expensive.

Video super-resolution reconstruction refers to restoring a given low-resolution video to the corresponding high-resolution video through a specific algorithm. Compared with single-image super-resolution, video super-resolution can exploit the information of adjacent frames to achieve a better result. Traditional super-resolution algorithms such as interpolation blur the edges of the high-resolution video frames and perform poorly.

Summary of the Invention

The purpose of the present invention is to provide a deep-learning-based video super-resolution reconstruction method, system and medium that can improve the quality of high-resolution video images.

To achieve the above object, the present invention provides the following scheme:

A deep-learning-based video super-resolution reconstruction method, comprising:

constructing a super-resolution model, where the super-resolution model is obtained by training a BasicVSR model that takes the images corresponding to the frames of a video to be trained as input and the super-resolution images corresponding to those frames as output, with minimization of a frequency loss function as the training objective, and where both the forward branch and the backward branch of the BasicVSR model include a GDFN module;

obtaining a video to be processed;

inputting each frame image of the video to be processed into the super-resolution model to obtain the super-resolution image corresponding to each frame image of the video to be processed;

obtaining the super-resolution video corresponding to the video to be processed from the super-resolution images corresponding to its frame images.

Optionally, the BasicVSR model includes a forward branch, a backward branch and an upsampling branch; the output of the forward branch and the output of the backward branch are both connected to the input of the upsampling branch.

Optionally, the forward branch includes N forward-propagation modules, the backward branch includes N backward-propagation modules, and the upsampling branch includes N upsampling modules, where N is a positive integer greater than 1;

the first input of the i-th forward-propagation module is connected to the first output of the (i-1)-th forward-propagation module; the second input of the i-th forward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th forward-propagation module is connected to the first input of the (i+1)-th forward-propagation module; the second output of the i-th forward-propagation module is connected to the first input of the i-th upsampling module;

the first input of the i-th backward-propagation module is connected to the first output of the (i+1)-th backward-propagation module; the second input of the i-th backward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th backward-propagation module is connected to the first input of the (i-1)-th backward-propagation module; the second output of the i-th backward-propagation module is connected to the second input of the i-th upsampling module.

Optionally, the forward-propagation module and the backward-propagation module each include an optical-flow estimation module, a spatial warping module and a deep residual block; the optical-flow estimation module, the spatial warping module, the GDFN module and the deep residual block are connected in sequence.

Optionally, the frequency loss function has the form shown in Figure BDA0003931932320000021, where L_f denotes the frequency loss function, Î denotes the image generated by inputting the video to be trained into the BasicVSR model, I denotes the super-resolution image corresponding to the video to be trained, ∈ denotes a first constant, α denotes a second constant, and FFT(Î) and FFT(I) denote the fast Fourier transforms of Î and I, respectively.

A deep-learning-based video super-resolution reconstruction system, comprising:

a construction module, configured to construct a super-resolution model, where the super-resolution model is obtained by training a BasicVSR model that takes the images corresponding to the frames of a video to be trained as input and the super-resolution images corresponding to those frames as output, with minimization of a frequency loss function as the training objective, and where both the forward branch and the backward branch of the BasicVSR model include a GDFN module;

an acquisition module, configured to acquire a video to be processed;

a super-resolution image determination module, configured to input each frame image of the video to be processed into the super-resolution model to obtain the super-resolution image corresponding to each frame image of the video to be processed;

a super-resolution video determination module, configured to obtain the super-resolution video corresponding to the video to be processed from the super-resolution images corresponding to its frame images.

Optionally, the BasicVSR model includes a forward branch, a backward branch and an upsampling branch; the output of the forward branch and the output of the backward branch are both connected to the input of the upsampling branch.

Optionally, the forward branch includes N forward-propagation modules, the backward branch includes N backward-propagation modules, and the upsampling branch includes N upsampling modules, where N is a positive integer greater than 1;

the first input of the i-th forward-propagation module is connected to the first output of the (i-1)-th forward-propagation module; the second input of the i-th forward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th forward-propagation module is connected to the first input of the (i+1)-th forward-propagation module; the second output of the i-th forward-propagation module is connected to the first input of the i-th upsampling module;

the first input of the i-th backward-propagation module is connected to the first output of the (i+1)-th backward-propagation module; the second input of the i-th backward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th backward-propagation module is connected to the first input of the (i-1)-th backward-propagation module; the second output of the i-th backward-propagation module is connected to the second input of the i-th upsampling module.

Optionally, the forward-propagation module and the backward-propagation module each include an optical-flow estimation module, a spatial warping module and a deep residual block; the optical-flow estimation module, the spatial warping module, the GDFN module and the deep residual block are connected in sequence.

Optionally, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the deep-learning-based video super-resolution reconstruction method described above.

According to the specific embodiments provided by the present invention, the present invention discloses the following technical effect: the present invention uses the GDFN module to achieve better feature fusion and can thereby improve the quality of high-resolution video images.

Brief Description of the Drawings

In order to illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

Fig. 1 is a flow chart of a deep-learning-based video super-resolution reconstruction method provided by an embodiment of the present invention;

Fig. 2 is a structural diagram of the BasicVSR model;

Fig. 3 is a structural diagram of the forward-propagation module;

Fig. 4 is a structural diagram of the backward-propagation module;

Fig. 5 is a structural diagram of the GDFN module;

Fig. 6 is a structural diagram of the video super-resolution system.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

In order to make the above objects, features and advantages of the present invention more comprehensible, the present invention is further described in detail below with reference to the drawings and specific embodiments.

With the rise of deep learning, deep-learning-based video super-resolution technology has developed rapidly. The present invention proposes a deep-learning-based video super-resolution reconstruction method. The super-resolution model of the present invention uses a recurrent network architecture to propagate information between video frames, uses a GDFN module to improve feature fusion, and adds a frequency loss function to optimize the network, so that the super-resolution model combines good performance with a low parameter count and high computational efficiency.

An embodiment of the present invention provides a deep-learning-based video super-resolution reconstruction method, as shown in Fig. 1, including:

Step 101: construct a super-resolution model, where the super-resolution model is obtained by training a BasicVSR model that takes the images corresponding to the frames of a video to be trained as input and the super-resolution images corresponding to those frames as output, with minimization of a frequency loss function as the training objective; both the forward branch and the backward branch of the BasicVSR model include a GDFN module.

Step 102: obtain a video to be processed.

Step 103: input each frame image of the video to be processed into the super-resolution model to obtain the super-resolution image corresponding to each frame image of the video to be processed.

Step 104: obtain the super-resolution video corresponding to the video to be processed from the super-resolution images corresponding to its frame images.

In practical applications, the BasicVSR model includes a forward branch, a backward branch and an upsampling branch; the output of the forward branch and the output of the backward branch are both connected to the input of the upsampling branch.

In practical applications, as shown in Fig. 2, the forward branch includes N forward-propagation modules, the backward branch includes N backward-propagation modules, and the upsampling branch includes N upsampling modules, where N is a positive integer greater than 1.

The first input of the i-th forward-propagation module is connected to the first output of the (i-1)-th forward-propagation module and receives the forward-propagation feature of the (i-1)-th frame image output by that module. The second input of the i-th forward-propagation module is used to input the i-th frame image xi and the (i-1)-th frame image xi-1 of the video to be processed. The first output of the i-th forward-propagation module is connected to the first input of the (i+1)-th forward-propagation module and outputs the forward-propagation feature of the i-th frame image. The second output of the i-th forward-propagation module is connected to the first input of the i-th upsampling module.

The first input of the i-th backward-propagation module is connected to the first output of the (i+1)-th backward-propagation module and receives the backward-propagation feature of the (i+1)-th frame image output by that module. The second input of the i-th backward-propagation module is used to input the i-th frame image xi and the (i-1)-th frame image xi-1 of the video to be processed. The first output of the i-th backward-propagation module is connected to the first input of the (i-1)-th backward-propagation module and outputs the backward-propagation feature of the i-th frame image. The second output of the i-th backward-propagation module is connected to the second input of the i-th upsampling module. The output of the i-th upsampling module is the super-resolution image hri corresponding to the i-th frame image.
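The wiring just described can be summarized in pseudo-PyTorch. The sketch below is illustrative only: the class and argument names (BidirectionalVSR, fwd_cell, bwd_cell, upsampler) are not taken from the patent, and the propagation cells and the upsampling module are treated as black boxes supplied by the caller.

```python
import torch
import torch.nn as nn

class BidirectionalVSR(nn.Module):
    """Illustrative wiring of the forward, backward and upsampling branches."""
    def __init__(self, channels, fwd_cell, bwd_cell, upsampler):
        super().__init__()
        self.channels = channels
        self.fwd_cell = fwd_cell      # forward-propagation module
        self.bwd_cell = bwd_cell      # backward-propagation module
        self.upsampler = upsampler    # upsampling / reconstruction module

    def forward(self, frames):
        # frames: list of T low-resolution frames, each of shape (N, 3, H, W)
        n, _, h, w = frames[0].shape
        T = len(frames)

        # backward branch: propagate features from the last frame towards the first
        bwd_feats = [None] * T
        feat = frames[0].new_zeros(n, self.channels, h, w)
        for i in range(T - 1, -1, -1):
            nbr = frames[min(i + 1, T - 1)]            # neighbouring frame fed together with frame i
            feat = self.bwd_cell(frames[i], nbr, feat)
            bwd_feats[i] = feat

        # forward branch: propagate features from the first frame towards the last,
        # and let the i-th upsampling module fuse both propagated features
        outputs = []
        feat = frames[0].new_zeros(n, self.channels, h, w)
        for i in range(T):
            nbr = frames[max(i - 1, 0)]
            feat = self.fwd_cell(frames[i], nbr, feat)
            outputs.append(self.upsampler(torch.cat([feat, bwd_feats[i]], dim=1)))
        return outputs
```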

In practical applications, as shown in Fig. 3 and Fig. 4, the forward-propagation module and the backward-propagation module each include an optical-flow estimation module, a spatial warping module and a deep residual block; the optical-flow estimation module, the spatial warping module, the GDFN module and the deep residual block are connected in sequence.

Taking the i-th forward-propagation module as an example, the forward-propagation workflow is as follows: first, the optical-flow estimation module computes the forward optical flow between xi-1 and xi; this flow is used to spatially warp the forward-propagation feature of the (i-1)-th frame image so that it is aligned with the i-th frame image; the GDFN module then fuses the aligned feature with xi to obtain a fused feature; finally, the fused feature is fed into the deep residual block to obtain the forward-propagation feature of the i-th frame image.

Taking the i-th backward-propagation module as an example, the backward-propagation workflow is as follows: first, the optical-flow estimation module computes the backward optical flow between xi-1 and xi; this flow is used to spatially warp the backward-propagation feature of the (i+1)-th frame image so that it is aligned with the i-th frame image; the GDFN module then fuses the aligned feature with xi to obtain a fused feature; finally, the fused feature is fed into the deep residual block to obtain the backward-propagation feature of the i-th frame image.
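A sketch of one propagation module along the lines of the workflow just described: estimate optical flow between the current frame and its neighbour, warp the propagated feature of the neighbouring frame, fuse the warped feature with the current frame through the GDFN block, and refine with residual blocks. The names (PropagationCell, flow_warp) and channel sizes are assumptions, not identifiers from the patent; the flow network, GDFN block and residual blocks are passed in as placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Warp a feature map with a dense optical-flow field using bilinear sampling.
    flow has shape (N, 2, H, W) with (x, y) displacements in pixels."""
    n, _, h, w = feat.shape
    grid_y, grid_x = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype), indexing="ij")
    grid = torch.stack((grid_x, grid_y), dim=0).unsqueeze(0)   # (1, 2, H, W)
    coords = grid + flow                                       # shifted sampling positions
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0        # normalize to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)    # (N, H, W, 2)
    return F.grid_sample(feat, sample_grid, align_corners=True)

class PropagationCell(nn.Module):
    """One forward- or backward-propagation module: flow estimation, spatial
    warping, GDFN fusion, then deep residual blocks."""
    def __init__(self, channels, flow_net, gdfn, res_blocks):
        super().__init__()
        self.flow_net = flow_net          # optical-flow estimation module
        self.gdfn = gdfn                  # GDFN feature-fusion block
        self.res_blocks = res_blocks      # stack of residual blocks
        self.embed = nn.Conv2d(3 + channels, channels, 3, padding=1)

    def forward(self, frame_cur, frame_nbr, feat_prop):
        flow = self.flow_net(frame_cur, frame_nbr)       # flow between the two frames
        aligned = flow_warp(feat_prop, flow)              # align the propagated feature to the current frame
        fused = self.gdfn(self.embed(torch.cat([frame_cur, aligned], dim=1)))
        return self.res_blocks(fused)                     # propagated feature for the current frame
```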

The forward-propagation feature and the backward-propagation feature of each frame are then fused to obtain the final feature map, which is upsampled with the pixel-shuffle technique and passed through the reconstruction network to obtain the final high-resolution video.
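The fusion and pixel-shuffle upsampling step can be sketched as follows; a 4x scale factor implemented with two 2x PixelShuffle stages is assumed, and the module name and layer widths are illustrative rather than taken from the patent.

```python
import torch.nn as nn

class UpsampleReconstruct(nn.Module):
    """Fuse the concatenated forward/backward features, upsample with
    pixel-shuffle, and reconstruct the high-resolution frame (assumed 4x)."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)               # fuse forward + backward features
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, 3, 3, padding=1))                      # reconstruction layer

    def forward(self, feats):
        return self.up(self.fuse(feats))
```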

In practical applications, the specific structure of the GDFN module is shown in Fig. 5. The feature-fusion module GDFN uses depth-wise convolution to encode information from spatially adjacent pixel positions and can learn to fuse features effectively. After a normalization operation (Norm), the input feature is split into two parts along the channel dimension, each passing through a 1x1 convolution and a 3x3 convolution; one branch is activated by the GELU activation function and multiplied element-wise with the other branch; the channels are then restored by a 1x1 convolution and the result is added to the original input to obtain the final output.
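A minimal sketch of a gated block following the description above (normalize, split along channels, a 1x1 plus 3x3 depth-wise convolution per branch, GELU gating, element-wise product, 1x1 projection and a residual connection). The exact normalization layer and channel widths are assumptions.

```python
import torch.nn as nn

class GDFN(nn.Module):
    """Gated feature-fusion block as described in the text (sketch)."""
    def __init__(self, channels):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        self.norm = nn.GroupNorm(1, channels)          # stand-in for the "Norm" step
        # each half passes through a 1x1 convolution and a 3x3 depth-wise convolution
        self.branch_a = nn.Sequential(
            nn.Conv2d(half, half, 1),
            nn.Conv2d(half, half, 3, padding=1, groups=half))
        self.branch_b = nn.Sequential(
            nn.Conv2d(half, half, 1),
            nn.Conv2d(half, half, 3, padding=1, groups=half))
        self.project = nn.Conv2d(half, channels, 1)    # 1x1 convolution restoring the channel count
        self.act = nn.GELU()

    def forward(self, x):
        a, b = self.norm(x).chunk(2, dim=1)                      # split along the channel dimension
        gated = self.act(self.branch_a(a)) * self.branch_b(b)    # gate one branch, multiply element-wise
        return x + self.project(gated)                           # add back the original input
```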

In practical applications, the present invention uses a frequency loss function to help recover more image detail. With some commonly used loss functions, the model tends to smooth the video frames in order to reduce the loss value, and the lost details mostly correspond to the high-frequency part of the signal; the frequency loss function therefore reduces the difference in frequency space and yields clearer, sharper video. The frequency loss function has the form shown in Figure BDA0003931932320000071, where L_f denotes the frequency loss function, Î denotes the image generated by inputting the video to be trained into the BasicVSR model, I denotes the super-resolution image corresponding to the video to be trained, ∈ denotes a first constant, α denotes a second constant, and FFT(Î) and FFT(I) denote the fast Fourier transforms of Î and I, respectively.
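The exact expression of the frequency loss appears in the source only as a figure. A generic frequency-domain loss that matches the symbols listed (FFTs of the generated and ground-truth images, a small constant ∈ and an exponent α) might look like the sketch below; it should be read as an assumption, not as the patent's precise formula.

```python
import torch

def frequency_loss(generated, target, eps=1e-8, alpha=0.5):
    """Compare two images (or batches of frames) in the frequency domain.
    eps and alpha play the role of the first and second constants."""
    gen_f = torch.fft.fft2(generated)              # FFT of the network output
    tgt_f = torch.fft.fft2(target)                 # FFT of the ground-truth HR image
    diff = torch.abs(gen_f - tgt_f)                # magnitude of the frequency-domain difference
    return torch.mean((diff ** 2 + eps) ** alpha)  # stabilized, exponent-weighted penalty
```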

For the above method, an embodiment of the present invention further provides a deep-learning-based video super-resolution reconstruction system, including:

a construction module, configured to construct a super-resolution model, where the super-resolution model is obtained by training a BasicVSR model that takes the images corresponding to the frames of a video to be trained as input and the super-resolution images corresponding to those frames as output, with minimization of a frequency loss function as the training objective; both the forward branch and the backward branch of the BasicVSR model include a GDFN module;

an acquisition module, configured to acquire a video to be processed;

a super-resolution image determination module, configured to input each frame image of the video to be processed into the super-resolution model to obtain the super-resolution image corresponding to each frame image of the video to be processed;

a super-resolution video determination module, configured to obtain the super-resolution video corresponding to the video to be processed from the super-resolution images corresponding to its frame images.

In practical applications, the BasicVSR model includes a forward branch, a backward branch and an upsampling branch; the output of the forward branch and the output of the backward branch are both connected to the input of the upsampling branch.

In practical applications, the forward branch includes N forward-propagation modules, the backward branch includes N backward-propagation modules, and the upsampling branch includes N upsampling modules, where N is a positive integer greater than 1.

The first input of the i-th forward-propagation module is connected to the first output of the (i-1)-th forward-propagation module; the second input of the i-th forward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th forward-propagation module is connected to the first input of the (i+1)-th forward-propagation module; the second output of the i-th forward-propagation module is connected to the first input of the i-th upsampling module.

The first input of the i-th backward-propagation module is connected to the first output of the (i+1)-th backward-propagation module; the second input of the i-th backward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th backward-propagation module is connected to the first input of the (i-1)-th backward-propagation module; the second output of the i-th backward-propagation module is connected to the second input of the i-th upsampling module.

In practical applications, the forward-propagation module and the backward-propagation module each include an optical-flow estimation module, a spatial warping module and a deep residual block; the optical-flow estimation module, the spatial warping module, the GDFN module and the deep residual block are connected in sequence.

An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the deep-learning-based video super-resolution reconstruction method described in the above embodiments.

An embodiment of the present invention further provides a video super-resolution system.

To better demonstrate the super-resolution performance of the model, the present invention converts the model with the Open Neural Network Exchange (ONNX) format, so that the video super-resolution task can be run in an environment where the model's dependency libraries are not installed. The system interface is built with PyQt. As shown in Fig. 6, "select" is the video selection button: clicking it selects the video file to be processed; "Original Video" is the original video player and "Modified Video" is the super-resolution video player; "model a", "model b" and "model c" are the super-resolution algorithm selection buttons, and pressing one of them processes the video with the corresponding super-resolution algorithm and plays the result.
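A minimal sketch of the ONNX conversion and deployment step described above; the model variable, input shape and file name are placeholders rather than values given in the patent.

```python
import torch
import onnxruntime as ort

def export_and_run(vsr_model: torch.nn.Module) -> None:
    """Export a trained super-resolution model to ONNX and run it with onnxruntime."""
    vsr_model.eval()
    dummy = torch.randn(1, 7, 3, 180, 320)   # (batch, frames, channels, height, width) - placeholder shape
    torch.onnx.export(
        vsr_model, dummy, "vsr.onnx",
        input_names=["lr_frames"], output_names=["hr_frames"],
        opset_version=16)

    # the exported model can now run without the training-time dependencies
    session = ort.InferenceSession("vsr.onnx")
    hr_frames = session.run(None, {"lr_frames": dummy.numpy()})[0]
    print(hr_frames.shape)
```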

The present invention is an improvement on the BasicVSR model. Compared with the existing BasicVSR, the model of the present invention uses a GDFN module to achieve better feature fusion and uses a frequency loss function to reduce the loss of high-frequency components in the super-resolution result.

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.

Specific examples are used herein to explain the principles and implementations of the present invention. The description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementations and the scope of application in accordance with the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A deep-learning-based video super-resolution reconstruction method, characterized by comprising:

constructing a super-resolution model, where the super-resolution model is obtained by training a BasicVSR model that takes the images corresponding to the frames of a video to be trained as input and the super-resolution images corresponding to those frames as output, with minimization of a frequency loss function as the training objective, and where both the forward branch and the backward branch of the BasicVSR model include a GDFN module;

obtaining a video to be processed;

inputting each frame image of the video to be processed into the super-resolution model to obtain the super-resolution image corresponding to each frame image of the video to be processed;

obtaining the super-resolution video corresponding to the video to be processed from the super-resolution images corresponding to its frame images.

2. The deep-learning-based video super-resolution reconstruction method according to claim 1, characterized in that the BasicVSR model includes a forward branch, a backward branch and an upsampling branch; the output of the forward branch and the output of the backward branch are both connected to the input of the upsampling branch.

3. The deep-learning-based video super-resolution reconstruction method according to claim 2, characterized in that the forward branch includes N forward-propagation modules, the backward branch includes N backward-propagation modules, and the upsampling branch includes N upsampling modules, where N is a positive integer greater than 1;

the first input of the i-th forward-propagation module is connected to the first output of the (i-1)-th forward-propagation module; the second input of the i-th forward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th forward-propagation module is connected to the first input of the (i+1)-th forward-propagation module; the second output of the i-th forward-propagation module is connected to the first input of the i-th upsampling module;

the first input of the i-th backward-propagation module is connected to the first output of the (i+1)-th backward-propagation module; the second input of the i-th backward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th backward-propagation module is connected to the first input of the (i-1)-th backward-propagation module; the second output of the i-th backward-propagation module is connected to the second input of the i-th upsampling module.

4. The deep-learning-based video super-resolution reconstruction method according to claim 3, characterized in that the forward-propagation module and the backward-propagation module each include an optical-flow estimation module, a spatial warping module and a deep residual block; the optical-flow estimation module, the spatial warping module, the GDFN module and the deep residual block are connected in sequence.

5. The deep-learning-based video super-resolution reconstruction method according to claim 1, characterized in that the frequency loss function has the form shown in Figure FDA0003931932310000021, where L_f denotes the frequency loss function, Î denotes the image generated by inputting the video to be trained into the BasicVSR model, I denotes the super-resolution image corresponding to the video to be trained, ∈ denotes a first constant, α denotes a second constant, and FFT(Î) and FFT(I) denote the fast Fourier transforms of Î and I, respectively.

6. A deep-learning-based video super-resolution reconstruction system, characterized by comprising:

a construction module, configured to construct a super-resolution model, where the super-resolution model is obtained by training a BasicVSR model that takes the images corresponding to the frames of a video to be trained as input and the super-resolution images corresponding to those frames as output, with minimization of a frequency loss function as the training objective, and where both the forward branch and the backward branch of the BasicVSR model include a GDFN module;

an acquisition module, configured to acquire a video to be processed;

a super-resolution image determination module, configured to input each frame image of the video to be processed into the super-resolution model to obtain the super-resolution image corresponding to each frame image of the video to be processed;

a super-resolution video determination module, configured to obtain the super-resolution video corresponding to the video to be processed from the super-resolution images corresponding to its frame images.

7. The deep-learning-based video super-resolution reconstruction system according to claim 6, characterized in that the BasicVSR model includes a forward branch, a backward branch and an upsampling branch; the output of the forward branch and the output of the backward branch are both connected to the input of the upsampling branch.

8. The deep-learning-based video super-resolution reconstruction system according to claim 7, characterized in that the forward branch includes N forward-propagation modules, the backward branch includes N backward-propagation modules, and the upsampling branch includes N upsampling modules, where N is a positive integer greater than 1;

the first input of the i-th forward-propagation module is connected to the first output of the (i-1)-th forward-propagation module; the second input of the i-th forward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th forward-propagation module is connected to the first input of the (i+1)-th forward-propagation module; the second output of the i-th forward-propagation module is connected to the first input of the i-th upsampling module;

the first input of the i-th backward-propagation module is connected to the first output of the (i+1)-th backward-propagation module; the second input of the i-th backward-propagation module is used to input the i-th frame image and the (i-1)-th frame image of the video to be processed; the first output of the i-th backward-propagation module is connected to the first input of the (i-1)-th backward-propagation module; the second output of the i-th backward-propagation module is connected to the second input of the i-th upsampling module.

9. The deep-learning-based video super-resolution reconstruction system according to claim 8, characterized in that the forward-propagation module and the backward-propagation module each include an optical-flow estimation module, a spatial warping module and a deep residual block; the optical-flow estimation module, the spatial warping module, the GDFN module and the deep residual block are connected in sequence.

10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the deep-learning-based video super-resolution reconstruction method according to any one of claims 1 to 5.
CN202211392882.1A — filed 2022-11-08 — Video super-resolution reconstruction method, system and medium based on deep learning — Active — granted as CN115564655B (en)

Priority Applications (1)

Application Number: CN202211392882.1A — Priority/Filing Date: 2022-11-08 — Title: Video super-resolution reconstruction method, system and medium based on deep learning

Publications (2)

Publication Number — Publication Date
CN115564655A — 2023-01-03
CN115564655B — 2025-10-10




Patent Citations (5)

* Cited by examiner, † Cited by third party

Publication number — Priority date — Publication date — Assignee — Title
CN111583112A* — 2020-04-29 — 2020-08-25 — 华南理工大学 — Method, system, device and storage medium for video super-resolution
CN112767250A* — 2021-01-19 — 2021-05-07 — 南京理工大学 — Video blind super-resolution reconstruction method and system based on self-supervision learning
CN112991183A* — 2021-04-09 — 2021-06-18 — 华南理工大学 — Video super-resolution method based on multi-frame attention mechanism progressive fusion
CN114332561A* — 2021-12-24 — 2022-04-12 — 腾讯科技（深圳）有限公司 — Training method, device, equipment and medium for super-resolution model
CN114418845A* — 2021-12-28 — 2022-04-29 — 北京欧珀通信有限公司 — Image resolution improving method and device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

Kelvin C.K. Chan et al., "BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond", arXiv, 7 April 2021, pages 1-5. *

Cited By (1)

* Cited by examiner, † Cited by third party

Publication number — Priority date — Publication date — Assignee — Title
CN118037549A* — 2024-04-11 — 2024-05-14 — 华南理工大学 — Video enhancement method and system based on video content understanding

Similar Documents

Publication — Title
WO2024178979A1 — Single-image defogging method based on detail restoration
CN113727141B — Interpolation device and method for video frames
CN114881921B — De-occlusion imaging method and device based on event and video fusion
CN110889809B9 — Image processing method and device, electronic equipment and storage medium
CN111985281B — Image generation model generation method and device and image generation method and device
CN111062883B — Image processing method and device, computer readable medium and electronic device
CN111860363A — A video image processing method and device, electronic device, and storage medium
CN115065796B — Method and device for generating video intermediate frame
US20240362747A1 — Methods for generating image super-resolution data set, image super-resolution model and training method
CN113538287A — Video enhancement network training method, video enhancement method and related device
CN113992920A — Video compressed sensing reconstruction method based on deep expansion network
WO2023010750A1 — Image color mapping method and apparatus, electronic device, and storage medium
Xin et al. — Video face super-resolution with motion-adaptive feedback cell
CN113012073A — Training method and device for video quality improvement model
WO2023160426A1 — Video frame interpolation method and apparatus, training method and apparatus, and electronic device
CN114245117A — Multi-sampling rate multiplexing network reconstruction method, device, equipment and storage medium
CN113658128A — Image blurring degree determining method, data set constructing method and deblurring method
CN115564655A — Video super-resolution reconstruction method, system and medium based on deep learning
CN115564655B — Video super-resolution reconstruction method, system and medium based on deep learning
CN114240750B — Video resolution enhancement method and device, storage medium and electronic device
CN112598578B — Super-resolution reconstruction system and method for nuclear magnetic resonance image
CN109862299A — Resolution processing method and device
CN115861048A — Image super-resolution method, device, equipment and storage medium
CN115103118B — High dynamic range image generation method, device, equipment and readable storage medium
Dar et al. — Modular ADMM-based strategies for optimized compression, restoration, and distributed representations of visual data

Legal Events

Code — Title
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
