CN107483960B - Motion compensation frame rate up-conversion method based on spatial prediction - Google Patents

Motion compensation frame rate up-conversion method based on spatial prediction

Info

Publication number
CN107483960B
Authority
CN
China
Prior art keywords
block
frame
blocks
motion
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710831783.1A
Other languages
Chinese (zh)
Other versions
CN107483960A (en)
Inventor
李然
吉秉彧
沈克琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinyang Normal University
Original Assignee
Xinyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinyang Normal University
Priority to CN201710831783.1A
Publication of CN107483960A
Application granted
Publication of CN107483960B
Legal status: Active (current)
Anticipated expiration


Abstract


The invention discloses a motion compensation frame rate up-conversion method based on spatial prediction, and relates to the technical field of video processing. The method includes: dividing a frame to be interpolated into blocks according to a template, classifying them into A-type blocks and B-type blocks; performing full-search motion estimation on the A-type blocks to obtain their motion vectors, where a successive elimination method is used to reduce the computational complexity; combining the motion vector information of the A-type blocks, computing the motion vectors of the B-type blocks according to spatial correlation and the minimum-error matching principle; and combining the motion vectors of the A-type and B-type blocks into the motion vector field of the frame to be interpolated, then, from the information of the reference frames f_t and f_{t+1}, interpolating the frame to be interpolated f_{t+0.5} with overlapped block motion compensation. The invention effectively exploits the respective advantages of full-search motion estimation and spatial correlation, which both guarantees the accuracy of motion estimation and effectively reduces computational complexity, saving computation cost.


Description

Motion compensation frame rate up-conversion method based on spatial prediction
Technical Field
The invention relates to the technical field of video processing, in particular to a motion compensation frame rate up-conversion method based on spatial prediction.
Background
With the development of multimedia technology and the upgrading of hardware devices, and in order to obtain a better visual experience, people place higher demands on the resolution and frame rate of videos. However, limited bandwidth forces a frame-skipping strategy to be adopted before video transmission so that fast transmission can be realized. At the receiving end, it is therefore necessary to recover the discarded frames from the information of the existing frames, restoring the quality of the original video as far as possible. Under such a demand, Frame Rate Up-Conversion (FRUC) has received attention from researchers in the field of video processing: as a post-processing technique, FRUC can up-convert video from a lower frame rate to a higher frame rate by inserting intermediate frames between two decoded frames.
There are various methods for frame rate up-conversion; depending on whether object motion is considered, they can be classified into non-motion-compensated and motion-compensated frame interpolation methods. The former is simple and mainly includes frame copying and frame averaging; these two modes ignore the motion of objects between frames and directly use the information of two adjacent frames to interpolate by copying or averaging. In contrast, a motion-compensated frame interpolation method must account for the motion of objects in the image: it calculates the motion vector of the object, locates the object in the frame to be interpolated according to its motion trajectory, and then compensates the pixel values. Comparing the two, when a video sequence contains little motion, frame copying or frame averaging is fast and effective; but in sequences with substantial object motion, non-motion-compensated interpolation causes judder and blur. In that case a motion-compensated frame interpolation method is needed, which effectively reduces motion blur by considering the motion between frames. In practice most video sequences contain substantial motion or a mixture of dynamic and static content, so the research and application of motion-compensated frame interpolation is very important.
The motion-compensated frame rate up-conversion method mainly comprises two steps: motion estimation and motion-compensated interpolation. Motion estimation computes the motion vector field between adjacent frames, and motion-compensated interpolation interpolates the intermediate frame according to that field. The accuracy of motion estimation therefore directly affects the quality of the restored video, and research on FRUC technology focuses on efficient motion estimation methods. To suppress blurring effects, classical methods commonly employ overlapped block compensation in the motion-compensated interpolation stage. In the existing literature, bidirectional motion estimation is mostly adopted, computing motion vector fields toward the two neighbouring frames with the frame to be interpolated as the starting point. For example, to address the "hole" and "overlapped block" problems produced by unidirectional motion estimation, the literature "Dual motion estimation for frame rate up-conversion" (Suk-Ju Kang, Sungjoo Yoo and Young Hwan Kim, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1909-1914, 2010) proposes directly computing the motion vector field of the frame to be interpolated. This ensures that no pixel of the frame to be interpolated is left without a motion vector or assigned several of them, improving the efficiency and reliability of motion estimation; however, the method does not exploit spatial correlation and thus greatly increases computational complexity. To further improve motion vector accuracy, some researchers have proposed hybrid schemes built on bidirectional motion estimation; for example, "Direction-Select Motion Estimation for Motion-Compensated Frame Rate Up-Conversion" (Dong-Gon Yoo, Suk-Ju Kang, and Young Hwan Kim, Journal of Display Technology, vol. 9, no. 10, pp. 840-850, 2013) first computes a unidirectional motion vector field of a reference frame and then estimates the bidirectional motion vector field of the frame to be interpolated, which increases estimation accuracy. However, this method uses only the information of neighbouring blocks to compute the motion vector of a block to be interpolated, so incorrect motion vector information propagates block by block, and this error propagation reduces the accuracy of motion estimation. To reduce computation, "A Multilevel Successive Elimination Algorithm for Block Matching Motion Estimation" (X. Q. Gao, C. J. Duanmu and C. R. Zou, IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 501-504, 2000) proposes a successive elimination method for the problem of excessive candidate matching blocks: by comparing sums of block luminance values against a threshold, most candidate blocks are discarded and computation time is effectively reduced. However, that work does not consider spatial correlation, so the amount of computation cannot be reduced further.
Existing frame rate up-conversion techniques must seek a balance between estimation accuracy and computational complexity: some methods improve motion vector accuracy with a full-search strategy but consume a large amount of running time; others use neighbouring-block information to compute the motion vector of the block to be interpolated, but an inaccurate initial motion vector then propagates errors and reduces estimation accuracy.
In summary, the motion-compensated frame rate up-conversion methods in the prior art cannot satisfy calculation accuracy and computational complexity at the same time.
Disclosure of Invention
The embodiment of the invention provides a motion compensation frame rate up-conversion method based on spatial prediction, which solves the problem that the prior art cannot balance calculation precision and computational complexity.
The embodiment of the invention provides a motion compensation frame rate up-conversion method based on spatial prediction, which comprises the following steps:
step a, dividing a frame to be interpolated into blocks, classified into A-type blocks and B-type blocks;
step b, performing full-search motion estimation based on a successive elimination method on the A-type blocks, determining the motion vectors of the A-type blocks;
step c, combining the motion vector information of the A-type blocks, computing the motion vectors of the B-type blocks according to spatial correlation and the minimum-error matching principle;
step d, combining the motion vectors of the A-type blocks and the B-type blocks into the motion vector field of the frame to be interpolated, and interpolating the frame to be interpolated f_{t+0.5} from the reference frames f_t and f_{t+1} by an overlapped block motion compensation method; where f_t, f_{t+1} and f_{t+0.5} are the luminance values of the t-th, (t+1)-th and (t+0.5)-th frames, respectively.
Preferably, the step a specifically includes:
setting frame f to be insertedt+0.5The spatial resolution of the frame to be interpolated is MxN, the block size is s, and each frame to be interpolated contains MxN/s2A standard block; the crossed blocks of the odd rows and the odd columns and the even rows and the even columns are divided into A-type blocks, the rest are B-type blocks, and M and N are divided by s.
Preferably, step b specifically includes:
taking the top-left corner coordinate (i, j) of a block as reference, compute the luminance sums of the block at (i, j) in the t-th and (t+1)-th frames as follows:
P_t(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_t(i+m, j+n)
P_{t+1}(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_{t+1}(i+m, j+n)
where f_t(i+m, j+n) and f_{t+1}(i+m, j+n) are the luminance values of the t-th and (t+1)-th frames at coordinate (i+m, j+n), respectively, and (m, n) are the pixel coordinates within the block;
let the top-left pixel coordinate of the current A-type block be p = (i, j), and traverse the candidate blocks in the search window; the offset of the n-th candidate block in reference frame f_{t+1} relative to (i, j) is v'_n = (x, y), and the offset of the n-th candidate block in reference frame f_t relative to (i, j) is -v'_n; with the search window radius set to r, x, y ∈ [-r, r]; let the initial offset be v'_0 = (r, r) and compute the initial difference D_0 of the current A-type block as:
D_0 = ||B_t(p - v'_0) - B_{t+1}(p + v'_0)||_1
where B_t(i-r, j-r) is the vector formed by arranging in rows all pixels of the block whose top-left corner coordinate in the t-th frame is (i-r, j-r), and ||·||_1 is the l_1 norm of a vector; update v'_n to the coordinate offset of the next candidate block in the search window; if the following inequality is satisfied
|P_t(i-x, j-y) - P_{t+1}(i+x, j+y)| < D_0
then compute the difference D_n of the n-th candidate block as:
D_n = ||B_t(p - v'_n) - B_{t+1}(p + v'_n)||_1
update the motion vector v_s of the current A-type block as:
v_s = v'_n = (x, y)
and update D_0 = min{D_n, D_0}; otherwise v_s remains unchanged; proceed in this way until all candidate blocks in the search window have been traversed.
Preferably, step c specifically includes:
after the motion vectors of the A-type blocks have been computed by full-search motion estimation, select the motion vectors v_{a1}, v_{a2}, v_{a3} and v_{a4} of the four A-type blocks adjacent to the B-type block as candidate vectors, forming the candidate vector set V_c:
V_c = {v_{a1}, v_{a2}, v_{a3}, v_{a4}}
Let the top-left pixel coordinate of the current B-type block be p; the motion vector v_p of the B-type block is then computed according to the minimum-error matching principle:
v_p = \arg\min_{v \in V_c} ||B_t(p - v) - B_{t+1}(p + v)||_1
where B_t(p - v) is the vector formed by arranging in rows all pixels of the block whose top-left corner coordinate in the t-th frame is p - v, ||·||_1 is the l_1 norm of a vector, and v is a candidate vector.
Preferably, step d specifically includes:
integrating the motion vectors of all A-type and B-type blocks into the motion vector field V_{t+0.5} of the frame to be interpolated f_{t+0.5}, and computing the value of the frame to be interpolated f_{t+0.5} at pixel position p = (i, j) with the following formula:
f_{t+0.5}(p) = \sum \frac{\omega}{2} \left[ f_t(p - v_{i,j}) + f_{t+1}(p + v_{i,j}) \right]
where the sum runs over the blocks covering p, and v_{i,j} is the motion vector of V_{t+0.5} at p; k denotes the type of region, taking 1 for a non-overlapping part, 2 for the overlapping part of two blocks and 3 for the overlapping part of four blocks; the coefficient ω takes the corresponding value according to k.
In the embodiment of the present invention, a motion compensation frame rate up-conversion method based on spatial prediction is provided. Compared with the prior art, it has the following beneficial effects: the invention provides a low-complexity motion compensation method based on spatial prediction, which performs full-search motion estimation on designated blocks of the frame to be interpolated according to a set template and obtains the motion vectors of the remaining blocks by spatial prediction; that is, to establish a balance between precision and complexity, the advantages of the full-search strategy and of spatial correlation are exploited simultaneously. The invention effectively combines the respective advantages of full-search motion estimation and spatial correlation, guaranteeing the accuracy of motion estimation while markedly reducing operational complexity and computation cost; it is evaluated on 9 CIF-format video sequences, with interpolation quality measured by Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM).
Drawings
Fig. 1 is a flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention;
fig. 2 illustrates a dividing manner of two types of blocks in a motion compensation frame rate up-conversion method based on spatial prediction according to an embodiment of the present invention;
fig. 3 is a simplified flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention; fig. 3 is a simplified flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention. As shown in fig. 1 and 3, the method of the embodiment of the present invention includes:
step a, inputting a video sequence, extracting two adjacent frames, and performing block segmentation and block classification on a frame to be interpolated.
Let the spatial resolution of the frame to be interpolated f_{t+0.5} be M×N and the block size be s (both M and N must be divisible by s); the frame to be interpolated then contains M×N/s^2 standard blocks. As shown in fig. 2, the blocks at the intersections of odd rows with odd columns and of even rows with even columns are classified as A-type, and the rest as B-type.
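To make the template concrete, the following is a minimal sketch in Python; the function name and the 0-based block indexing are illustrative conventions, not taken from the patent. In 1-based terms, the "odd row, odd column" and "even row, even column" blocks are exactly those whose two block indices have equal parity.

```python
def classify_blocks(M, N, s):
    """Partition an M x N frame into s x s blocks and label each A or B.

    A-type blocks sit where the (1-based) block row and block column are
    both odd or both even, i.e. where the 0-based indices have equal
    parity; the remaining blocks are B-type. M and N must be divisible
    by s, as the method requires.
    """
    assert M % s == 0 and N % s == 0
    a_blocks, b_blocks = [], []
    for br in range(M // s):
        for bc in range(N // s):
            (a_blocks if br % 2 == bc % 2 else b_blocks).append((br, bc))
    return a_blocks, b_blocks
```

For a CIF frame (352×288 luminance samples) with s = 8, for instance, this yields 792 A-type and 792 B-type blocks, so full-search motion estimation is run on only half of the blocks.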
Step b, obtaining the motion vectors of the A-type blocks by full-search motion estimation based on a successive elimination method, which specifically comprises the following steps:
step b 1: and calculating the coordinate of the t frame and the t +1 frame as the luminance accumulated sum of the (i, j) block by taking the coordinate (i, j) of the upper left corner of the block as a reference as follows:
Figure GDA0002317136580000071
Figure GDA0002317136580000072
where f_t(i+m, j+n) and f_{t+1}(i+m, j+n) are the luminance values of the t-th and (t+1)-th frames at coordinate (i+m, j+n), respectively, and (m, n) are the pixel coordinates within the block.
step b2: let the top-left pixel coordinate of the current A-type block be p = (i, j), and traverse the candidate blocks in the search window; the offset of the n-th candidate block in reference frame f_{t+1} relative to (i, j) is v'_n = (x, y), and the offset of the n-th candidate block in reference frame f_t relative to (i, j) is -v'_n. With the search window radius set to r, x, y ∈ [-r, r]. Let the initial offset be v'_0 = (r, r) and compute the initial difference D_0 of the current A-type block as:
D_0 = ||B_t(p - v'_0) - B_{t+1}(p + v'_0)||_1    (3)
where B_t(i-r, j-r) is the vector formed by arranging in rows all pixels of the block whose top-left corner coordinate in the t-th frame is (i-r, j-r), and ||·||_1 is the l_1 norm of a vector.
step b3: update v'_n to the coordinate offset of the next candidate block in the search window. If the following inequality is satisfied
|P_t(i-x, j-y) - P_{t+1}(i+x, j+y)| < D_0    (4)
then compute the difference D_n of the n-th candidate block as:
D_n = ||B_t(p - v'_n) - B_{t+1}(p + v'_n)||_1    (5)
Next, update the motion vector v_s of the current A-type block as:
v_s = v'_n = (x, y)    (6)
and update D_0 = min{D_n, D_0}. If inequality (4) does not hold, v_s remains unchanged.
step b4: go to step b3 until all candidate blocks within the search window have been traversed.
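As a concrete illustration of steps b1 to b4, here is a Python/NumPy sketch of the pruned full search for a single A-type block. It is an illustrative rendering, not the patent's reference code: `ft` and `ft1` are assumed to be 2-D uint8 luminance arrays for frames t and t+1, the helper `block_sum` stands in for the precomputed sums of equations (1) and (2), boundary handling is omitted (the block is assumed to lie at least r pixels inside the frame), and the update of v_s is written to fire only when D_n improves on D_0, which is the natural reading of equation (6) together with D_0 = min{D_n, D_0}.

```python
import numpy as np

def block_sum(frame, i, j, s):
    # Luminance sum P(i, j) of the s x s block whose top-left corner is (i, j).
    return int(frame[i:i+s, j:j+s].sum())

def full_search_sea(ft, ft1, p, s, r):
    """Full-search bidirectional motion estimation for one A-type block at
    p = (i, j), pruned by the successive elimination inequality (4)."""
    i, j = p
    vx, vy = r, r                                  # initial offset v'_0 = (r, r)
    D0 = int(np.abs(ft[i-vx:i-vx+s, j-vy:j-vy+s].astype(np.int64)
                    - ft1[i+vx:i+vx+s, j+vy:j+vy+s]).sum())    # eq. (3)
    vs = (vx, vy)
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            # Successive elimination: the l1 block difference is bounded
            # below by the difference of the block sums, so any candidate
            # violating inequality (4) cannot beat D0 and is skipped.
            if abs(block_sum(ft, i - x, j - y, s)
                   - block_sum(ft1, i + x, j + y, s)) >= D0:
                continue
            Dn = int(np.abs(ft[i-x:i-x+s, j-y:j-y+s].astype(np.int64)
                            - ft1[i+x:i+x+s, j+y:j+y+s]).sum())  # eq. (5)
            if Dn < D0:                            # keep the best match, eq. (6)
                D0, vs = Dn, (x, y)
    return vs
```

The pruning test costs one subtraction per candidate, while the full l_1 difference costs s² of them, which is where the successive elimination saves time.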
Step c, calculating the motion vectors of the B-type blocks; the specific process comprises:
step c1: let the top-left pixel coordinate of the current B-type block be p, and select the motion vectors v_{a1}, v_{a2}, v_{a3} and v_{a4} of the four A-type blocks adjacent to the B-type block as candidate vectors, forming the candidate vector set V_c:
V_c = {v_{a1}, v_{a2}, v_{a3}, v_{a4}}    (7)
step c2: compute the motion vector v_p of the B-type block according to the minimum-error matching principle:
v_p = \arg\min_{v \in V_c} ||B_t(p - v) - B_{t+1}(p + v)||_1    (8)
Step d, interpolating the intermediate frame with the overlapped block technique.
step d1: integrate the motion vectors of all A-type and B-type blocks into the motion vector field V_{t+0.5} of the frame to be interpolated f_{t+0.5};
step d2: compute the value of the frame to be interpolated f_{t+0.5} at pixel position p = (i, j) with the following formula:
f_{t+0.5}(p) = \sum \frac{\omega}{2} \left[ f_t(p - v_{i,j}) + f_{t+1}(p + v_{i,j}) \right]    (9)
where the sum runs over the blocks covering p, and v_{i,j} is the motion vector of V_{t+0.5} at p; the superscript k denotes the type of region: 1 for a non-overlapping part, 2 for the overlapping part of two blocks, and 3 for the overlapping part of four blocks. The coefficient ω takes the corresponding value according to k.
Simulation results
The proposed method is evaluated on 9 groups of test video sequences in CIF format. The comparison methods are:
1) the bidirectional motion estimation frame rate up-conversion technique proposed in "Dual motion estimation for frame rate up-conversion" (Suk-Ju Kang, Sungjoo Yoo and Young Hwan Kim, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1909-1914, 2010), abbreviated as the Dual_ME method; 2) the hybrid motion estimation frame rate up-conversion technique proposed in "Direction-Select Motion Estimation for Motion-Compensated Frame Rate Up-Conversion" (Dong-Gon Yoo, Suk-Ju Kang, and Young Hwan Kim, Journal of Display Technology, vol. 9, no. 10, pp. 840-850, 2013), abbreviated as the DS_ME method. The evaluation indices are peak signal-to-noise ratio and structural similarity, which reflect objective quality, and average per-frame processing time. The hardware platform is a Core i7 CPU computer with a clock frequency of 3.60 GHz and 8 GB of memory; the software platform is the Windows 7 64-bit operating system with Matlab R2014b simulation software.
TABLE 1 PSNR value comparison for different frame rate upconversion techniques
[table data not reproduced; the original appears only as an image in the source]
TABLE 2 SSIM value comparison for different frame rate up-conversion techniques
[table data not reproduced; the original appears only as an image in the source]
TABLE 3 comparison of time (in s/frame) required to interpolate a frame for different frame rate upconversion techniques
[table data not reproduced; the original appears only as an image in the source]
Table 1 lists the PSNR values of the different frame rate up-conversion techniques. Over the 9 test sequences, compared with the Dual_ME method the proposed method clearly improves PSNR, by up to 2.72 dB, achieving the goal of improving the quality of the restored video. Compared with the DS_ME method, DS_ME is slightly better on sequences that are largely static or contain little motion, such as foreman and mother, but for videos containing substantial motion, such as bus, city, football, mobile and stefan, the proposed method estimates the object trajectories well and improves PSNR by up to 3.21 dB. Table 2 lists the SSIM values of the different techniques; by comparison, the proposed method is clearly better than the two reference methods, and only slightly lower than the DS_ME method on videos with little motion. As shown in Table 3, the running time of the proposed method is lower than that of both the DS_ME and Dual_ME methods, reflecting lower computational complexity. Therefore, compared with the reference techniques, the computing-resource allocation of the invention is more effective: by dividing the block types and combining the advantages of full-search motion estimation and spatial correlation, computation time is saved while accuracy is guaranteed.
The above disclosure is only a few specific embodiments of the present invention, and those skilled in the art can make various modifications and variations of the present invention without departing from the spirit and scope of the present invention, and it is intended that the present invention encompass these modifications and variations as well as others within the scope of the appended claims and their equivalents.

Claims (1)

1. A motion compensation frame rate up-conversion method based on spatial prediction, characterized by comprising:
Step a, dividing a frame to be interpolated into blocks, classified into A-type blocks and B-type blocks;
Step b, performing full-search motion estimation based on a successive elimination method on the A-type blocks to determine the motion vectors of the A-type blocks;
Step c, combining the motion vector information of the A-type blocks, computing the motion vectors of the B-type blocks according to spatial correlation and the minimum-error matching principle;
Step d, combining the obtained motion vectors of the A-type blocks and B-type blocks into the motion vector field of the frame to be interpolated, and interpolating the frame to be interpolated f_{t+0.5} from the information of the reference frames f_t and f_{t+1} by an overlapped block motion compensation method; where f_t, f_{t+1} and f_{t+0.5} are the luminance values of the t-th, (t+1)-th and (t+0.5)-th frames, respectively;
Step a specifically comprises:
letting the spatial resolution of the frame to be interpolated f_{t+0.5} be M×N and the block size be s, so that each frame to be interpolated contains M×N/s^2 standard blocks; the blocks at the intersections of odd rows with odd columns and of even rows with even columns are classified as A-type blocks, the rest as B-type blocks, and M and N are divisible by s;
Step b specifically comprises:
taking the top-left corner coordinate (i, j) of a block as reference, computing the luminance sums of the block at (i, j) in the t-th and (t+1)-th frames as follows:
P_t(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_t(i+m, j+n)
P_{t+1}(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_{t+1}(i+m, j+n)
where f_t(i+m, j+n) and f_{t+1}(i+m, j+n) are the luminance values of the t-th and (t+1)-th frames at coordinate (i+m, j+n), respectively, and (m, n) are the pixel coordinates within the block;
letting the top-left pixel coordinate of the current A-type block be p = (i, j) and traversing the candidate blocks in the search window, where the offset of the n-th candidate block in reference frame f_{t+1} relative to (i, j) is v'_n = (x, y) and the offset of the n-th candidate block in reference frame f_t relative to (i, j) is -v'_n; setting the search window radius to r, so that x, y ∈ [-r, r]; letting the initial offset be v'_0 = (r, r) and computing the initial difference D_0 of the current A-type block as:
D_0 = ||B_t(p - v'_0) - B_{t+1}(p + v'_0)||_1
where B_t(i-r, j-r) is the vector formed by arranging in rows all pixels of the block whose top-left corner coordinate in the t-th frame is (i-r, j-r), and ||·||_1 is the l_1 norm of a vector; updating v'_n to the coordinate offset of the next candidate block in the search window; if the following inequality is satisfied
|P_t(i-x, j-y) - P_{t+1}(i+x, j+y)| < D_0
computing the difference D_n of the n-th candidate block as:
D_n = ||B_t(p - v'_n) - B_{t+1}(p + v'_n)||_1
updating the motion vector v_s of the current A-type block as:
v_s = v'_n = (x, y)
and updating D_0 = min{D_n, D_0}; otherwise v_s remains unchanged; proceeding in this way until all candidate blocks in the search window have been traversed;
Step c specifically comprises:
after the motion vectors of the A-type blocks have been obtained by full-search motion estimation, selecting the motion vectors v_{a1}, v_{a2}, v_{a3} and v_{a4} of the four A-type blocks adjacent to the B-type block as candidate vectors, forming the candidate vector set V_c:
V_c = {v_{a1}, v_{a2}, v_{a3}, v_{a4}}
letting the top-left pixel coordinate of the current B-type block be p, the motion vector v_p of the B-type block is computed according to the minimum-error matching principle:
v_p = \arg\min_{v \in V_c} ||B_t(p - v) - B_{t+1}(p + v)||_1
where B_t(p - v) is the vector formed by arranging in rows all pixels of the block whose top-left corner coordinate in the t-th frame is p - v, ||·||_1 is the l_1 norm of a vector, and v is a candidate vector;
Step d specifically comprises:
integrating the motion vectors of all A-type and B-type blocks into the motion vector field V_{t+0.5} of the frame to be interpolated f_{t+0.5}; computing the value of the frame to be interpolated f_{t+0.5} at pixel position p = (i, j) with the following formula:
f_{t+0.5}(p) = \sum \frac{\omega}{2} \left[ f_t(p - v_{i,j}) + f_{t+1}(p + v_{i,j}) \right]
where the sum runs over the blocks covering p, and v_{i,j} is the motion vector of V_{t+0.5} at p; k denotes the type of region, taking 1 for a non-overlapping part, 2 for the overlapping part of two blocks, and 3 for the overlapping part of four blocks; the coefficient ω takes the corresponding value according to the value of k.
CN201710831783.1A | 2017-09-15 (priority) | 2017-09-15 (filed) | Motion compensation frame rate up-conversion method based on spatial prediction | Active | CN107483960B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710831783.1A (CN107483960B) | 2017-09-15 | 2017-09-15 | Motion compensation frame rate up-conversion method based on spatial prediction

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710831783.1A (CN107483960B) | 2017-09-15 | 2017-09-15 | Motion compensation frame rate up-conversion method based on spatial prediction

Publications (2)

Publication Number | Publication Date
CN107483960A (en) | 2017-12-15
CN107483960B | 2020-06-02

Family

ID=60584535

Family Applications (1)

Application Number | Status | Publication | Title
CN201710831783.1A | Active | CN107483960B (en) | Motion compensation frame rate up-conversion method based on spatial prediction

Country Status (1)

Country | Link
CN (1) | CN107483960B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2019234671A1 (en) | 2018-06-07 | 2019-12-12 | Beijing Bytedance Network Technology Co., Ltd. | Improved PMMVD
TWI719519B (en) | 2018-07-02 | 2021-02-21 | 大陸商北京字節跳動網絡技術有限公司 | Block size restrictions for DMVR
CN109756778B (en)* | 2018-12-06 | 2021-09-14 | 中国人民解放军陆军工程大学 | Frame rate conversion method based on self-adaptive motion compensation
CN113630621B (en) | 2020-05-08 | 2022-07-19 | 腾讯科技(深圳)有限公司 | Video processing method, related device and storage medium
CN112995677B (en)* | 2021-02-08 | 2022-05-31 | 信阳师范学院 | A video frame rate up-conversion method based on pixel semantic matching

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103702128A (en)* | 2013-12-24 | 2014-04-02 | 浙江工商大学 | Interpolation frame generating method applied to up-conversion of video frame rate
CN104718756A (en)* | 2013-01-30 | 2015-06-17 | 英特尔公司 | Content-adaptive predictive pictures and functional predictive pictures utilizing modified references for next-generation video coding
CN105872559A (en)* | 2016-03-20 | 2016-08-17 | 信阳师范学院 | Frame rate up-conversion method based on mixed matching of chromaticity

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104718756A (en)* | 2013-01-30 | 2015-06-17 | 英特尔公司 | Content-adaptive predictive pictures and functional predictive pictures utilizing modified references for next-generation video coding
CN103702128A (en)* | 2013-12-24 | 2014-04-02 | 浙江工商大学 | Interpolation frame generating method applied to up-conversion of video frame rate
CN105872559A (en)* | 2016-03-20 | 2016-08-17 | 信阳师范学院 | Frame rate up-conversion method based on mixed matching of chromaticity

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Multi-Channel Mixed-Pattern Based Frame Rate Up-Conversion Using Spatio-Temporal Motion Vector Refinement and Dual-Weighted Overlapped Block Motion"; Ran Li et al.; Journal of Display Technology; 2014-12-31; vol. 10, no. 12; full text *
"Research and Implementation of Key Technologies for Frame Rate Up-Conversion" (帧频提升关键技术的研究及实现); 李真真; China Masters' Theses Full-text Database, Information Science and Technology, 2016, no. 6; 2016-06-15; full text *

Also Published As

Publication number | Publication date
CN107483960A (en) | 2017-12-15


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
