CN102547265B - Interframe prediction method and device - Google Patents

Interframe prediction method and device

Info

Publication number
CN102547265B
CN102547265B (application CN201010610022.1A)
Authority
CN
China
Prior art keywords
prime
reference frame
ref
width
height
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201010610022.1A
Other languages
Chinese (zh)
Other versions
CN102547265A (en)
Inventor
舒倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yunzhou Multimedia Technology Co., Ltd.
Original Assignee
SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Priority to CN201010610022.1A (CN102547265B/en)
Priority to PCT/CN2011/076246 (WO2012088848A1/en)
Publication of CN102547265A (en)
Application granted
Publication of CN102547265B (en)
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention provides an inter-frame prediction method comprising the following steps. Step 1: determine the relationship between a first reference frame and the current coding frame. Step 2: if the lens zooms in, process the first reference frame to obtain a second reference frame, set the current reference frame to the second reference frame, and go to step 3; if the lens zooms out, process the first reference frame to obtain a fourth reference frame, set the current reference frame to the fourth reference frame, and go to step 3; otherwise the current reference frame is the first reference frame, go to step 3. Step 3: perform inter-frame prediction on the current coding frame using the current reference frame.

Description

Inter-frame prediction method and device
Technical field
The present invention relates to the field of video coding, and in particular to an inter-frame prediction method and device.
Background
In current video coding, intra-frame prediction is commonly used to remove the spatial redundancy within an image, and inter-frame prediction to remove the temporal redundancy between images. Because the temporal redundancy between frames of a video source is far greater than the spatial redundancy within a frame, inter-frame prediction is especially important in video coding.
Inter prediction is divided by prediction direction into P-frame prediction and B-frame prediction. Mainstream P-frame prediction uses a previously coded frame as the reference frame for the current coding frame and exploits the similarity between the two to compress the current frame. This works well when the reference frame and the current coding frame are highly similar, but as that similarity decreases, compression efficiency drops sharply. The problem is especially pronounced when coding sources with low frame rates or global camera motion such as zooming.
Summary of the invention
The embodiments of the present invention aim to provide an inter-frame prediction method that addresses the prior-art problem of low similarity between the reference frame and the current coding frame, which leads to poor compression of the current coding frame, especially when coding sources with low frame rates or global camera zoom.
The invention provides an inter-frame prediction method comprising:
Step 1: determine the relationship between a first reference frame and the current coding frame;
Step 2: if the lens zooms in, process the first reference frame to obtain a second reference frame, set the current reference frame to the second reference frame, and go to step 3;
if the lens zooms out, process the first reference frame to obtain a fourth reference frame, set the current reference frame to the fourth reference frame, and go to step 3;
if the current reference frame is the first reference frame, go to step 3;
Step 3: perform inter-frame prediction on the current coding frame using the current reference frame.
The invention also provides an inter-frame prediction device comprising:
a judging unit for determining the relationship between the first reference frame and the current coding frame;
a lens zoom-in unit for processing the first reference frame to obtain the second reference frame when the lens zooms in, and setting the current reference frame to the second reference frame;
a lens zoom-out unit for processing the first reference frame to obtain the fourth reference frame when the lens zooms out, and setting the current reference frame to the fourth reference frame;
a prediction unit for performing inter-frame prediction on the current coding frame using the current reference frame.
The invention thus proposes an inter-frame prediction method and device. By determining the relationship between the current reference frame and the current coding frame and applying up- or down-sampling according to whether the lens zooms in or out, the method increases the similarity between the current reference frame and the coding frame and thereby improves the compression of the current coding frame. At low frame rates, where the shot changes more between frames, this prediction method yields an even more significant compression gain.
Brief description of the drawings
Fig. 1 is a flowchart of the method of embodiment 1;
Fig. 2 is a flowchart of the method of embodiment 2;
Fig. 3 is a structural diagram of the device of embodiment 3.
Detailed description
To make the object, technical scheme, and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments; for ease of explanation, only the parts relevant to the embodiments are shown. The specific embodiments described here serve only to explain the invention, not to limit it.
The invention proposes a new P-frame inter-frame prediction method. By determining the relationship between the current reference frame and the current coding frame and applying up- or down-sampling according to whether the lens zooms out or in, the method increases the similarity between the current reference frame and the coding frame and thereby improves the compression of the current coding frame. At low frame rates, where the shot changes more between frames, the compression gain is even more significant.
Embodiment 1. Referring to Fig. 1, the method is mainly used for P-frame prediction and proceeds as follows:
Step 101: determine the relationship between the first reference frame and the current coding frame, i.e. the relationship between the i-th reference frame ref_i and the current coding frame, and select the P-frame prediction method accordingly. If the lens zooms in, go to step 102; if the lens zooms out, go to step 103; if the lens neither zooms in nor out, the first reference frame is the current reference frame, go to step 104:
if (lens zooms in) go to step 102;
else if (lens zooms out) go to step 103;
else curr_ref_i = ref_i, go to step 104;
Here the first reference frame is the i-th reference frame ref_i, and curr_ref_i is the current reference frame after updating.
Step 102: if the lens zooms in, process the first reference frame to obtain the second reference frame, set the current reference frame to the second reference frame, and go to step 104.
Specifically: up-sample ref_i to obtain a new reference frame, the second reference frame ref_i'; curr_ref_i = ref_i'.
Step 103: if the lens zooms out, process the first reference frame to obtain the fourth reference frame, set the current reference frame to the fourth reference frame, and go to step 104.
Specifically: down-sample ref_i to obtain a new reference frame, the fourth reference frame ref_i'''; curr_ref_i = ref_i'''.
Step 104: perform inter-frame prediction on the current coding frame using the current reference frame.
By determining the relationship between the current reference frame and the current coding frame and applying up- or down-sampling according to whether the lens zooms in or out, the method increases the similarity between the current reference frame and the coding frame and thereby improves the compression of the current coding frame. At low frame rates, where the shot changes more between frames, the compression gain is even more significant.
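The control flow of steps 101 to 104 can be sketched in Python. This is a minimal illustration, not the patent's implementation: the patent does not specify the resampling filter or the scaling factor, so nearest-neighbour sampling with a hypothetical factor of 2 is assumed, and zoom detection is taken as a given input.

```python
def upsample(frame, factor=2):
    """Nearest-neighbour up-sampling: each pixel becomes a factor x factor block."""
    return [[frame[m // factor][n // factor]
             for n in range(len(frame[0]) * factor)]
            for m in range(len(frame) * factor)]

def downsample(frame, factor=2):
    """Nearest-neighbour down-sampling: keep every factor-th row and column."""
    return [row[::factor] for row in frame[::factor]]

def select_reference(ref, zoom):
    """Steps 101-104 (sketch): pick the current reference frame curr_ref_i.

    ref  -- first reference frame ref_i as a 2D list of pixel values
    zoom -- 'in' if the lens zooms in, 'out' if it zooms out, else None
    """
    if zoom == "in":           # step 102: second reference frame ref_i'
        return upsample(ref)
    if zoom == "out":          # step 103: fourth reference frame ref_i'''
        return downsample(ref)
    return ref                 # unchanged: curr_ref_i = ref_i
```

The selected frame is then used for ordinary motion-compensated prediction in step 104.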
Embodiment 2. Referring to Fig. 2, the method is mainly used for P-frame prediction and proceeds as follows:
Step 201: determine the relationship between the first reference frame and the current coding frame. If the lens zooms in, go to step 202; if the lens zooms out, go to step 203; if the lens neither zooms in nor out, the first reference frame is the current reference frame, go to step 204:
if (lens zooms in) go to step 202;
else if (lens zooms out) go to step 203;
else curr_ref_i = ref_i, go to step 204;
Here the first reference frame is the i-th reference frame ref_i, and curr_ref_i is the current reference frame after updating.
Step 202: if the lens zooms in, process the first reference frame to obtain the second reference frame, then process the second reference frame to obtain the third reference frame; set the current reference frame to the third reference frame.
Specifically:
Step 2021: up-sample the first reference frame ref_i to obtain a new reference frame, the second reference frame ref_i'.
Step 2022: delete boundary pixels of the second reference frame ref_i' to obtain the third reference frame ref_i'', so that the third reference frame ref_i'' has the same resolution as the first reference frame ref_i. (The second and third reference frames have different resolutions; the third reference frame is obtained by deleting boundary pixels of the second.)
The boundary pixel deletion applied to the second reference frame ref_i' is:
ref_i''(m, n) = ref_i'(m + d_height', n + d_width')
where o_width and o_height are the numbers of columns and rows of ref_i; m_width' and m_height' are the numbers of columns and rows of ref_i'; and m and n are the row and column indices of a reference-frame pixel;
d_width' = (m_width' - o_width)/2,
d_height' = (m_height' - o_height)/2
Step 2023: curr_ref_i = ref_i''.
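Step 2022 crops an equal border from each side of the up-sampled frame so that it returns to the original resolution. A minimal sketch of the deletion formula ref_i''(m, n) = ref_i'(m + d_height', n + d_width'), assuming the size differences are even so the offsets are integers:

```python
def delete_boundary_pixels(frame, o_height, o_width):
    """Crop `frame` (the up-sampled second reference frame ref_i') back to
    o_height x o_width by removing an equal border on each side:
    ref_i''(m, n) = ref_i'(m + d_height', n + d_width')."""
    d_height = (len(frame) - o_height) // 2      # d_height' = (m_height' - o_height)/2
    d_width = (len(frame[0]) - o_width) // 2     # d_width'  = (m_width'  - o_width)/2
    return [row[d_width:d_width + o_width]
            for row in frame[d_height:d_height + o_height]]
```

Because the output has the same dimensions as ref_i, the encoder's frame buffers need not be reallocated, which is the point the patent makes at the end of this embodiment.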
Step 203: if the lens zooms out, process the first reference frame to obtain the fourth reference frame, then process the fourth reference frame to obtain the fifth reference frame; set the current reference frame to the fifth reference frame.
Specifically:
Step 2031: down-sample the first reference frame ref_i to obtain a new reference frame, the fourth reference frame ref_i'''.
Step 2032: fill and expand the boundary pixels of the fourth reference frame ref_i''' to obtain the fifth reference frame ref_i'''', so that the fifth reference frame ref_i'''' has the same resolution as the first reference frame ref_i.
The boundary pixel filling applied to the fourth reference frame ref_i''' is:
Column fill:
ref_i''''(m, n) =
  ref_i'''(m, 0),            0 ≤ n < d_width''';
  ref_i'''(m, n),            d_width''' ≤ n < o_width - d_width''';
  ref_i'''(m, o_width - 1),  o_width - d_width''' ≤ n < o_width
Row fill:
ref_i''''(m, n) =
  ref_i'''(0, n),             0 ≤ m < d_height''';
  ref_i'''(m, n),             d_height''' ≤ m < o_height - d_height''';
  ref_i'''(o_height - 1, n),  o_height - d_height''' ≤ m < o_height
where o_width and o_height are the numbers of columns and rows of ref_i; m_width''' and m_height''' are the numbers of columns and rows of ref_i'''; and m and n are the row and column indices of a reference-frame pixel;
d_width''' = (o_width - m_width''')/2,
d_height''' = (o_height - m_height''')/2
Step 2033: curr_ref_i = ref_i''''.
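Step 2032 can be sketched the same way: the down-sampled frame is expanded back to the original resolution by replicating its border pixels, following the column-fill and row-fill formulas above. As before, even size differences are assumed so the border widths are integers.

```python
def fill_boundary_pixels(frame, o_height, o_width):
    """Expand `frame` (the down-sampled fourth reference frame ref_i''') to
    o_height x o_width by replicating edge pixels (column fill, then row fill)."""
    d_h = (o_height - len(frame)) // 2       # d_height''' = (o_height - m_height''')/2
    d_w = (o_width - len(frame[0])) // 2     # d_width'''  = (o_width  - m_width''')/2
    # Column fill: repeat the first/last column into the left/right border.
    rows = [[row[0]] * d_w + row + [row[-1]] * (o_width - len(row) - d_w)
            for row in frame]
    # Row fill: repeat the first/last padded row into the top/bottom border.
    top = [list(rows[0]) for _ in range(d_h)]
    bottom = [list(rows[-1]) for _ in range(o_height - len(rows) - d_h)]
    return top + rows + bottom
```

Edge replication rather than zero-filling keeps the padded border statistically close to the frame content, which helps motion search near the picture edges.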
Step 204: perform inter-frame prediction on the current coding frame using the current reference frame.
By determining the relationship between the current reference frame and the current coding frame, applying up- or down-sampling according to whether the lens zooms in or out, and further deleting pixels from the second reference frame or expanding the fourth reference frame so that the current reference frame has the same resolution as the first reference frame, the method avoids memory reallocation and eases code compatibility. This further increases the similarity between the current reference frame and the coding frame and improves the compression of the current coding frame. At low frame rates, where the shot changes more between frames, the compression gain is even more significant.
Embodiment 3. Corresponding to embodiment 1, the invention also provides a P-frame inter-prediction device. Referring to Fig. 3, the device comprises:
a judging unit 301 for determining the relationship between the first reference frame and the current coding frame;
a lens zoom-in unit 302 for processing the first reference frame to obtain the second reference frame when the lens zooms in, and setting the current reference frame to the second reference frame;
a lens zoom-out unit 303 for processing the first reference frame to obtain the fourth reference frame when the lens zooms out, and setting the current reference frame to the fourth reference frame;
a prediction unit 304 for performing inter-frame prediction on the current coding frame using the current reference frame.
The lens zoom-in unit processes the first reference frame to obtain the second reference frame by up-sampling the first reference frame.
Corresponding to embodiment 2, the lens zoom-in unit is further used, after obtaining the second reference frame, to process the second reference frame to obtain the third reference frame, with the current reference frame set to the third reference frame.
The lens zoom-in unit processes the second reference frame to obtain the third reference frame as follows:
delete boundary pixels of the second reference frame to obtain the third reference frame, so that the third reference frame has the same resolution as the first reference frame.
Specifically, the boundary pixel deletion applied to the second reference frame is:
ref_i''(m, n) = ref_i'(m + d_height', n + d_width')
where ref_i' is the second reference frame and ref_i'' is the third reference frame; o_width and o_height are the numbers of columns and rows of ref_i; m_width' and m_height' are the numbers of columns and rows of ref_i'; and m and n are the row and column indices of a reference-frame pixel;
d_width' = (m_width' - o_width)/2,
d_height' = (m_height' - o_height)/2
The lens zoom-out unit processes the first reference frame to obtain the fourth reference frame by down-sampling the first reference frame.
Corresponding to embodiment 2, the lens zoom-out unit is further used, after obtaining the fourth reference frame, to process the fourth reference frame to obtain the fifth reference frame; accordingly, the current reference frame is set to the fifth reference frame.
The lens zoom-out unit processes the fourth reference frame to obtain the fifth reference frame by filling and expanding the boundary pixels of the fourth reference frame, so that the fifth reference frame has the same resolution as the first reference frame.
Specifically, the boundary pixel filling applied to the fourth reference frame is:
Column fill:
ref_i''''(m, n) =
  ref_i'''(m, 0),            0 ≤ n < d_width''';
  ref_i'''(m, n),            d_width''' ≤ n < o_width - d_width''';
  ref_i'''(m, o_width - 1),  o_width - d_width''' ≤ n < o_width
Row fill:
ref_i''''(m, n) =
  ref_i'''(0, n),             0 ≤ m < d_height''';
  ref_i'''(m, n),             d_height''' ≤ m < o_height - d_height''';
  ref_i'''(o_height - 1, n),  o_height - d_height''' ≤ m < o_height
where ref_i''' is the fourth reference frame; o_width and o_height are the numbers of columns and rows of ref_i; m_width''' and m_height''' are the numbers of columns and rows of ref_i'''; and m and n are the row and column indices of a reference-frame pixel;
d_width''' = (o_width - m_width''')/2,
d_height''' = (o_height - m_height''')/2
Those of ordinary skill in the art will appreciate that all or part of the steps in the above embodiments can be implemented by hardware controlled by program instructions; the program can be stored in a computer-readable storage medium such as ROM, RAM, a magnetic disk, or an optical disc.
The foregoing describes only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention falls within its scope of protection.

Claims (14)

CN201010610022.1A  2010-12-28  2010-12-28  Interframe prediction method and device  Expired - Fee Related  CN102547265B (en)

Priority Applications (2)

Application NumberPriority DateFiling DateTitle
CN201010610022.1A  CN102547265B (en)  2010-12-28  2010-12-28  Interframe prediction method and device
PCT/CN2011/076246  WO2012088848A1 (en)  2010-12-28  2011-06-24  Interframe prediction method and device

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
CN201010610022.1A  CN102547265B (en)  2010-12-28  2010-12-28  Interframe prediction method and device

Publications (2)

Publication NumberPublication Date
CN102547265A CN102547265A (en)2012-07-04
CN102547265Btrue CN102547265B (en)2014-09-03

Family

ID=46353072

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN201010610022.1A  Expired - Fee Related  CN102547265B (en)  2010-12-28  2010-12-28  Interframe prediction method and device

Country Status (2)

CountryLink
CN (1)  CN102547265B (en)
WO (1)  WO2012088848A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN111510726B (en) *  2019-01-30  2023-01-24  Hangzhou Hikvision Digital Technology Co., Ltd.  Coding and decoding method and equipment thereof

Citations (3)

Publication numberPriority datePublication dateAssigneeTitle
CN1288337A (en) *  1999-09-10  2001-03-21  NTT DoCoMo, Inc.  Method and device for automatically converting and coding video image data
CN101252692A (en) *  2008-03-07  2008-08-27  Actions Semiconductor Co., Ltd.  Inter-frame prediction method and device and video coding and decoding equipment
CN101578879A (en) *  2006-11-07  2009-11-11  Samsung Electronics Co., Ltd.  Method and apparatus for video inter-prediction encoding/decoding

Family Cites Families (1)

Publication numberPriority datePublication dateAssigneeTitle
CN101783949B (en) *  2010-02-22  2012-08-01  Shenzhen Temobi Science & Tech Development Co., Ltd.  Mode selection method of skip block


Also Published As

Publication numberPublication date
WO2012088848A1 (en)  2012-07-05
CN102547265A (en)  2012-07-04

Similar Documents

PublicationPublication DateTitle
KR101199498B1 (en)  Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
CN101052128B (en)  Motion detection device and method, motion detection integrated circuit and image encoding device
CN105554506A (en)  Panorama video coding, decoding method and device based on multimode boundary filling
US12256101B2 (en)  Encoding device and encoding method
CN102611885B (en)  Encoding and decoding method and device
WO2015188585A1 (en)  Image encoding method and device, and image decoding method and device
US12323576B2 (en)  Video decoding device and video decoding method
CN104869399A (en)  Information processing method and electronic equipment
TWI601075B (en)  Motion compensation image processing apparatus and image processing method
CN102547265B (en)  Interframe prediction method and device
CN102572419B (en)  Interframe predicting method and device
TWI540883B (en)  Dynamic image predictive decoding method, dynamic image predictive decoding device, dynamic image predictive decoding program, dynamic image predictive coding method, dynamic image predictive coding device and dynamic image predictive coding program
CN102595117B (en)  Method and device for coding and decoding
KR101357755B1 (en)  Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
CN102595125B (en)  Bi-directional prediction method and device for P frames
WO2012120910A1 (en)  Moving image coding device and moving image coding method
KR101313224B1 (en)  Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
US12256055B2 (en)  Patch-based depth mapping method and apparatus for high-efficiency encoding/decoding of plenoptic video
CN103024327B (en)  Video recording method and video recording device
CN119135923A (en)  LTR frame update in video encoding
KR20120031494A (en)  Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
JP2011030034A (en)  Video coding prefiltering method, and apparatus and program therefor
CN103379348A (en)  Viewpoint synthesis method, device and encoder for depth information encoding

Legal Events

Date  Code  Title  Description
C06  Publication
PB01  Publication
C10  Entry into substantive examination
SE01  Entry into force of request for substantive examination
ASS  Succession or assignment of patent right

Owner name:SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY CO., LTD.

Free format text:FORMER OWNER: SHENZHEN TEMOBI SCIENCE + TECHNOLOGY CO., LTD.

Effective date:20140801

C41  Transfer of patent application or patent right or utility model
TA01  Transfer of patent application right

Effective date of registration:20140801

Address after:Unit B4 9 building 518057 Guangdong city of Shenzhen province Nanshan District high in the four EVOC Technology Building No. 31

Applicant after:Shenzhen Yunzhou Multimedia Technology Co., Ltd.

Address before:19, building 18, Changhong technology building, 518057 South twelve Road, South tech Zone, Nanshan District hi tech Zone, Guangdong, Shenzhen

Applicant before:Shenzhen Temobi Science & Tech Development Co.,Ltd.

C14  Grant of patent or utility model
GR01  Patent grant
C56  Change in the name or address of the patentee
CP02  Change in the address of a patent holder

Address after:The central Shenzhen city of Guangdong Province, 518057 Keyuan Road, Nanshan District science and Technology Park No. 15 Science Park Sinovac A Building 1 unit 403, No. 405 unit

Patentee after:Shenzhen Yunzhou Multimedia Technology Co., Ltd.

Address before:Unit B4 9 building 518057 Guangdong city of Shenzhen province Nanshan District high in the four EVOC Technology Building No. 31

Patentee before:Shenzhen Yunzhou Multimedia Technology Co., Ltd.

CF01  Termination of patent right due to non-payment of annual fee

Granted publication date:20140903

Termination date:20191228

