RELATED APPLICATION INFORMATION The following co-pending U.S. patent applications relate to the present application and are hereby incorporated herein by reference: 1) U.S. patent application Ser. No. ______, entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed concurrently herewith; 2) U.S. patent application Ser. No. ______, entitled, “Intraframe and Interframe Interlace Coding and Decoding,” filed concurrently herewith; 3) U.S. patent application Ser. No. 10/321,415, entitled, “Skip Macroblock Coding,” filed Dec. 16, 2002; and 4) U.S. patent application Ser. No. 10/379,615, entitled “Chrominance Motion Vector Rounding,” filed Mar. 4, 2003.
COPYRIGHT AUTHORIZATION A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
TECHNICAL FIELD Techniques and tools for coding and decoding motion vector information are described. A video encoder uses an extended motion vector in a motion vector syntax for encoding predicted video frames.
BACKGROUND Digital video consumes large amounts of storage and transmission capacity. A typical raw digital video sequence includes 15 or 30 frames per second. Each frame can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
Most computers and computer networks lack the resources to process raw digital video. For this reason, engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video. Or, compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
In general, video compression techniques include intraframe compression and interframe compression. Intraframe compression techniques compress individual frames, typically called I-frames or key frames. Interframe compression techniques compress frames with reference to preceding and/or following frames, which are typically called predicted frames, P-frames, or B-frames.
Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder. The WMV8 encoder uses intraframe and interframe compression, and the WMV8 decoder uses intraframe and interframe decompression.
A. Intraframe Compression in WMV8
FIG. 1 illustrates block-based intraframe compression 100 of a block 105 of pixels in a key frame in the WMV8 encoder. A block is a set of pixels, for example, an 8×8 arrangement of pixels. The WMV8 encoder splits a key video frame into 8×8 blocks of pixels and applies an 8×8 Discrete Cosine Transform [“DCT”] 110 to individual blocks such as the block 105. A DCT is a type of frequency transform that converts the 8×8 block of pixels (spatial information) into an 8×8 block of DCT coefficients 115, which are frequency information. The DCT operation itself is lossless or nearly lossless.
The encoder then quantizes 120 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 125. For example, the encoder applies a uniform, scalar quantization step size to each coefficient. Quantization is lossy. The encoder then prepares the 8×8 block of quantized DCT coefficients 125 for entropy encoding, which is a form of lossless compression. The exact type of entropy encoding can vary depending on whether a coefficient is a DC coefficient (lowest frequency), an AC coefficient (other frequencies) in the top row or left column, or another AC coefficient.
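For illustration only, the following sketch shows an 8×8 DCT followed by uniform, scalar quantization as described above. This is not the WMV8 implementation; the step size value and the use of numpy are assumptions of the example.

```python
# Illustrative sketch of transform-then-quantize; not the WMV8 code path.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)   # DC row scaling
    return m

def forward_dct_8x8(block):
    d = dct_matrix(8)
    return d @ block @ d.T       # separable 2-D DCT; lossless in exact arithmetic

def quantize(coeffs, qp=8):      # qp is a hypothetical step size
    return np.round(coeffs / qp).astype(int)   # the lossy step

pixels = np.random.randint(0, 256, (8, 8)).astype(float)
quantized = quantize(forward_dct_8x8(pixels))
```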
The encoder encodes the DC coefficient 126 as a differential from the DC coefficient 136 of a neighboring 8×8 block, which is a previously encoded neighbor (e.g., top or left) of the block being encoded. (FIG. 1 shows a neighbor block 135 that is situated to the left of the block being encoded in the frame.) The encoder entropy encodes 140 the differential.
The entropy encoder can encode the left column or top row of AC coefficients as a differential from a corresponding column or row of the neighboring 8×8 block. FIG. 1 shows the left column 127 of AC coefficients encoded as a differential 147 from the left column 137 of the neighboring (to the left) block 135. The differential coding increases the chance that the differential coefficients have zero values. The remaining AC coefficients are from the block 125 of quantized DCT coefficients.
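A minimal sketch of the differential prediction just described, assuming a left neighbor: the DC coefficient and the left column of AC coefficients are replaced by differences from the neighbor's corresponding values. The helper name and test values are illustrative; for a top neighbor, the top row would be differenced instead.

```python
import numpy as np

def predict_from_left(block, left_neighbor):
    """Difference the DC coefficient and left-column AC coefficients of an
    8x8 quantized coefficient block against the left neighbor's column."""
    out = block.copy()
    out[:, 0] = block[:, 0] - left_neighbor[:, 0]   # row 0 entry is the DC differential
    return out

cur = np.full((8, 8), 2); cur[0, 0] = 40
left = np.full((8, 8), 2); left[0, 0] = 37
diff = predict_from_left(cur, left)   # DC differential is 3; left-column ACs become 0
```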
The encoder scans 150 the 8×8 block 145 of predicted, quantized AC DCT coefficients into a one-dimensional array 155 and then entropy encodes the scanned AC coefficients using a variation of run length coding 160. The encoder selects an entropy code from one or more run/level/last tables 165 and outputs the entropy code.
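The scan and run/level/last step can be sketched as follows. The zigzag order and (run, level, last) triples illustrate the idea only; WMV8's actual scan patterns and code tables are not reproduced here.

```python
# Illustrative zigzag scan and run/level/last symbol generation.
import numpy as np

def zigzag_order(n=8):
    """Index pairs of an n x n block in zigzag (anti-diagonal) order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1], rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def run_level_last(block):
    """Convert a quantized block into (run, level, last) symbols."""
    coeffs = [block[r][c] for r, c in zigzag_order(len(block))]
    nonzero = [(i, v) for i, v in enumerate(coeffs) if v != 0]
    symbols, prev = [], -1
    for j, (i, v) in enumerate(nonzero):
        symbols.append((i - prev - 1, v, j == len(nonzero) - 1))  # zero run, level, last flag
        prev = i
    return symbols

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[2, 0] = 12, -3, 1
print(run_level_last(block))   # [(0, 12, False), (1, -3, False), (2, 1, True)]
```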
B. Interframe Compression in WMV8
Interframe compression in the WMV8 encoder uses block-based motion compensated prediction coding followed by transform coding of the residual error. FIGS. 2 and 3 illustrate the block-based interframe compression for a predicted frame in the WMV8 encoder. In particular, FIG. 2 illustrates motion estimation for a predicted frame 210, and FIG. 3 illustrates compression of a prediction residual for a motion-estimated block of a predicted frame.
For example, the WMV8 encoder splits a predicted frame into 8×8 blocks of pixels. Groups of four 8×8 blocks form macroblocks. For each macroblock, a motion estimation process is performed. The motion estimation approximates the motion of the macroblock of pixels relative to a reference frame, for example, a previously coded, preceding frame. In FIG. 2, the WMV8 encoder computes a motion vector for a macroblock 215 in the predicted frame 210. To compute the motion vector, the encoder searches in a search area 235 of a reference frame 230. Within the search area 235, the encoder compares the macroblock 215 from the predicted frame 210 to various candidate macroblocks in order to find a candidate macroblock that is a good match. After the encoder finds a good matching macroblock, the encoder outputs information specifying the motion vector (entropy coded) for the matching macroblock so the decoder can find the matching macroblock during decoding. When decoding the predicted frame 210 with motion compensation, a decoder uses the motion vector to compute a prediction macroblock for the macroblock 215 using information from the reference frame 230. The prediction for the macroblock 215 is rarely perfect, so the encoder usually encodes 8×8 blocks of pixel differences (also called the error or residual blocks) between the prediction macroblock and the macroblock 215 itself.
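A minimal full-search sketch of the block matching described above: for a macroblock in the current frame, find the displacement within a search window of the reference frame that minimizes the sum of absolute differences (SAD). The window size, block size, and SAD criterion are illustrative assumptions, not WMV8's actual search strategy.

```python
# Illustrative full-search block-matching motion estimation.
import numpy as np

def motion_search(current, reference, top, left, block=16, search=7):
    """Return (dy, dx) minimizing SAD for the block at (top, left)."""
    target = current[top:top + block, left:left + block].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + block > reference.shape[0] or c + block > reference.shape[1]:
                continue   # candidate falls outside the reference frame
            cand = reference[r:r + block, c:c + block].astype(int)
            sad = np.abs(target - cand).sum()
            if best_sad is None or sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, (2, -3), axis=(0, 1))   # shift content down 2, left 3
print(motion_search(cur, ref, 16, 16))     # -> (-2, 3)
```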
FIG. 3 illustrates an example of computation and encoding of an error block 335 in the WMV8 encoder. The error block 335 is the difference between the predicted block 315 and the original current block 325. The encoder applies a DCT 340 to the error block 335, resulting in an 8×8 block 345 of coefficients. The encoder then quantizes 350 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 355. The quantization step size is adjustable. Quantization results in loss of precision, but not complete loss of the information for the coefficients.
The encoder then prepares the 8×8 block 355 of quantized DCT coefficients for entropy encoding. The encoder scans 360 the 8×8 block 355 into a one-dimensional array 365 with 64 elements, such that coefficients are generally ordered from lowest frequency to highest frequency, which typically creates long runs of zero values.
The encoder entropy encodes the scanned coefficients using a variation of run length coding 370. The encoder selects an entropy code from one or more run/level/last tables 375 and outputs the entropy code.
FIG. 4 shows an example of a corresponding decoding process 400 for an inter-coded block. Due to the quantization of the DCT coefficients, the reconstructed block 475 is not identical to the corresponding original block. The compression is lossy.
In summary of FIG. 4, a decoder decodes (410, 420) entropy-coded information representing a prediction residual using variable length decoding 410 with one or more run/level/last tables 415 and run length decoding 420. The decoder inverse scans 430 a one-dimensional array 425 storing the entropy-decoded information into a two-dimensional block 435. The decoder inverse quantizes and inverse discrete cosine transforms (together, 440) the data, resulting in a reconstructed error block 445. In a separate motion compensation path, the decoder computes a predicted block 465 using motion vector information 455 for displacement from a reference frame. The decoder combines 470 the predicted block 465 with the reconstructed error block 445 to form the reconstructed block 475.
The amount of change between the original and reconstructed frame is termed the distortion and the number of bits required to code the frame is termed the rate for the frame. The amount of distortion is roughly inversely proportional to the rate. In other words, coding a frame with fewer bits (greater compression) will result in greater distortion, and vice versa.
C. Bi-directional Prediction
Bi-directionally coded images (e.g., B-frames) use two images from the source video as reference (or anchor) images. For example, referring to FIG. 5, a B-frame 510 in a video sequence has a temporally previous reference frame 520 and a temporally future reference frame 530.
Some conventional encoders use five prediction modes (forward, backward, direct, interpolated and intra) to predict regions in a current B-frame. In intra mode, an encoder does not predict a macroblock from either reference image, and therefore calculates no motion vectors for the macroblock. In forward and backward modes, an encoder predicts a macroblock using either the previous or future reference frame, and therefore calculates one motion vector for the macroblock. In direct and interpolated modes, an encoder predicts a macroblock in a current frame using both reference frames. In interpolated mode, the encoder explicitly calculates two motion vectors for the macroblock. In direct mode, the encoder derives implied motion vectors by scaling the co-located motion vector in the future reference frame, and therefore does not explicitly calculate any motion vectors for the macroblock.
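A hedged sketch of direct-mode derivation: the implied forward and backward motion vectors are obtained by scaling the co-located motion vector of the future reference frame by relative temporal distances. The exact scaling and rounding rules vary by codec; this is the common MPEG-style formulation, shown for illustration only.

```python
# Illustrative direct-mode motion vector scaling (MPEG-style, assumed).
def direct_mode_mvs(colocated_mv, trb, trd):
    """colocated_mv: (x, y) MV of the co-located MB in the future reference.
    trb: temporal distance from past reference to the B-frame.
    trd: temporal distance from past reference to the future reference."""
    mvx, mvy = colocated_mv
    forward = (trb * mvx // trd, trb * mvy // trd)
    backward = (forward[0] - mvx, forward[1] - mvy)   # equals (trb - trd) * mv / trd
    return forward, backward

print(direct_mode_mvs((8, -4), trb=1, trd=2))   # ((4, -2), (-4, 2))
```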
D. Interlace Coding
A typical interlace video frame consists of two fields scanned at different times. For example, referring to FIG. 6, an interlace video frame 600 includes top field 610 and bottom field 620. Typically, the odd-numbered lines (top field) are scanned at one time (e.g., time t) and the even-numbered lines (bottom field) are scanned at a different (typically later) time (e.g., time t+1). This arrangement can create jagged tooth-like features in regions of a frame where motion is present because the two fields are scanned at different times. On the other hand, in stationary regions, image structures in the frame may be preserved (i.e., the interlace artifacts visible in motion regions may not be visible in stationary regions). Macroblocks in interlace frames can be field-coded or frame-coded. In field-coded macroblocks, the top-field lines and bottom-field lines are rearranged, such that the top field lines appear at the top of the macroblock, and the bottom field lines appear at the bottom of the macroblock. Predicted field-coded macroblocks typically have one motion vector for each field in the macroblock. In frame-coded macroblocks, the field lines alternate between top-field lines and bottom-field lines. Predicted frame-coded macroblocks typically have one motion vector for the macroblock.
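The field rearrangement for field-coded macroblocks can be sketched as a simple line permutation. This is a numpy illustration of the idea, not any codec's actual buffer layout.

```python
# Illustrative field/frame line permutation for a 16-line macroblock.
import numpy as np

def frame_to_field(mb):
    """Reorder lines so all top-field (even-index) lines come first."""
    return np.concatenate([mb[0::2], mb[1::2]], axis=0)

def field_to_frame(mb):
    """Inverse permutation: re-interleave the two fields."""
    out = np.empty_like(mb)
    half = mb.shape[0] // 2
    out[0::2], out[1::2] = mb[:half], mb[half:]
    return out

mb = np.arange(16)[:, None] * np.ones((1, 16), dtype=int)   # line i filled with value i
fields = frame_to_field(mb)
assert (field_to_frame(fields) == mb).all()
print(fields[:, 0])   # [0 2 4 ... 14 1 3 ... 15]
```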
E. Standards for Video Compression and Decompression
Aside from WMV8, several international standards relate to video compression and decompression. These standards include the Moving Picture Experts Group [“MPEG”] 1, 2, and 4 standards and the H.261, H.262, and H.263 standards from the International Telecommunication Union [“ITU”]. Like WMV8, these standards use a combination of intraframe and interframe compression.
For example, advanced video compression or encoding techniques (including techniques in the MPEG and H.26x standards and in WMV8) are based on the exploitation of temporal coherence of typical video sequences. Image areas are tracked as they move over time, and information pertaining to the motion of these areas is compressed as part of the bit stream. Traditionally, a standard P-frame is encoded by computing and storing motion information in the form of two-dimensional displacement vectors corresponding to regularly-sized image tiles (e.g., macroblocks). For example, a macroblock may have one motion vector (a 1MV macroblock) for the macroblock or a motion vector for each of four blocks in the macroblock (a 4MV macroblock). Subsequently, the difference between the input frame and its motion compensated prediction is compressed, usually in a suitable transform domain, and added to an encoded bit stream. Typically, the motion vector component makes up between 10% and 30% of the size of the bit stream. Therefore, it can be appreciated that efficient motion vector coding is a key factor in efficient video compression.
Motion vector coding efficiency can be achieved in different ways. For example, motion vectors are often highly correlated between neighboring macroblocks. For efficiency, a motion vector of a given macroblock can be differentially coded from its prediction based on a causal neighborhood of adjacent macroblocks. A few exceptions to this general rule are observed in prior algorithms, such as those described in MPEG-4 and WMV8:
1. When the predicted motion vector lies outside a certain area (typically ±16 pixels from zero, for either component), the prediction is pulled back to the nearest point within this area (see the sketch after this list).
2. When the vectors making up the causal neighborhood of the current macroblock are diverse (e.g., at motion discontinuities), the “Hybrid Motion Vector” mode is employed: the prediction is signaled by a codeword that indicates whether to use the motion vector to the top or to the left (or any other combination).
3. When a macroblock is essentially unchanged from its reference frame (i.e., a (0, 0) motion vector (no motion) and no residual components), it is indicated as being “skipped.”
4. A macroblock may be coded as intra (i.e., not differentially predicted from the previous frame). In this case, no motion vector is sent. (Otherwise, for non-skipped macroblocks that are not intra coded, a motion vector is always sent.)
5. Intra coded macroblocks are indicated by an “I/P switch”, which is jointly coded with a coded block pattern (or CBP). The CBP indicates which of the blocks making up a macroblock have attached residual information.
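A minimal sketch of the pull-back rule in item 1 above: each component of the predicted motion vector is clamped to the allowed range (the ±16-pixel area mentioned above) before use. The function name is illustrative.

```python
# Illustrative prediction pull-back: clamp each component to [-limit, limit].
def pull_back(pred_mv, limit=16):
    clamp = lambda v: max(-limit, min(limit, v))
    return (clamp(pred_mv[0]), clamp(pred_mv[1]))

print(pull_back((23, -5)))   # (16, -5)
```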
Given the critical importance of video compression and decompression to digital video, it is not surprising that video compression and decompression are richly developed fields. Whatever the benefits of previous video compression and decompression techniques, however, they do not have the advantages of the following techniques and tools.
SUMMARY In summary, the detailed description is directed to various techniques and tools for encoding and decoding motion vector information for video images. The various techniques and tools can be used in combination or independently.
In one aspect, a video encoder jointly codes for a set of pixels (e.g., block, macroblock, etc.) a switch code with motion vector information (e.g., a motion vector for an inter-coded block/macroblock, or a pseudo motion vector for an intra-coded block/macroblock). The switch code indicates whether a set of pixels is intra-coded.
In another aspect, a video encoder yields an extended motion vector code by jointly coding for a set of pixels a switch code, motion vector information, and a terminal symbol indicating whether subsequent data is encoded for the set of pixels. The subsequent data can include coded block pattern data and/or residual data for macroblocks. The extended motion vector code can be included in an alphabet or table of codes. In one aspect, the alphabet lacks a code that would represent a skip condition for the set of pixels.
In another aspect, an encoder/decoder selects motion vector predictors for current macroblocks (e.g., 1MV or mixed 1MV/4MV macroblocks) in a video image (e.g., an interlace or progressive P-frame or B-frame).
For example, an encoder/decoder selects a predictor from a set of candidates for a last macroblock of a macroblock row. The set of candidates comprises motion vectors from a set of macroblocks adjacent to the current macroblock. The set of macroblocks adjacent to the current macroblock consists of a top adjacent macroblock, a left adjacent macroblock, and a top-left adjacent macroblock. The predictor can be a motion vector for an individual block within a macroblock.
As another example, an encoder/decoder selects a predictor from a set of candidates comprising motion vectors from a set of blocks in macroblocks adjacent to a current macroblock. The set of blocks consists of a bottom-left block of a top adjacent macroblock, a top-right block of a left adjacent macroblock, and a bottom-right block of a top-left adjacent macroblock.
As another example, an encoder/decoder selects a predictor for a current top-left block in the first macroblock of a macroblock row from a set of candidates. The set of candidates comprises a zero-value motion vector and motion vectors from a set of blocks in an adjacent macroblock. The set of blocks consists of a bottom-left block of a top adjacent macroblock, and a bottom-right block of the top adjacent macroblock.
As another example, an encoder/decoder selects a predictor for a current top-right block of a current macroblock from a set of candidates. The current macroblock is the last macroblock of a macroblock row, and the set of candidates consists of a motion vector from the top-left block of the current macroblock, a motion vector from a bottom-left block of a top adjacent macroblock, and a motion vector from a bottom-right block of the top adjacent macroblock.
In another aspect, a video encoder/decoder calculates a motion vector predictor for a set of pixels (e.g., a 1MV or mixed 1MV/4MV macroblock) based on analysis of candidates, and compares the calculated predictor with one or more of the candidates (e.g., the left and top candidates). Based on the comparison, the encoder/decoder determines whether to replace the calculated motion vector predictor with a hybrid motion vector of one of the candidates. The set of pixels can be a skipped set of pixels (e.g., a skipped macroblock). The hybrid motion vector can be indicated by an indicator bit.
In another aspect, a video encoder/decoder selects a motion vector mode for a predicted image from a set of modes comprising a mixed one- and four-motion vector, quarter-pixel resolution, bicubic interpolation filter mode; a one-motion vector, quarter-pixel resolution, bicubic interpolation filter mode; a one-motion vector, half-pixel resolution, bicubic interpolation filter mode; and a one-motion vector, half-pixel resolution, bilinear interpolation filter mode. The mode can be signaled in a bit stream at various levels (e.g., frame-level, slice-level, group-of-pictures level, etc.). The set of modes also can include other modes, such as a four-motion vector, ⅛-pixel, six-tap interpolation filter mode.
In another aspect, for a set of pixels, a video encoder finds a motion vector component value and a motion vector predictor component value, each within a bounded range. The encoder calculates a differential motion vector component value (which is outside the bounded range) based on the motion vector component value and the motion vector predictor component value. The encoder represents the differential motion vector component value with a signed binary code in a bit stream. The signed binary code is operable to allow reconstruction of the differential motion vector component value. For example, the encoder performs rollover arithmetic to convert the differential motion vector component value into a signed binary code. The number of bits in the signed binary code can vary based on motion data (e.g., motion vector component direction (x or y), motion vector resolution, motion vector range).
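A hedged sketch of the rollover arithmetic described above, assuming a component range of [−range, range) so the alphabet has 2·range values and any differential wraps into a fixed-width code. The range value is illustrative, not a value from the patent's tables.

```python
# Illustrative rollover (wraparound) coding of a differential MV component.
def encode_differential(mv, pred, range_=32):
    size = 2 * range_
    diff = (mv - pred) % size      # rollover: wrap into [0, size)
    if diff >= range_:
        diff -= size               # re-center into [-range_, range_)
    return diff

def decode_component(diff, pred, range_=32):
    size = 2 * range_
    mv = (pred + diff) % size      # same wraparound on reconstruction
    if mv >= range_:
        mv -= size
    return mv

mv, pred = 30, -25                 # the raw differential 55 is out of range
d = encode_differential(mv, pred)  # -> -9 after rollover
assert decode_component(d, pred) == mv
```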
In another aspect, a video decoder decodes a set of pixels in an encoded bit stream by receiving an extended motion vector code for the set of pixels. The extended motion vector code reflects joint encoding of motion information together with information indicating whether the set of pixels is intra-coded or inter-coded and with a terminal symbol. The decoder determines whether subsequent data for the set of pixels is included in the encoded bit stream based on the extended motion vector code (e.g., by the terminal symbol in the code). For macroblocks (e.g., 4:2:0, 4:1:1, or 4:2:2 macroblocks), subsequent data can include a coded block pattern code and/or residual information for one or more blocks in the macroblock.
In the bit stream, the extended motion vector code can be preceded by, for example, header information or a modified coded block pattern code, and can be followed by other information for the set of pixels, such as a coded block pattern code. The decoder can receive more than one extended motion vector code for a set of pixels. For example, the decoder can receive two such codes for a bi-directionally predicted, or field-coded interlace macroblock. Or, the decoder can receive an extended motion vector code for each block in a macroblock.
In another aspect, a computer system includes means for decoding images, which comprises means for receiving an extended motion vector code and means for determining whether subsequent data for the set of pixels is included in the encoded bit stream based at least in part upon the received extended motion vector code.
In another aspect, a computer system includes means for encoding images, which comprises means for sending an extended motion vector code for a set of pixels as part of an encoded bit stream.
Additional features and advantages will be made apparent from the following detailed description of different embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram showing block-based intraframe compression of an 8×8 block of pixels according to the prior art.
FIG. 2 is a diagram showing motion estimation in a video encoder according to the prior art.
FIG. 3 is a diagram showing block-based interframe compression for an 8×8 block of prediction residuals in a video encoder according to the prior art.
FIG. 4 is a diagram showing block-based interframe decompression for an 8×8 block of prediction residuals in a video decoder according to the prior art.
FIG. 5 is a diagram showing a B-frame with past and future reference frames according to the prior art.
FIG. 6 is a diagram showing an interlaced video frame according to the prior art.
FIG. 7 is a block diagram of a suitable computing environment in which several described embodiments may be implemented.
FIG. 8 is a block diagram of a generalized video encoder system used in several described embodiments.
FIG. 9 is a block diagram of a generalized video decoder system used in several described embodiments.
FIG. 10 is a diagram showing a macroblock syntax with an extended motion vector symbol for use in coding progressive 1MV macroblocks in P-frames, forward/backward predicted macroblocks in B-frames, and interlace frame-type macroblocks.
FIG. 11 is a diagram showing a macroblock syntax with an extended motion vector symbol for use in coding progressive 4MV macroblocks in P-frames.
FIG. 12 is a diagram showing a macroblock syntax with extended motion vector symbols for use in coding progressive interpolated macroblocks in B-frames, forward/backward predicted macroblocks in B-frames, and interlace frame-type macroblocks.
FIG. 13 is a diagram showing a macroblock syntax with extended motion vector symbols for use in coding interlace macroblocks in P-frames and forward/backward predicted field-type macroblocks in B-frames.
FIG. 14 is a diagram showing a macroblock syntax with extended motion vector symbols for use in coding interlace interpolated field-type macroblocks in B-frames.
FIG. 15 is a diagram showing a macroblock comprising four blocks.
FIGS. 16A and 16B are diagrams showing candidate motion vector predictors for a 1MV macroblock in a P-frame.
FIGS. 17A and 17B are diagrams showing candidate motion vector predictors for a 1MV macroblock in a mixed 1MV/4MV P-frame.
FIGS. 18A and 18B are diagrams showing candidate motion vector predictors for a block at position 0 in a 4MV macroblock in a mixed 1MV/4MV P-frame.
FIGS. 19A and 19B are diagrams showing candidate motion vector predictors for a block at position 1 in a 4MV macroblock in a mixed 1MV/4MV P-frame.
FIG. 20 is a diagram showing candidate motion vector predictors for a block at position 2 in a 4MV macroblock in a mixed 1MV/4MV P-frame.
FIG. 21 is a diagram showing candidate motion vector predictors for a block at position 3 in a 4MV macroblock in a mixed 1MV/4MV P-frame.
FIGS. 22A and 22B are diagrams showing candidate motion vector predictors for a frame-type macroblock in an interlace P-frame.
FIGS. 23A and 23B are diagrams showing candidate motion vector predictors for a field-type macroblock in an interlace P-frame.
FIG. 24 is a flow chart showing a technique for performing a pull back for a motion vector predictor.
FIG. 25 is a flow chart showing a technique for determining whether to use a hybrid motion vector for a set of pixels.
FIG. 26 is a flow chart showing a technique for applying rollover arithmetic to a differential motion vector.
DETAILED DESCRIPTION The present application relates to techniques and tools for coding motion information in video image sequences. Bit stream formats or syntaxes include flags and other codes to incorporate the techniques. Different bit stream formats can comprise different layers or levels (e.g., sequence level, frame/picture/image level, macroblock level, and/or block level).
The various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools.
I. Computing Environment
FIG. 7 illustrates a generalized example of a suitable computing environment 700 in which several of the described embodiments may be implemented. The computing environment 700 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments.
With reference to FIG. 7, the computing environment 700 includes at least one processing unit 710 and memory 720. In FIG. 7, this most basic configuration 730 is included within a dashed line. The processing unit 710 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The memory 720 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 720 stores software 780 implementing a video encoder or decoder.
A computing environment may have additional features. For example, the computing environment 700 includes storage 740, one or more input devices 750, one or more output devices 760, and one or more communication connections 770. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 700. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 700, and coordinates activities of the components of the computing environment 700.
The storage 740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 700. The storage 740 stores instructions for the software 780 implementing the video encoder or decoder.
The input device(s) 750 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 700. For audio or video encoding, the input device(s) 750 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 700. The output device(s) 760 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 700.
The communication connection(s) 770 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 700, computer-readable media include memory 720, storage 740, communication media, and combinations of any of the above.
The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “predict,” “choose,” “compensate,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
II. Generalized Video Encoder and Decoder
FIG. 8 is a block diagram of a generalized video encoder 800, and FIG. 9 is a block diagram of a generalized video decoder 900.
The relationships shown between modules within the encoder and decoder indicate the main flow of information in the encoder and decoder; other relationships are not shown for the sake of simplicity. In particular, FIGS. 8 and 9 generally do not show side information indicating the encoder settings, modes, tables, etc. used for a video sequence, frame, macroblock, block, etc. Such side information is sent in the output bit stream, typically after entropy encoding of the side information. The format of the output bit stream can be a Windows Media Video format or another format.
The encoder 800 and decoder 900 are block-based and use a 4:2:0 macroblock format with each macroblock including four 8×8 luminance blocks and two 8×8 chrominance blocks, or a 4:1:1 macroblock format with each macroblock including four 8×8 luminance blocks and four 4×8 chrominance blocks. Alternatively, the encoder 800 and decoder 900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration.
Depending on implementation and the type of compression desired, modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
A. Video Encoder
FIG. 8 is a block diagram of a general video encoder system 800. The encoder system 800 receives a sequence of video frames including a current frame 805, and produces compressed video information 895 as output. Particular embodiments of video encoders typically use a variation or supplemented version of the generalized encoder 800.
The encoder system 800 compresses predicted frames and key frames. For the sake of presentation, FIG. 8 shows a path for key frames through the encoder system 800 and a path for predicted frames. Many of the components of the encoder system 800 are used for compressing both key frames and predicted frames. The exact operations performed by those components can vary depending on the type of information being compressed.
A predicted frame (also called P-frame, B-frame, or inter-coded frame) is represented in terms of prediction (or difference) from one or more reference (or anchor) frames. A prediction residual is the difference between what was predicted and the original frame. In contrast, a key frame (also called I-frame, intra-coded frame) is compressed without reference to other frames.
If the current frame 805 is a forward-predicted frame, a motion estimator 810 estimates motion of macroblocks or other sets of pixels of the current frame 805 with respect to a reference frame, which is the reconstructed previous frame 825 buffered in a frame store (e.g., frame store 820). If the current frame 805 is a bi-directionally-predicted frame (a B-frame), a motion estimator 810 estimates motion in the current frame 805 with respect to two reconstructed reference frames. Typically, a motion estimator estimates motion in a B-frame with respect to a temporally previous reference frame and a temporally future reference frame. Accordingly, the encoder system 800 can comprise separate stores 820 and 822 for backward and forward reference frames. For more information on bi-directionally predicted frames, see U.S. patent application Ser. No. ______, entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed concurrently herewith.
The motion estimator 810 can estimate motion by pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion estimation on a frame-by-frame basis or other basis. The resolution of the motion estimation can be the same or different horizontally and vertically. The motion estimator 810 outputs as side information motion information 815 such as motion vectors. A motion compensator 830 applies the motion information 815 to the reconstructed frame(s) 825 to form a motion-compensated current frame 835. The prediction is rarely perfect, however, and the difference between the motion-compensated current frame 835 and the original current frame 805 is the prediction residual 845. Alternatively, a motion estimator and motion compensator apply another type of motion estimation/compensation.
A frequency transformer 860 converts the spatial domain video information into frequency domain (i.e., spectral) data. For block-based video frames, the frequency transformer 860 applies a discrete cosine transform [“DCT”] or variant of DCT to blocks of the pixel data or prediction residual data, producing blocks of DCT coefficients. Alternatively, the frequency transformer 860 applies another conventional frequency transform such as a Fourier transform or uses wavelet or subband analysis. If the encoder uses spatial extrapolation (not shown in FIG. 8) to encode blocks of key frames, the frequency transformer 860 can apply a re-oriented frequency transform such as a skewed DCT to blocks of prediction residuals for the key frame. In some embodiments, the frequency transformer 860 applies an 8×8, 8×4, 4×8, or other size frequency transform (e.g., DCT) to prediction residuals for predicted frames.
A quantizer 870 then quantizes the blocks of spectral data coefficients. The quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a frame-by-frame basis or other basis. Alternatively, the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations. In addition to adaptive quantization, the encoder 800 can use frame dropping, adaptive filtering, or other techniques for rate control.
If a given macroblock in a predicted frame has no information of certain types (e.g., no motion information for the macroblock and/or no residual information), the encoder 800 may encode the macroblock as a skipped macroblock. If so, the encoder signals the skipped macroblock in the output bit stream of compressed video information 895.
When a reconstructed current frame is needed for subsequent motion estimation/compensation, an inverse quantizer 876 performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer 866 then performs the inverse of the operations of the frequency transformer 860, producing a reconstructed prediction residual (for a predicted frame) or a reconstructed key frame. If the current frame 805 was a key frame, the reconstructed key frame is taken as the reconstructed current frame (not shown). If the current frame 805 was a predicted frame, the reconstructed prediction residual is added to the motion-compensated current frame 835 to form the reconstructed current frame. A frame store (e.g., frame store 820) buffers the reconstructed current frame for use in predicting another frame. In some embodiments, the encoder applies a deblocking filter to the reconstructed frame to adaptively smooth discontinuities in the blocks of the frame.
The entropy coder 880 compresses the output of the quantizer 870 as well as certain side information (e.g., motion information 815, spatial extrapolation modes, quantization step size). Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy coder 880 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique.
The entropy coder 880 puts compressed video information 895 in the buffer 890. A buffer level indicator is fed back to bit rate adaptive modules.
The compressed video information 895 is depleted from the buffer 890 at a constant or relatively constant bit rate and stored for subsequent streaming at that bit rate. Therefore, the level of the buffer 890 is primarily a function of the entropy of the filtered, quantized video information, which affects the efficiency of the entropy coding. Alternatively, the encoder system 800 streams compressed video information immediately following compression, and the level of the buffer 890 also depends on the rate at which information is depleted from the buffer 890 for transmission.
Before or after the buffer 890, the compressed video information 895 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 895.
B. Video Decoder
FIG. 9 is a block diagram of a general video decoder system 900. The decoder system 900 receives information 995 for a compressed sequence of video frames and produces output including a reconstructed frame 905. Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder 900.
The decoder system 900 decompresses predicted frames and key frames. For the sake of presentation, FIG. 9 shows a path for key frames through the decoder system 900 and a path for predicted frames. Many of the components of the decoder system 900 are used for decompressing both key frames and predicted frames. The exact operations performed by those components can vary depending on the type of information being decompressed.
A buffer 990 receives the information 995 for the compressed video sequence and makes the received information available to the entropy decoder 980. The buffer 990 typically receives the information at a rate that is fairly constant over time, and includes a jitter buffer to smooth short-term variations in bandwidth or transmission. The buffer 990 can include a playback buffer and other buffers as well. Alternatively, the buffer 990 receives information at a varying rate. Before or after the buffer 990, the compressed video information can be channel decoded and processed for error detection and correction.
The entropy decoder 980 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 915, spatial extrapolation modes, quantization step size), typically applying the inverse of the entropy encoding performed in the encoder. Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above. The entropy decoder 980 frequently uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique.
A motion compensator 930 applies motion information 915 to one or more reference frames 925 to form a prediction 935 of the frame 905 being reconstructed. For example, the motion compensator 930 uses a macroblock motion vector to find a macroblock in a reference frame 925. A frame buffer (e.g., frame buffer 920) stores previously reconstructed frames for use as reference frames. Typically, B-frames have more than one reference frame (e.g., a temporally previous reference frame and a temporally future reference frame). Accordingly, the decoder system 900 can comprise separate frame buffers 920 and 922 for backward and forward reference frames.
The motion compensator 930 can compensate for motion at pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion compensation on a frame-by-frame basis or other basis. The resolution of the motion compensation can be the same or different horizontally and vertically. Alternatively, a motion compensator applies another type of motion compensation. The prediction by the motion compensator is rarely perfect, so the decoder 900 also reconstructs prediction residuals.
When the decoder needs a reconstructed frame for subsequent motion compensation, a frame buffer (e.g., frame buffer 920) buffers the reconstructed frame for use in predicting another frame. In some embodiments, the decoder applies a deblocking filter to the reconstructed frame to adaptively smooth discontinuities in the blocks of the frame.
An inverse quantizer 970 inverse quantizes entropy-decoded data. In general, the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a frame-by-frame basis or other basis. Alternatively, the inverse quantizer applies another type of inverse quantization to the data, for example, a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations.
An inverse frequency transformer 960 converts the quantized, frequency domain data into spatial domain video information. For block-based video frames, the inverse frequency transformer 960 applies an inverse DCT [“IDCT”] or variant of IDCT to blocks of the DCT coefficients, producing pixel data or prediction residual data for key frames or predicted frames, respectively. Alternatively, the inverse frequency transformer 960 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or subband synthesis. If the decoder uses spatial extrapolation (not shown in FIG. 9) to decode blocks of key frames, the inverse frequency transformer 960 can apply a re-oriented inverse frequency transform such as a skewed IDCT to blocks of prediction residuals for the key frame. In some embodiments, the inverse frequency transformer 960 applies an 8×8, 8×4, 4×8, or other size inverse frequency transform (e.g., IDCT) to prediction residuals for predicted frames.
When a skipped macroblock is signaled in the bit stream of information 995 for a compressed sequence of video frames, the decoder 900 reconstructs the skipped macroblock without using information (e.g., motion information and/or residual information) normally included in the bit stream for non-skipped macroblocks.
III. Overview of Motion Vector Coding
The described techniques and tools improve compression efficiency for predicted images (e.g., frames) in video sequences. Described techniques and tools apply to a one-motion-vector-per-macroblock (1MV) model of motion estimation and compensation for predicted frames (e.g., P-frames). Described techniques and tools also employ specialized mechanisms to encode motion vectors in certain situations (e.g., four-motion-vectors-per-macroblock (4MV) models, mixed 1MV and 4MV models, B-frames, and interlace coding) that give rise to data structures that are not homogeneous with the 1MV model. For more information on interlace video, see U.S. patent application Ser. No. ______, entitled, “Intraframe and Interframe Interlace Coding and Decoding,” filed concurrently herewith. Described techniques and tools are also extensible to future formats.
With an increased average number of motion vectors per frame (e.g., in 4MV and mixed 1MV and 4MV models), it is desirable to design a more efficient scheme to encode motion vector information. As in earlier standards, described techniques and tools use predictive coding to compress motion vector information. However, there are several key differences. The described techniques and tools, individually or in combination, include the following features:
1. An extended motion vector alphabet:
- a. The I/P switch is jointly coded with the motion vector. In other words, a bit code indicating that a macroblock (or block) is to be coded as an intra macroblock or intra block, respectively, is jointly coded with a pseudo motion vector, with the joint code indicating that it is an intra macroblock/block.
- b. In addition to the I/P switch, a “terminal” symbol is coded jointly with the motion vector. The terminal symbol indicates whether there is any subsequent data pertaining to the object (macroblock, block, etc.) being coded. The joint symbol is referred to as an extended motion vector (“MV*”).
2. A sub-frame-level (e.g., macroblock level) syntax using an extended motion vector alphabet to efficiently code, e.g., progressive 1MV macroblocks, 4MV macroblocks and B-frames, and interlace 1MV macroblocks, 2MV macroblocks and B-frames.
3. Generation of motion vector predictors and differential motion vectors.
4. Hybrid motion vector encoding with different criteria for identifying hybrid motion vectors.
5. Efficient signaling of motion vector modes at frame level.
6. Differential coding of motion vector residuals based on rollover arithmetic (similar to modulo arithmetic) to avoid the need for pull-back of predictors.
These features are explained in detail in the following sections.
In some embodiments, an encoder derives motion vectors for chrominance planes from luminance motion vectors. However, the techniques and tools described herein are equally applicable to chrominance motion in other embodiments. For example, a video encoder may choose to explicitly send chrominance motion vectors as part of a bit stream, and can use techniques and tools similar to those described herein to encode/decode the chrominance motion vectors.
IV. Extended Motion Vector Alphabet
In some embodiments, an extended motion vector alphabet includes joint codes for jointly coding motion vector information with other information for a block, macroblock, or other set of pixels.
A. Signaling Intra Macroblocks and Blocks
The signaling of an intra-coded set of pixels (e.g., block, macroblock, etc.) can be achieved by extending the alphabet of motion vectors to allow for a symbol (e.g., an I/P switch) indicating an intra area. Intra macroblocks and blocks do not have a true motion vector associated with them. A motion vector (or in the case of an intra-coded set of pixels, a pseudo motion vector) can be appended to an intra symbol to yield a triple of the form <Intra, MVx, MVy> that indicates whether the set of pixels (e.g., macroblock or block) is coded as intra, and if not, what its motion vector should be. When the intra flag is set, MVx and MVy are “don't care” conditions. When the intra flag is zero, MVx and MVy correspond to computed motion vector components.
Joint coding of an intra symbol with motion vectors allows an elegant yet efficient implementation with the ability to switch blocks to intra when four extended motion vectors are used in a macroblock.
B. Signaling Residual Information
In addition to the intra symbol, some embodiments jointly code the presence or absence of subsequent residual symbols with a motion vector. For example, a “last” (or terminal) symbol indicates whether the joint code containing the motion vector or pseudo motion vector is a terminal symbol of a given macroblock, block or field, or if residual data follows (e.g., when last=1 (i.e. last is true), no subsequent data pertains to the area). This joint code can be referred to as an extended motion vector, and is of the form <intra, MVx, MVy, last>. In the syntax diagrams below, an extended motion vector is represented as MV*.
In some embodiments, the extended motion vector symbol <inter, 0, 0, true> is an invalid symbol. The condition that would ordinarily lead to this symbol is instead treated as a special condition called a “skip” condition. Under the skip condition, the current set of pixels (e.g., macroblock) can be predicted (to within quantization error) from its motion vector. No additional data (e.g., residual data) is necessary to decode this area. For efficiency reasons, the skip condition can be signaled at the frame level. Therefore, in some embodiments, this symbol is not present in the bit stream. For example, skipped macroblocks have a motion vector such that the differential motion vector is (0, 0) or have no motion at all. In other words, in skipped macroblocks where some motion is present, the skipped macroblocks use the same motion vector as the predicted motion vector. Skipped macroblocks are also defined for 4MV macroblocks, and other cases. For more information on skipped macroblocks, see U.S. patent application Ser. No. 10/321,415, entitled, “Skip Macroblock Coding,” filed Dec. 16, 2002.
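A minimal sketch modeling the extended motion vector symbol <intra, MVx, MVy, last> and the skip condition described above; the class and field names are illustrative, not part of any bit stream specification.

```python
# Illustrative model of the extended motion vector (MV*) symbol.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtendedMV:
    intra: bool   # I/P switch: True for intra-coded (MVx/MVy are "don't care")
    mvx: int      # differential motion vector components
    mvy: int
    last: bool    # terminal symbol: True if no subsequent data follows

    def is_skip(self) -> bool:
        """<inter, 0, 0, true> never appears in the bit stream; it is the skip condition."""
        return not self.intra and self.mvx == 0 and self.mvy == 0 and self.last

assert ExtendedMV(intra=False, mvx=0, mvy=0, last=True).is_skip()
```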
The last symbol applies to both intra signals and inter motion vectors. The way this symbol is used in different embodiments depends on many factors, including whether a macroblock is a 1MV or 4MV macroblock, or an interlace macroblock (e.g., a field-coded, 2MV macroblock). Moreover, in some embodiments, the last symbol is interpreted differently for interpolated mode B-frames. These concepts are covered in detail below.
V. Syntax for Coding Motion Vector Information
In some embodiments, a video encoder encodes video images using a sub-frame-level syntax (e.g., a macroblock-level syntax) including extended motion vectors. For example, for macroblocks in a video sequence having progressive and interlace P-frames and B-frames, each macroblock is coded with zero, one, two or four associated extended motion vector symbols. The specific number of motion vectors depends on the specifics of the coding mode (e.g., whether the frame is a P-frame or B-frame, progressive or interlace, 1MV or 4MV-coded, and/or skip coded). Coding modes also determine the order in which the motion vector information is sent. The following sections and corresponding FIGS. 10-14 cover these possibilities and map out the syntax or format for different situations. Although the figures show elements (e.g., extended motion vectors) in certain arrangements, the elements can be arranged in different ways.
In the following sections and the corresponding figures, the symbol MBH denotes a macroblock header—a placeholder for any macroblock level information other than a motion vector, I/P switch or coded block pattern (CBP). Examples of elements in MBH are skip bit information, motion vector mode information, coding mode information for B-frames, and frame/field information for interlace frames.
A. 1MV Macroblock Syntax
FIG. 10 is a diagram showing an exemplary macroblock syntax 1000 with an extended motion vector symbol for use in coding 1MV macroblocks. Examples of 1MV macroblocks include progressive P-frame macroblocks, interlace frame-coded P-frame macroblocks, progressive forward- or backward-predicted B-frame macroblocks, and interlace frame-coded forward- or backward-predicted B-frame macroblocks. In FIG. 10, MV* is sent after MBH and before CBP.
CBP indicates which of the blocks making up a macroblock have attached residual information. For example, for a 4:2:0 macroblock with four luminance blocks and two chrominance blocks, CBP includes six bits. A corresponding CBP bit indicates whether residual information exists for each block. In MV*, the terminal symbol “last” is set to 1 if CBP is all zero, indicating that there are no residuals for any of the six blocks in the macroblock. In this case, CBP is not sent. If CBP is not all zero (which under many circumstances is more likely to be the case), the terminal symbol is set to 0, and the CBP is sent, followed by the residual data for blocks that have residuals. For example, in FIG. 10, up to six residual blocks (e.g., luminance residual blocks Y0, Y1, Y2, and Y3, and chrominance residual blocks U and V) can be sent, depending on the value of CBP.
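The 1MV signaling rule can be sketched as follows; the stream elements are illustrative tuples and strings, not actual bit stream codes.

```python
# Illustrative 1MV emission order: MBH, MV* (with terminal symbol), then
# CBP and residual blocks only when some residuals exist.
def emit_1mv_macroblock(mbh, mv, cbp, residuals):
    """cbp: six booleans (Y0..Y3, U, V); residuals: per-block payloads or None."""
    stream = [mbh, ("MV*", mv, not any(cbp))]   # last=True iff CBP is all zero
    if any(cbp):
        stream.append(("CBP", cbp))
        stream.extend(r for flag, r in zip(cbp, residuals) if flag)
    return stream

print(emit_1mv_macroblock("MBH", (1, -2),
                          [True, False, False, False, True, False],
                          ["Y0res", None, None, None, "Ures", None]))
```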
B. 4MV Macroblock Syntax
FIG. 11 is a diagram showing an exemplary macroblock syntax 1100 with an extended motion vector symbol for use in coding progressive 4MV macroblocks in P-frames. For the code labeled CBP′, when four motion vectors are present in a macroblock, the first four components of the CBP (corresponding to the first four blocks) are reinterpreted to be the union of the events where MV*≠0, and where residuals are present. For example, in FIG. 11, the first four CBP components correspond to the luminance blocks. When a luminance block is intra-coded or inter-coded with a nonzero differential motion vector, or when there are residuals, the block pattern is set to true. There is no change to the chrominance components.
In FIG. 11, the CBP is sent right after MBH. Subsequently, the extended motion vectors for the four luminance blocks are sent only when the corresponding block pattern is nonzero. The terminal symbols of the extended motion vectors are used to send the original CBP information for the luminance blocks, flagging the presence of residuals. As an illustration, if block Y0 has no residuals but does have a nonzero differential motion vector, the first component of CBP would normally be set to true. Therefore, MV* is sent, with its last symbol being set to true. No further information is sent for block Y0.
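A hedged sketch of the CBP′ reinterpretation described above for the four luminance blocks: each transmitted bit is the union (logical OR) of "extended motion vector is intra or nonzero" and "residuals present", while the chrominance bits pass through unchanged. Names and block ordering are illustrative.

```python
# Illustrative CBP' computation for a 4MV macroblock.
from collections import namedtuple

MV = namedtuple("MV", "intra mvx mvy")   # illustrative stand-in for MV* fields

def reinterpret_cbp(cbp_bits, luma_mvs):
    """cbp_bits: six residual-presence flags (Y0..Y3, U, V).
    luma_mvs: four MV objects for the luminance blocks."""
    cbp_prime = list(cbp_bits)
    for i, mv in enumerate(luma_mvs):
        mv_nonzero = mv.intra or mv.mvx != 0 or mv.mvy != 0
        cbp_prime[i] = cbp_bits[i] or mv_nonzero   # union of the two events
    return cbp_prime

mvs = [MV(False, 0, 0), MV(False, 3, -1), MV(True, 0, 0), MV(False, 0, 0)]
cbp = [False, False, False, True, True, False]
print(reinterpret_cbp(cbp, mvs))   # [False, True, True, True, True, False]
```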
C. 2MV Macroblock Syntax
FIG. 12 is a diagram showing an exemplary macroblock syntax 1200 with extended motion vector symbols for use in coding 2MV macroblocks (e.g., progressive interpolated macroblocks in B-frames, forward/backward predicted macroblocks in B-frames, and interlace frame-type macroblocks). For example, in progressive sequences and in frame coded interlace sequences, B-frame macroblocks use zero, one or two motion vectors. When there are two motion vectors, the syntax 1200 shown in FIG. 12 is used. This is an extension of the 1MV macroblock syntax 1000 shown in FIG. 10.
In FIG. 12, the two extended motion vectors MV1* and MV2* are sent in a predetermined order. For example, in some embodiments, an encoder sends a backward differential motion vector followed by a forward differential motion vector for a B-frame macroblock, following the macroblock header. In the event that all residuals are zero, the last symbol of the second motion vector is set to true and no further data is sent. In the event that MV2*=0 and CBP=0, the last symbol of MV1* is set to true and the macroblock terminates. When both motion vectors and CBP are zero, the macroblock is skip-coded.
D. Macroblock Syntax for Interlace Field-Type Macroblocks in P-Frames and Forward/Backward Predicted Field-Type Macroblocks in B-Frames
FIG. 13 is a diagram showing an exemplary macroblock syntax 1300 with extended motion vector symbols for use in coding interlace field-type macroblocks in P-frames and forward/backward predicted field-type macroblocks in B-frames. Such macroblocks have two motion vectors, corresponding to the top and bottom field motion. The extended motion vectors are sent subsequent to a modified CBP (CBP′ in FIG. 13). The first and third components of the CBP are reinterpreted to be the union of the corresponding nonzero extended motion vector events and nonzero residual events. The terminal symbols of the top extended motion vector MVT* and the bottom extended motion vector MVB* contain the original block pattern components for the corresponding blocks. Although FIG. 13 shows the extended motion vectors in certain locations, other arrangements are also valid.
E. Macroblock Syntax for Interlace Field-Type Interpolated Macroblocks in B-Frames
FIG. 14 is a diagram showing an exemplary macroblock syntax with extended motion vector symbols for use in coding interlace interpolated (bi-directional) field-type macroblocks in B-frames. The technique used to code motion vectors for interlace field-type interpolated B-frame macroblocks combines ideas from interlace field-type P-frame macroblocks and progressive B-frame macroblocks using two motion vectors. Again, while FIG. 14 shows an exemplary arrangement having certain overloaded CBP blocks, the four extended motion vectors (e.g., MV1T*, MV2T*, MV1B* and MV2B*) can be distributed differently across the block data channels.
F. Simplified CBP and MV* Alphabets
In the syntax formats described above, the coded block pattern CBP=0 (i.e., all bits in CBP equal to zero) does not occur in the bit stream. Accordingly, in some embodiments, for the sake of efficiency, this symbol is not present in the CBP alphabet. For example, for the six blocks in a 4:2:0 macroblock, the coded block pattern alphabet comprises 2^6 − 1 = 63 symbols. Moreover, as discussed earlier, the MV* symbol <intra switch, MVx, MVy, last> = <inter, 0, 0, true> is an invalid symbol. Occurrences of this symbol can be coded using skip bits or, in some cases, CBP.
VI. Generation of Motion Vector Predictors and Differential Motion Vectors
In some embodiments, to exploit continuity in motion vector information, motion vectors are differentially predicted and encoded from neighboring sets of pixels (e.g., blocks, macroblocks, etc.). For example, a video encoder/decoder uses three motion vectors in the neighborhood of a current block, macroblock, or field for computing a prediction. The specific features of a predictor calculation technique depend on factors such as whether the sequence is interlace or progressive, and whether one, two, or four motion vectors are being generated for a given macroblock. For example, in a 1MV macroblock, the macroblock has one corresponding motion vector for the entire macroblock. In a 4MV macroblock, the macroblock has one corresponding motion vector for each block in the macroblock. FIG. 15 is a diagram showing a macroblock 1500 comprising four blocks; the macroblock 1500 has a motion vector corresponding to each block in positions 0-3.
In the following sections, there is only one numerical prediction for a given motion vector, and this is calculated by analyzing candidates (which may also be referred to as predictors) for the motion vector predictor.
A. Motion Vector Candidates in 1MV P-frames
FIGS. 16A and 16B are diagrams showing three candidate motion vector predictors for a current 1MV macroblock 1610 in a P-frame. In FIG. 16A, where the current macroblock 1610 is not the last macroblock in a macroblock row, the candidates are taken from the left (Predictor C), top (Predictor A) and top-right (Predictor B) macroblocks. In FIG. 16B, the macroblock 1610 is the last macroblock in the row. In this case, Predictor B is taken from the top-left macroblock instead of the top-right. In some embodiments, for the special case where the frame is one macroblock wide, the predictor is always Predictor A (the top predictor).
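For purposes of illustration, the candidate positions can be expressed in C as follows. The macroblock coordinate convention and the candidate structure are assumptions made for the example; out-of-bounds candidates are handled as in the pseudocode of section VI.D below.

typedef struct { int dx, dy, valid; } candidate;  /* offset from current MB */

void mv_candidates_1mv(int mb_x, int mb_y, int mb_width,
                       candidate *A, candidate *B, candidate *C)
{
    A->dx = 0;  A->dy = -1; A->valid = (mb_y > 0);   /* top  */
    C->dx = -1; C->dy = 0;  C->valid = (mb_x > 0);   /* left */
    if (mb_x == mb_width - 1) {
        B->dx = -1; B->dy = -1;                      /* last in row: top-left */
    } else {
        B->dx = 1;  B->dy = -1;                      /* otherwise: top-right  */
    }
    B->valid = (mb_y > 0) && (mb_x + B->dx >= 0) && (mb_x + B->dx < mb_width);
}

For a frame one macroblock wide, B and C are always out of bounds, leaving Predictor A, consistent with the special case noted above.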
B. Motion Vector Candidates in Mixed-MV P-frames
FIGS. 17A, 17B, 18A, 18B, 19A, 19B, 20 and 21 show candidate motion vector predictors for 1MV and 4MV macroblocks in mixed-MV P-frames. In these figures, the larger squares are macroblock boundaries and the smaller squares are block boundaries. In some embodiments, for the special case where the frame is one macroblock wide, the predictor is always Predictor A (the top predictor).
FIGS. 17A and 17B are diagrams showing candidate motion vector predictors for a 1MV macroblock 1710 in a mixed 1MV/4MV P-frame. The neighboring macroblocks may be 1MV or 4MV macroblocks. FIGS. 17A and 17B show the candidate motion vectors under an assumption that the neighbors are 4MV macroblocks. For example, Predictor A is the motion vector for block 2 in the macroblock above the current macroblock 1710 and Predictor C is the motion vector for block 1 in the macroblock immediately to the left of the current macroblock 1710. If any of the neighbors are 1MV macroblocks, the motion vector predictors shown in FIGS. 17A and 17B are taken to be the motion vectors for the entire neighboring macroblock. As FIG. 17B shows, if the macroblock 1710 is the last macroblock in the row, then Predictor B is from block 3 of the top-left macroblock instead of from block 2 in the top-right macroblock (as in FIG. 17A).
In embodiments such as those shown in FIGS. 17A and 17B, Predictor B is taken from the adjacent macroblock column instead of the block immediately to the right of Predictor A because, in the case where the top macroblock (in which Predictor A lies) is 1MV-coded, the block adjacent to Predictor A will have the same motion vector as A. This can essentially force the predictor to predict from the top, which is not always desirable.
FIGS. 18A, 18B, 19A, 19B, 20 and 21 show predictors for each of the four luminance blocks in a 4MV macroblock. For example, FIGS. 18A and 18B are diagrams showing candidate motion vector predictors for a block 1810 at position 0 in a 4MV macroblock 1820 in a mixed 1MV/4MV P-frame. In some embodiments, for the case where the macroblock 1820 is the first macroblock in the row, Predictor B for block 1810 is handled differently than for the remaining blocks in the row. In FIG. 18B, Predictor B is taken from the block at position 3 in the macroblock immediately above the current macroblock 1820 instead of from the block at position 3 in the macroblock above and to the left of the current macroblock 1820, as is the case in FIG. 18A. Again, in some embodiments, Predictor B is to the left of Predictor A in the more frequently occurring case shown in FIG. 18A because the block to the immediate right of Predictor A will have the same motion vector as Predictor A when the top macroblock is 1MV-coded. In FIG. 18B, Predictor C is equal to zero because it lies outside the picture boundary.
FIGS. 19A and 19B are diagrams showing candidate motion vector predictors for a block 1910 at position 1 in a 4MV macroblock 1920 in a mixed 1MV/4MV P-frame. In FIG. 19B, for the case where the macroblock 1920 is the last macroblock in the row, Predictor B for the current block 1910 is handled differently than for the case shown in FIG. 19A. In FIG. 19B, Predictor B is taken from the block at position 2 in the macroblock immediately above the current macroblock 1920 instead of from the block at position 2 in the macroblock above and to the left of the current macroblock 1920, as is the case in FIG. 19A.
FIG. 20 is a diagram showing candidate motion vector predictors for a block 2010 at position 2 in a 4MV macroblock 2020 in a mixed 1MV/4MV P-frame. In FIG. 20, if the macroblock 2020 is in the first macroblock column (in other words, if the macroblock 2020 is the first macroblock in a macroblock row), then Predictor C for the block 2010 is equal to zero.
FIG. 21 is a diagram showing candidate motion vector predictors for a block 2110 at position 3 in a 4MV macroblock 2120 in a mixed 1MV/4MV P-frame. The predictors for block 2110 are the three other blocks within the macroblock 2120. The choice for Predictor B to be taken from the block to the left of Predictor A (e.g., instead of the block to the right of Predictor A) is for causality. In situations such as the example shown in FIG. 21, the block 2110 can be decoded without referencing motion vector information from a subsequent macroblock.
C. Motion Vector Candidates in Interlace P-Frames
FIGS. 22A and 22B are diagrams showing candidate motion vector predictors for a frame-type macroblock 2210 in an interlace P-frame. In FIG. 22A, where the current macroblock 2210 is not the last macroblock in a macroblock row, the candidates are taken from the left (Predictor C), top (Predictor A) and top-right (Predictor B) macroblocks. In FIG. 22B, the macroblock 2210 is the last macroblock in the row. In this case, Predictor B is taken from the top-left macroblock instead of the top-right. In some embodiments, for the special case where the frame is one macroblock wide, the predictor is always Predictor A (the top predictor). When a neighboring macroblock is field-coded, having two motion vectors (one for the top field and the other for the bottom field), the two motion vectors are averaged to generate the prediction candidate.
In some embodiments, for field-coded macroblocks, the motion vectors of corresponding fields of the neighboring macroblocks are used as candidates for predicting a motion vector for a top or bottom field. For example, FIGS. 23A and 23B are diagrams showing candidate motion vector predictors for a field-type macroblock 2310 in an interlace P-frame. In FIG. 23A, where the current macroblock 2310 is not the last macroblock in a macroblock row, the candidates are taken from fields in the left (Predictor C), top (Predictor A) and top-right (Predictor B) macroblocks. In FIG. 23B, the macroblock 2310 is the last macroblock in the row. In this case, Predictor B is taken from the top-left macroblock instead of the top-right. When a neighboring macroblock is frame coded, the motion vectors corresponding to its fields are deemed to be equal to the motion vector for the entire macroblock. In other words, the top and bottom motion vectors are set to V, where V is the motion vector of the entire macroblock.
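For purposes of illustration, the following C sketch shows both conversions between frame- and field-coded neighbors. The mv structure is an assumption made for the example, and the exact rounding of the field average is not specified here.

typedef struct { int x, y; } mv;

/* Candidate from a neighbor for a frame-type current macroblock: a
 * field-coded neighbor contributes the average of its two field
 * motion vectors (rounding rule assumed for illustration). */
mv frame_candidate(int neighbor_is_field, mv whole, mv top, mv bottom)
{
    if (!neighbor_is_field)
        return whole;                      /* frame-coded neighbor */
    mv avg = { (top.x + bottom.x) / 2, (top.y + bottom.y) / 2 };
    return avg;
}

/* Candidate from a neighbor for one field of a field-type current
 * macroblock: a frame-coded neighbor supplies its single vector V
 * for both fields. */
mv field_candidate(int neighbor_is_field, int want_bottom,
                   mv whole, mv top, mv bottom)
{
    if (!neighbor_is_field)
        return whole;                      /* top = bottom = V */
    return want_bottom ? bottom : top;
}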
D. Calculating a Predictor from Candidates
Given three motion vector predictor candidates, the following pseudocode illustrates the process for calculating the motion vector predictor.
if (predictorA is not out of bound) {
    if (predictorC is out of bound && predictorB is out of bound) {
        // frame is one macroblock wide
        predictor = predictorA;
    } else {
        if (predictorC is out of bound) {
            predictorC = 0;
        }
        numIntra = 0;
        if (predictorA is intra) {
            predictorA = 0;
            numIntra = numIntra + 1;
        }
        if (predictorB is intra) {
            predictorB = 0;
            numIntra = numIntra + 1;
        }
        if (predictorC is intra) {
            predictorC = 0;
            numIntra = numIntra + 1;
        }
        // calculate predictor from A, B and C predictor candidates
        predictor = cmedian3(predictorA, predictorB, predictorC);
    }
} else if (predictorC is not out of bound) {
    predictor = predictorC;
} else {
    predictor = 0;
}
The function cmedian3 is the component-wise median of three two-dimensional vectors.
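For purposes of illustration, cmedian3 may be implemented in C as follows; the mv structure is an assumption made for the example.

typedef struct { int x, y; } mv;

/* Scalar median of three values. */
static int median3(int a, int b, int c)
{
    if (a > b) { int t = a; a = b; b = t; }   /* now a <= b          */
    return (c < a) ? a : (c > b) ? b : c;     /* clamp c into [a, b] */
}

/* Component-wise median of three two-dimensional vectors. */
mv cmedian3(mv A, mv B, mv C)
{
    mv m = { median3(A.x, B.x, C.x), median3(A.y, B.y, C.y) };
    return m;
}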
E. Pullback of Predictor
In some embodiments, after the predictor is computed, an encoder/decoder verifies whether the area of the image referenced by the predictor is within the frame. If the area is entirely outside the frame, the predictor is pulled back so that the area overlaps the frame by at least one pixel, at the position closest to the original area. For example, FIG. 24 shows a technique 2400 for performing a pullback for a motion vector predictor. At 2410, an encoder/decoder calculates a predictor. At 2420, the encoder/decoder then finds the area referenced by the calculated predictor. At 2430, the encoder/decoder determines whether the referenced area is completely outside the frame. If not, the process ends. If so, the encoder/decoder at 2440 pulls back the predictor.
In some embodiments, an encoder/decoder uses the following rules for performing predictor pull backs:
1. For a macroblock motion vector: The top-left point of a 16×16 area pointed to by the predictor is restricted to be from −15 to (picture width −1) in the horizontal dimension and from −15 to (picture height −1) in the vertical dimension.
2. For a block motion vector: The top-left point of an 8×8 area pointed to by the predictor is restricted to be from −7 to (picture width −1) in the horizontal dimension and from −7 to (picture height −1) in the vertical dimension.
3. For a field motion vector: In the horizontal dimension, the top-left point of an 8×16 area pointed to by the predictor is restricted to be from −15 to (picture width −1). In the vertical dimension, the top-left point of this area is restricted to be from −7 to (picture height −1).
Although the predicted motion vector is valid even without pullback, pullback ensures that more diversity is available in the local area around the predictor. This allows for better predictions by lowering the cost of useful motion vectors.
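These rules reduce to clamping the top-left corner of the referenced area so that at least one pixel of the area overlaps the picture in each dimension. A minimal C sketch follows, assuming integer pixel coordinates (fractional motion vector precision is ignored) and a referenced area w pixels wide and h pixels tall.

/* Pull the predictor (px, py) back so that the w-by-h area it
 * references overlaps the picture by at least one pixel in each
 * dimension. */
void pull_back(int *px, int *py, int w, int h, int pic_w, int pic_h)
{
    if (*px < -(w - 1))  *px = -(w - 1);   /* keep >= 1 column overlap */
    if (*px > pic_w - 1) *px = pic_w - 1;
    if (*py < -(h - 1))  *py = -(h - 1);   /* keep >= 1 row overlap    */
    if (*py > pic_h - 1) *py = pic_h - 1;
}

Rule 1 above corresponds to pull_back(&px, &py, 16, 16, pic_w, pic_h), rule 2 to an 8×8 area, and rule 3 to a field area 16 pixels wide and 8 pixels tall (matching its −15 horizontal and −7 vertical bounds).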
F. Hybrid Motion Vectors
In some embodiments, if a P-frame is 1MV or mixed-MV, a calculated predictor is tested relative to the A and C predictors, such as those described above. This test determines whether the motion vector must be hybrid coded.
For example, FIG. 25 is a flow chart showing a technique 2500 for determining whether to use a hybrid motion vector for a set of pixels (e.g., a macroblock, block, etc.). At 2510, a video encoder/decoder calculates a predictor for a set of pixels. At 2520, the encoder/decoder compares the calculated predictor to one or more predictor candidates. At 2530, the encoder/decoder determines whether a hybrid motion vector should be used. If not, the encoder/decoder at 2540 uses the previously calculated predictor to predict the motion vector for the set of pixels. If so, the encoder/decoder at 2550 uses a hybrid motion indicator to determine or signal which candidate predictor to use as the predictor for the set of pixels.
When the variance among the three motion vector candidates used in a prediction is high, the true motion vector is likely to be close to one of the candidate vectors, especially the vectors to the left and the top of the current macroblock or block (Predictors A and C, respectively). When the candidates are far apart, their component-wise median is often not an accurate predictor of motion in a current macroblock. Hence, in some embodiments, an encoder sends an additional bit indicating which candidate the true motion vector is closer to. For example, when the indicator bit indicates that the motion vector for Predictor A or C is the closer one, a decoder uses it as the predictor. The decoder must determine for each motion vector whether to expect a hybrid motion indicator bit, and this determination can be made from causal motion vector information.
The following pseudo-code illustrates this determination. In this example, when either Predictor A or Predictor C is intra-coded, the corresponding motion is deemed to be zero.
predictor: The calculated motion vector prediction, possibly reset below
sabs( ): Sum of absolute values of the components of a vector
if ((predictorA is out of bound) || (predictorC is out of bound)) {
    return 0   // not a hybrid motion vector
} else {
    if (predictorA is intra)
        sum = sabs(predictor)
    else
        sum = sabs(predictor − predictorA)
    if (sum > 32)
        return 1   // hybrid motion vector
    else {
        if (predictorC is intra)
            sum = sabs(predictor)
        else
            sum = sabs(predictor − predictorC)
        if (sum > 32)
            return 1   // hybrid motion vector
    }
    return 0   // not a hybrid motion vector
}
An advantage of the above approach is that it reuses the computed predictor, and in the typical case, where there is no hybrid motion, the additional computations are inexpensive.
In some embodiments, in a bit stream syntax, the hybrid motion vector indicator bit is sent together with the motion vector itself. Hybrid motion vectors may occur even when a set of pixels (e.g., block, macroblock, etc.) is skipped, in which case the one bit indicates whether to use A or C as the true motion for the set of pixels. In such cases, in the bit stream syntax, the hybrid bit is sent where the motion vector would have been had it not been skipped.
Hybrid motion vector prediction can be enabled or disabled in different situations. For example, in some embodiments, hybrid motion vector prediction is not used for interlace pictures (e.g., field-coded P pictures). A decision to use hybrid motion vector prediction can be made at frame level, sequence level, or some other level.
VII. Motion Vector Modes
In some embodiments, motion vectors are specified to half-pixel or quarter-pixel accuracy. Frames can be 1MV frames or mixed 1MV/4MV frames, and can use bicubic or bilinear interpolation. These choices make up the motion vector mode. In some embodiments, the motion vector mode is sent at the frame level. Alternatively, an encoder chooses motion vector modes on some other basis, and/or sends motion vector mode information at some other level.
In some embodiments, an encoder uses one of four motion compensation modes. The frame-level mode indicates (a) the possible number of motion vectors per macroblock, (b) the motion vector sampling accuracy, and (c) the interpolation filter. The four modes, ranked in order of complexity/overhead cost, are listed below (a sketch of a corresponding frame-level descriptor follows the list):
1. Mixed 1MV/4MV per macroblock, quarter pixel, bicubic interpolation
2. 1MV per macroblock, quarter pixel, bicubic interpolation
3. 1MV per macroblock, half pixel, bicubic interpolation
4. 1MV per macroblock, half pixel, bilinear interpolation
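For purposes of illustration, the following C sketch represents the four modes as a lookup table; the structure and field names are assumptions made for the example.

typedef struct {
    int mixed_mv;      /* 1 = mixed 1MV/4MV per macroblock, 0 = 1MV only */
    int quarter_pel;   /* 1 = quarter-pixel accuracy, 0 = half-pixel     */
    int bicubic;       /* 1 = bicubic interpolation, 0 = bilinear        */
} mv_mode;

static const mv_mode MV_MODES[4] = {
    { 1, 1, 1 },   /* mode 1: mixed 1MV/4MV, quarter pixel, bicubic */
    { 0, 1, 1 },   /* mode 2: 1MV, quarter pixel, bicubic           */
    { 0, 0, 1 },   /* mode 3: 1MV, half pixel, bicubic              */
    { 0, 0, 0 },   /* mode 4: 1MV, half pixel, bilinear             */
};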
VIII. Motion Vector Range and Rollover Arithmetic
Some embodiments use motion vectors that are specified in dyadic (power of two) ranges, with the range of permissible motion vectors in the x-component being larger than the range in the y-component. The range in the x-component is generally larger because (a) high motion typically occurs in the horizontal direction and (b) the cost of motion compensation with a large displacement is typically much higher in the vertical direction.
Some embodiments specify a baseline motion vector range of −64 to 63.x pixels for the x-component, and −32 to 31.x pixels for the y-component. The “.x” fraction depends on the motion vector resolution. For example, for half-pixel sampling, .x is 0.5, and for quarter-pixel accuracy, .x is 0.75. The total numbers of discrete motion vector components in the x and y directions are therefore 512 and 256, respectively, at quarter-pixel resolution (at half-pixel resolution, as with bilinear filtering, these numbers are 256 and 128). In other embodiments, the range is expanded to allow longer motion vectors in “broadcast modes.”
Table 1 shows different ranges for motion vectors (in addition to the baseline), signaled by the variable-length codeword MVRANGE.
TABLE 1
Extended motion vector range

MVRANGE       | Range in X       | Range in Y
--------------+------------------+-----------------
0 (baseline)  | (−64, 63.x)      | (−32, 31.x)
10            | (−128, 127.x)    | (−64, 63.x)
110           | (−512, 511.x)    | (−128, 127.x)
111           | (−1024, 1023.x)  | (−256, 255.x)
Motion vectors are transmitted in the bit stream by encoding their differences from causal predictors. Since the ranges of both motion vectors and predictors are bounded (e.g., by one of the ranges described above), the range of the differences is also bounded. In order to maximize encoding efficiency, rollover arithmetic is used to encode the motion vector difference.
FIG. 26 shows a technique 2600 for applying rollover arithmetic to a differential motion vector. For example, at 2610, an encoder finds a motion vector component for a macroblock. The encoder then finds a predictor for that motion vector component at 2620. At 2630, the encoder calculates a differential for the motion vector component, based on the predictor. At 2640, the encoder then applies rollover arithmetic to encode the differential. Motion vector encoding using rollover arithmetic on the differential motion vector is a computationally simple yet efficient solution.
Let the operation Rollover(I, K) convert I into a signed K-bit representation such that the lower K bits of I match those of Rollover(I, K). We know the following: if A and B are integers, or fixed point numbers, such that Rollover(A, K)=A and Rollover(B, K)=B, then:
B = Rollover(A + Rollover(B − A, K), K).
Replacing A with MVPx and B with MVx, the following relationship holds:
MVx = Rollover(MVPx + Rollover(MVx − MVPx, K), K)
where K is chosen as the logarithm to base 2 of the motion vector alphabet size, assuming the size is a power of 2. The differential motion vector ΔMVx is set to Rollover(MVx − MVPx, K), which is represented in K bits.
In some embodiments, rollover arithmetic is applied according to the following example.
Assume that the current frame is encoded using the baseline motion vector range, with quarter-pixel accuracy motion vectors. The range of both the x-component of a motion vector of a macroblock (MVx) and the x-component of its predicted motion (MVPx) is (−64, 63.75). The alphabet size for each is 2^9 = 512. In other words, there are 512 distinct values each for MVx and MVPx.
The difference ΔMVx (MVx − MVPx) can be in the range (−127.75, 127.75). Therefore, the alphabet size for ΔMVx is 2^10 − 1 = 1023. However, using rollover arithmetic, 9 bits of precision are sufficient to transmit the difference signal and still uniquely recover MVx from MVPx.
Let MVx = −63 and MVPx = 63, with K = log2(512) = 9. At quarter-pixel motion resolution, with an alphabet size of 512, the fixed point hexadecimal representations of MVx and MVPx are respectively 0xFFFFFF04 and 0x0FC, of which only the last 9 bits are unique. MVx − MVPx = 0xFFFFFE08. The differential motion vector value is:
ΔMVx = Rollover(0xFFFFFE08, 9) = 0x008
which is a positive quantity, although the raw difference is negative. On the decoder side, MVx is recovered from MVPx:
MVx = Rollover(0x0FC + 0x008, 9) = Rollover(0x104, 9) = 0xFFFFFF04
which is the fixed point hexadecimal representation of −63.
The same technique is used for coding the Y component. For example, K is set to 8 for the baseline MV range, at quarter-pixel resolution. In general, the value of K changes between x- and y-components, between motion vector resolutions, and between motion vector ranges.
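For purposes of illustration, the following self-contained C sketch implements Rollover as sign extension of the low K bits (assuming two's-complement integer arithmetic) and reproduces the worked example above, with values in quarter-pixel fixed point (i.e., scaled by 4).

#include <assert.h>
#include <stdio.h>

/* Rollover(I, K): sign-extend the low K bits of I, giving the signed
 * K-bit value whose low K bits match those of I. */
static int rollover(int i, int k)
{
    int v = i & ((1 << k) - 1);
    return (v >= (1 << (k - 1))) ? v - (1 << k) : v;
}

int main(void)
{
    int k = 9;                    /* baseline range, quarter pixel */
    int mvx  = -63 * 4;           /* 0xFFFFFF04 in fixed point     */
    int mvpx =  63 * 4;           /* 0x0FC                         */

    int dmvx = rollover(mvx - mvpx, k);        /* transmitted value */
    printf("dMVx = 0x%03X\n", dmvx & 0x1FF);   /* prints 0x008      */

    int rec = rollover(mvpx + dmvx, k);        /* decoder side      */
    assert(rec == mvx);                        /* -63 is recovered  */
    printf("MVx = %d quarter-pels\n", rec);
    return 0;
}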
IX. Extensions
In addition to the embodiments described above, and the previously described variations of those embodiments, the following is a list of possible extensions of some of the described techniques and tools. It is by no means exhaustive.
1. Motion vector ranges can be any integer or fixed point number, with rollover arithmetic carried out appropriately.
2. Additional motion vector modes can be used. For example, a 4MV, ⅛-pixel resolution, six-tap interpolation filter mode can be added to the present four modes. Other modes, including different combinations of motion vector resolutions, filters, and numbers of motion vectors, can also be used. The mode may be signaled per slice, group of pictures (GOP), or other level of data object.
3. For interlace field-coded motion compensation, or for encoders/decoders using multiple reference frames, the index of the field or frame referenced by the motion compensator may be joint coded with extended motion vector information.
4. Other descriptors such as an entropy code table index, fading parameters, etc. may also be joint coded with extended motion vector information.
5. Some of the above descriptions assume a 4:2:0 or 4:1:1 video source. With other color configurations (such as 4:2:2), the number of blocks within a macroblock might change, yet the described techniques and tools can also be applied to the other color configurations.
6. Syntax using the extended motion vector can be extended to more complicated cases, such as 16 motion vectors per macroblock, and other cases.
Having described and illustrated the principles of our invention with reference to various embodiments, it will be recognized that the various embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of embodiments shown in software may be implemented in hardware and vice versa.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.