| H.262 / MPEG-2 Part 2 | |
|---|---|
| Information technology – Generic coding of moving pictures and associated audio information: Video | |
| Status | In force |
| Year started | 1995 |
| First published | May 1996 (1996-05) |
| Latest version | ISO/IEC 13818-2:2013, October 2013 (2013-10) |
| Organization | ITU-T, ISO/IEC JTC 1 |
| Committee | ITU-T Study Group 16 (VCEG), MPEG |
| Base standards | H.261, MPEG-2 |
| Related standards | H.222.0, H.263, H.264, H.265, H.266, ISO/IEC 14496-2 |
| Predecessor | H.261 |
| Successor | H.263 |
| Domain | Video compression |
| License | Expired patents[1] |
| Website | https://www.itu.int/rec/T-REC-H.262 |
H.262[2] or MPEG-2 Part 2 (formally known as ITU-T Recommendation H.262 and ISO/IEC 13818-2,[3] also known as MPEG-2 Video) is a video coding format standardised and jointly maintained by the ITU-T Study Group 16 Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), and developed with the involvement of many companies. It is the second part of the ISO/IEC MPEG-2 standard. The ITU-T Recommendation H.262 and ISO/IEC 13818-2 documents are identical.
The standard is available for a fee from the ITU-T[2] and ISO. MPEG-2 Video is very similar to MPEG-1, but also provides support for interlaced video (an encoding technique used in analog NTSC, PAL and SECAM television systems). MPEG-2 video is not optimized for low bit rates (e.g., less than 1 Mbit/s), but somewhat outperforms MPEG-1 at higher bit rates (e.g., 3 Mbit/s and above), although not by a large margin unless the video is interlaced. All standards-conforming MPEG-2 Video decoders are also fully capable of playing back MPEG-1 Video streams.[4]
The ISO/IEC approval process was completed in November 1994.[5] The first edition was approved in July 1995[6] and published by ITU-T[2] and ISO/IEC in 1996.[7] Didier LeGall of Bellcore chaired the development of the standard[8] and Sakae Okubo of NTT was the ITU-T coordinator and chaired the agreements on its requirements.[9]
The technology was developed with contributions from a number of companies. Hyundai Electronics (now SK Hynix) developed the first MPEG-2 SAVI (System/Audio/Video) decoder in 1995.[10]
The majority of patents that were later asserted in a patent pool to be essential for implementing the standard came from three companies: Sony (311 patents), Thomson (198 patents) and Mitsubishi Electric (119 patents).[11]
In 1996, it was extended by two amendments to include the registration of copyright identifiers and the 4:2:2 Profile.[2][12] ITU-T published these amendments in 1996 and ISO in 1997.[7]
There are also other amendments published later by ITU-T and ISO/IEC.[2][13] The most recent edition of the standard was published in 2013 and incorporates all prior amendments.[3]
| Edition | Release date | Latest amendment | ISO/IEC standard | ITU-T Recommendation |
|---|---|---|---|---|
| First edition | 1995 | 2000 | ISO/IEC 13818-2:1996[7] | H.262 (07/95) |
| Second edition | 2000 | 2010[2][14] | ISO/IEC 13818-2:2000[15] | H.262 (02/00) |
| Third edition | 2013 | | ISO/IEC 13818-2:2013[3] | H.262 (02/12), incorporating Amendment 1 (03/13) |
An HDTV camera with 8-bit sampling generates a raw video stream of 25 × 1920 × 1080 × 3 = 155,520,000 bytes per second for 25 frame-per-second video (using the 4:4:4 sampling format). This stream of data must be compressed if digital TV is to fit in the bandwidth of available TV channels and if movies are to fit on DVDs. Video compression is practical because the data in pictures is often redundant in space and time. For example, the sky can be blue across the top of a picture and that blue sky can persist for frame after frame. Also, because of the way the eye works, it is possible to delete or approximate some data from video pictures with little or no noticeable degradation in image quality.
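For illustration, the arithmetic above can be reproduced directly (a minimal sketch; the figures are the ones from the text, not values prescribed by the standard):

```python
# Raw data rate of 1080p25 video with 4:4:4 sampling and 8 bits per sample.
width, height = 1920, 1080   # luma resolution
fps = 25                     # frames per second
samples_per_pixel = 3        # 4:4:4: one luma and two chroma samples per pixel

bytes_per_second = fps * width * height * samples_per_pixel
print(bytes_per_second)            # 155520000 bytes per second
print(bytes_per_second * 8 / 1e6)  # about 1244 Mbit/s before compression
```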
A common (and old) trick to reduce the amount of data is to separate each complete "frame" of video into two "fields" upon broadcast/encoding: the "top field", which is the odd-numbered horizontal lines, and the "bottom field", which is the even-numbered lines. Upon reception/decoding, the two fields are displayed alternately, with the lines of one field interleaving between the lines of the previous field; this format is called interlaced video. The typical field rate is 50 (Europe/PAL) or 59.94 (US/NTSC) fields per second, corresponding to 25 (Europe/PAL) or 29.97 (North America/NTSC) whole frames per second. If the video is not interlaced, it is called progressive scan video and each picture is a complete frame. MPEG-2 supports both options.
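The field split described above amounts to taking alternate lines of each frame, as in this minimal sketch (using NumPy; the 1-based line numbering of the text maps odd-numbered lines to even 0-based row indices):

```python
import numpy as np

frame = np.arange(6 * 4).reshape(6, 4)  # a toy 6-line "frame"

top_field = frame[0::2, :]     # odd-numbered lines (rows 0, 2, 4, ...)
bottom_field = frame[1::2, :]  # even-numbered lines (rows 1, 3, 5, ...)

# Interleaving the two fields again recovers the original frame.
rebuilt = np.empty_like(frame)
rebuilt[0::2, :] = top_field
rebuilt[1::2, :] = bottom_field
assert (rebuilt == frame).all()
```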
Digital television requires that these pictures be digitized so that they can be processed by computer hardware. Each picture element (a pixel) is then represented by one luma number and two chroma numbers. These describe the brightness and the color of the pixel (see YCbCr). Thus, each digitized picture is initially represented by three rectangular arrays of numbers.
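As an illustration of the luma/chroma representation, one common full-range BT.601 conversion from 8-bit RGB is sketched below (MPEG-2 content also uses other matrices and the studio-swing value range, so this is an example rather than the mandated conversion):

```python
def rgb_to_ycbcr(r, g, b):
    """One common full-range BT.601 RGB -> YCbCr mapping for 8-bit samples."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted brightness
    cb = 128 + 0.564 * (b - y)              # blue-difference chroma
    cr = 128 + 0.713 * (r - y)              # red-difference chroma
    return y, cb, cr                        # in practice, clipped to [0, 255]

print(rgb_to_ycbcr(128, 128, 128))  # neutral gray -> (128.0, 128.0, 128.0)
```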
Another common practice to reduce the amount of data to be processed is to subsample the two chroma planes (after low-pass filtering to avoid aliasing). This works because the human visual system resolves details of brightness better than details in the hue and saturation of colors. The term 4:2:2 is used for video with the chroma subsampled by a ratio of 2:1 horizontally, and 4:2:0 is used for video with the chroma subsampled by 2:1 both vertically and horizontally. Video that has luma and chroma at the same resolution is called 4:4:4. The MPEG-2 Video document considers all three sampling types, although 4:2:0 is by far the most common for consumer video, and there are no defined "profiles" of MPEG-2 for 4:4:4 video (see below for further discussion of profiles).
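A minimal sketch of 4:2:0 subsampling, with 2×2 averaging standing in for a proper low-pass filter (real encoders use better filters, and the exact MPEG-2 chroma sample positions differ slightly):

```python
import numpy as np

def subsample_420(chroma):
    """Halve a chroma plane horizontally and vertically by averaging 2x2 blocks."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb = np.random.randint(0, 256, size=(576, 720)).astype(float)
print(subsample_420(cb).shape)  # (288, 360): one quarter of the original samples
```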
While the discussion below in this section generally describes MPEG-2 video compression, there are many details that are not discussed, including details involving fields, chrominance formats, responses to scene changes, special codes that label the parts of the bitstream, and other pieces of information. Aside from features for handling fields for interlaced coding, MPEG-2 Video is very similar to MPEG-1 Video (and even quite similar to the earlier H.261 standard), so the entire description below applies equally well to MPEG-1.
MPEG-2 includes three basic types of coded frames: intra-coded frames (I-frames), predictive-coded frames (P-frames), and bidirectionally-predictive-coded frames (B-frames).
An I-frame is a separately compressed version of a single uncompressed (raw) frame. The coding of an I-frame takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames, so their coding is very similar to how a still photograph would be coded (roughly similar to JPEG picture coding). Briefly, the raw frame is divided into 8 pixel by 8 pixel blocks. The data in each block is transformed by the discrete cosine transform (DCT). The result is an 8×8 matrix of coefficients that have real number values. The transform converts spatial variations into frequency variations, but it does not change the information in the block; if the transform is computed with perfect precision, the original block can be recreated exactly by applying the inverse transform (also with perfect precision). The conversion from 8-bit integers to real-valued transform coefficients actually expands the amount of data used at this stage of the processing, but the advantage of the transformation is that the image data can then be approximated by quantizing the coefficients. Many of the transform coefficients, usually the higher-frequency components, will be zero after the quantization, which is basically a rounding operation. The penalty of this step is the loss of some subtle distinctions in brightness and color. The quantization may be either coarse or fine, as selected by the encoder. If the quantization is not too coarse and one applies the inverse transform to the matrix after it is quantized, one gets an image that looks very similar to the original image but is not quite the same. Next, the quantized coefficient matrix is itself compressed. Typically, one corner of the 8×8 array of coefficients contains only zeros after quantization is applied. By starting in the opposite corner of the matrix, then zigzagging through the matrix to combine the coefficients into a string, then substituting run-length codes for consecutive zeros in that string, and then applying Huffman coding to that result, one reduces the matrix to a smaller quantity of data. It is this entropy-coded data that is broadcast or put on DVDs. In the receiver or the player, the whole process is reversed, enabling the receiver to reconstruct, to a close approximation, the original frame.
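The transform, quantization, and scan steps just described can be sketched as follows (a naive illustration: MPEG-2 actually uses per-coefficient quantization matrices and specific variable-length-code tables, and the uniform step size of 16 here is arbitrary):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (naive matrix form)."""
    n = 8
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] /= np.sqrt(2.0)  # DC row scaling for orthonormality
    return c @ block @ c.T

# Zigzag scan order: traverse the anti-diagonals starting at the DC corner,
# so low-frequency coefficients come first and trailing zeros cluster together.
zigzag = sorted(((u, v) for u in range(8) for v in range(8)),
                key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0]))

block = np.random.randint(0, 256, size=(8, 8)).astype(float) - 128
quantized = np.round(dct2(block) / 16)  # quantization: the lossy, rounding step
scan = [int(quantized[u, v]) for u, v in zigzag]

# Run-length pairs (zeros_skipped, value), the input to entropy coding.
pairs, run = [], 0
for v in scan:
    if v == 0:
        run += 1
    else:
        pairs.append((run, v))
        run = 0
print(pairs)
```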
Typically, every 15th frame or so is made into an I-frame. P-frames and B-frames might follow an I-frame like this, IBBPBBPBBPBB(I), to form a Group of Pictures (GOP); however, the standard is flexible about this. The encoder selects which pictures are coded as I-, P-, and B-frames.
P-frames provide more compression than I-frames because they take advantage of the data in a previous I-frame or P-frame, known as a reference frame. To generate a P-frame, the previous reference frame is reconstructed, just as it would be in a TV receiver or DVD player. The frame being compressed is divided into 16 pixel by 16 pixel macroblocks. Then, for each of those macroblocks, the reconstructed reference frame is searched to find a 16 by 16 area that closely matches the content of the macroblock being compressed. The offset is encoded as a "motion vector". Frequently, the offset is zero, but if something in the picture is moving, the offset might be something like 23 pixels to the right and 4-and-a-half pixels up. In MPEG-1 and MPEG-2, motion vector values can represent either integer offsets or half-integer offsets. The match between the two regions will often not be perfect. To correct for this, the encoder takes the difference of all corresponding pixels of the two regions, and on that macroblock difference then computes the DCT and strings of coefficient values for the four 8×8 areas in the 16×16 macroblock as described above. This "residual" is appended to the motion vector and the result sent to the receiver or stored on the DVD for each macroblock being compressed. Sometimes no suitable match is found. Then, the macroblock is treated like an I-frame macroblock.
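The block-matching search described above can be sketched as an exhaustive integer-pel search (real encoders use much faster search strategies plus the half-pel refinement mentioned in the text, both omitted here):

```python
import numpy as np

def find_motion_vector(ref, cur, ty, tx, search=7):
    """Find the offset (dy, dx), within +/-search pixels, at which the 16x16
    macroblock of `cur` at (ty, tx) best matches the reconstructed reference
    frame `ref`, using the sum of absolute differences (SAD)."""
    block = cur[ty:ty + 16, tx:tx + 16].astype(int)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ty + dy, tx + dx
            if y < 0 or x < 0 or y + 16 > ref.shape[0] or x + 16 > ref.shape[1]:
                continue  # candidate area falls outside the reference frame
            sad = np.abs(block - ref[y:y + 16, x:x + 16].astype(int)).sum()
            if best is None or sad < best[0]:
                best = (sad, dy, dx)
    _, dy, dx = best
    # The residual is what gets DCT-coded, as for the intra blocks above.
    residual = block - ref[ty + dy:ty + dy + 16, tx + dx:tx + dx + 16].astype(int)
    return (dy, dx), residual
```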
The processing of B-frames is similar to that of P-frames except that B-frames use the picture in a subsequent reference frame as well as the picture in a preceding reference frame. As a result, B-frames usually provide more compression than P-frames. B-frames are never reference frames in MPEG-2 Video.
MPEG-2 video supports a wide range of applications from mobile to high quality HD editing. For many applications, it is unrealistic and too expensive to support the entire standard. To allow such applications to support only subsets of it, the standard defines profiles and levels.
A profile defines sets of features such as B-pictures, 3D video, chroma format, etc. The level limits the memory and processing power needed, defining maximum bit rates, frame sizes, and frame rates.
An MPEG application then specifies its capabilities in terms of profile and level. For example, a DVD player may say it supports up to main profile and main level (often written as MP@ML). This means the player can play back any MPEG stream encoded as MP@ML or less.
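In decoder terms, a claim such as MP@ML amounts to a capability test along the following lines (a simplified sketch: the ordering below covers only the non-scalable profiles, and the actual conformance rules in the standard are more involved):

```python
# Simplified compatibility check for non-scalable MPEG-2 profile/level pairs.
PROFILE_ORDER = {"SP": 0, "MP": 1}                      # Simple < Main
LEVEL_ORDER = {"LL": 0, "ML": 1, "H-14": 2, "HL": 3}    # Low < Main < High 1440 < High

def can_decode(decoder, stream):
    d_profile, d_level = decoder.split("@")
    s_profile, s_level = stream.split("@")
    return (PROFILE_ORDER[s_profile] <= PROFILE_ORDER[d_profile]
            and LEVEL_ORDER[s_level] <= LEVEL_ORDER[d_level])

print(can_decode("MP@ML", "SP@LL"))  # True: within the player's capabilities
print(can_decode("MP@ML", "MP@HL"))  # False: the stream's level is too high
```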
The tables below summarize the limits of each profile and level, though there are additional constraints not listed here.[2]: Annex E  Note that not all profile and level combinations are permissible, and scalable modes modify the level restrictions.
| Abbr. | Name | Picture Coding Types | Chroma Format | Scalable modes | Intra DC Precision |
|---|---|---|---|---|---|
| SP | Simple profile | I, P | 4:2:0 | none | 8, 9, 10 |
| MP | Main profile | I, P, B | 4:2:0 | none | 8, 9, 10 |
| SNR | SNR Scalable profile | I, P, B | 4:2:0 | SNR[a] | 8, 9, 10 |
| Spatial | Spatially Scalable profile | I, P, B | 4:2:0 | SNR,[a] spatial[b] | 8, 9, 10 |
| HP | High profile | I, P, B | 4:2:2 or 4:2:0 | SNR,[a] spatial[b] | 8, 9, 10, 11 |
| 422 | 4:2:2 profile | I, P, B | 4:2:2 or 4:2:0 | none | 8, 9, 10, 11 |
| MVP | Multi-view profile | I, P, B | 4:2:0 | Temporal[c] | 8, 9, 10 |
| Abbr. | Name | Frame rates (Hz) | Max horizontal resolution | Max vertical resolution | Max luminance samples per second (≈ width × height × frame rate) | Max bit rate in MP (Mbit/s) |
|---|---|---|---|---|---|---|
| LL | Low Level | 23.976, 24, 25, 29.97, 30 | 352 | 288 | 3,041,280 | 4 |
| ML | Main Level | 23.976, 24, 25, 29.97, 30 | 720 | 576 | 10,368,000; in High profile, 14,475,600 for 4:2:0 and 11,059,200 for 4:2:2 | 15 |
| H-14 | High 1440 | 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 | 1440 | 1152 | 47,001,600; in High profile, 62,668,800 for 4:2:0 | 60 |
| HL | High Level | 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 | 1920 | 1152 | 62,668,800; in High profile, 83,558,400 for 4:2:0 | 80 |
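The "luminance samples per second" bounds in the table correspond exactly to common picture formats, which can be verified directly (the 1088 below is 1080 rounded up to a whole number of 16-line macroblock rows):

```python
print(352 * 288 * 30)    #  3,041,280  -> Low Level
print(720 * 576 * 25)    # 10,368,000  -> Main Level (equals 720 x 480 x 30 as well)
print(1440 * 1088 * 30)  # 47,001,600  -> High 1440
print(1920 * 1088 * 30)  # 62,668,800  -> High Level
```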
A few common MPEG-2 Profile/Level combinations are presented below, with particular maximum limits noted:
| Profile @ Level | Resolution (px) | Max. frame rate (Hz) | Sampling | Max. bit rate (Mbit/s) | Example applications |
|---|---|---|---|---|---|
| SP@LL | 176 × 144 | 15 | 4:2:0 | 0.096 | Wireless handsets |
| SP@ML | 352 × 288 | 15 | 4:2:0 | 0.384 | PDAs |
| | 320 × 240 | 24 | | | |
| MP@LL | 352 × 288 | 30 | 4:2:0 | 4 | Set-top boxes (STB) |
| MP@ML | 720 × 480 | 30 | 4:2:0 | 15 | DVD (9.8 Mbit/s), SD DVB (15 Mbit/s) |
| | 720 × 576 | 25 | | | |
| MP@H-14 | 1440 × 1080 | 30 | 4:2:0 | 60 | HDV (25 Mbit/s) |
| | 1280 × 720 | 30 | | | |
| MP@HL | 1920 × 1080 | 30 | 4:2:0 | 80 | ATSC (18.3 Mbit/s), SD DVB (31 Mbit/s), HD DVB (50.3 Mbit/s) |
| | 1280 × 720 | 60 | | | |
| 422P@ML | 720 × 480 | 30 | 4:2:2 | 50 | Sony IMX (I-frames only), broadcast contribution (I- and P-frames only) |
| | 720 × 576 | 25 | | | |
| 422P@H-14 | 1440 × 1080 | 30 | 4:2:2 | 80 | |
| 422P@HL | 1920 × 1080 | 30 | 4:2:2 | 300 | Sony MPEG HD422 (50 Mbit/s), Canon XF codec (50 Mbit/s), Convergent Design Nanoflash recorder (up to 160 Mbit/s) |
| | 1280 × 720 | 60 | | | |
MPEG-2 Video is used in a wide range of applications, including DVD-Video, digital television broadcasting (DVB and ATSC), and HDV camcorders, as shown in the examples above.
The following organizations have held patents for MPEG-2 video technology, as listed at MPEG LA. All of these patents are now expired in the US and most other territories.[1]
| Organization | Patents[16] |
|---|---|
| Sony Corporation | 311 |
| Thomson Licensing | 198 |
| Mitsubishi Electric | 119 |
| Philips | 99 |
| GE Technology Development, Inc. | 75 |
| Panasonic Corporation | 55 |
| CIF Licensing, LLC | 44 |
| JVC Kenwood | 39 |
| Samsung Electronics | 38 |
| Alcatel Lucent (including Multimedia Patent Trust) | 33 |
| Cisco Technology, Inc. | 13 |
| Toshiba Corporation | 9 |
| Columbia University | 9 |
| LG Electronics | 8 |
| Hitachi | 7 |
| Orange S.A. | 7 |
| Fujitsu | 6 |
| Robert Bosch GmbH | 5 |
| General Instrument | 4 |
| British Telecommunications | 3 |
| Canon Inc. | 2 |
| KDDI Corporation | 2 |
| Nippon Telegraph and Telephone (NTT) | 2 |
| ARRIS Technology, Inc. | 2 |
| Sanyo Electric | 1 |
| Sharp Corporation | 1 |
| Hewlett-Packard Enterprise Company | 1 |