CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the priority of Korean Patent Application No. 2003-2371, filed on Jan. 14, 2003, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and apparatus for encoding and/or decoding moving pictures, and more particularly, to a method and an apparatus for encoding and/or decoding moving pictures which are capable of enhancing the efficiency of encoding moving pictures by adaptively selecting a quantization matrix in consideration of the characteristics of images input into a moving picture encoder.
2. Description of the Related Art
FIG. 1 is a block diagram of an encoding unit 120 for encoding moving pictures and a decoding unit 140 for decoding encoded moving pictures.
In order to provide a video-on-demand (VOD) service or to enable moving picture communication, the encoding unit 120 creates a bitstream encoded by a compression technique, and the decoding unit 140 restores the original images from a bitstream input thereto.
A discrete cosine transform (DCT) unit 122 carries out a DCT operation on image data input thereto in units of 8×8 pixel blocks in order to remove spatial correlation from the input image data. A quantization unit (Q) 124 carries out highly efficient lossy data compression by quantizing the DCT coefficients obtained by the DCT unit 122 and representing the quantized data by several representative values.
An inverse quantization unit (IQ) 126 inversely quantizes the quantized image data provided by the quantization unit 124. An inverse discrete cosine transform (IDCT) unit 128 carries out an IDCT on the inversely quantized image data provided by the inverse quantization unit 126. A frame memory unit 130 stores the IDCT'ed image data provided by the IDCT unit 128 on a frame-by-frame basis.
A motion estimation and compensation unit (ME/MC) 132 estimates a motion vector (MV) for each macroblock and a sum of absolute differences (SAD), which corresponds to a block matching error, by using the image data of a current frame input thereto and the image data of a previous frame stored in the frame memory unit 130.
A variable length coding unit (VLC) 134 removes statistical redundancy from the discrete cosine transformed and quantized image data according to the estimated motion vector provided by the motion estimation and compensation unit 132.
A bitstream encoded by the encoding unit 120 is decoded by the decoding unit 140. The decoding unit 140 includes a variable length decoding unit (VLD) 142, an inverse quantization unit 144, an IDCT unit 146, a frame memory unit 148, and a motion compensation unit 150.
U.S. Pat. No. 6,480,539 discloses an example of an apparatus for encoding moving pictures.
A set-top box, which receives an analog terrestrial broadcast program and then encodes and stores the received program by using a data compression method such as MPEG-2 or MPEG-4, has recently been developed. However, in the case of a terrestrial broadcast, images arriving at a receiving terminal may be distorted due to channel noise. For example, an image may look as if white Gaussian noise were added thereto. If the image is compressed as it is, the efficiency of compressing the image may be very low due to the influence of the white Gaussian noise.
Therefore, in a conventional method of encoding moving pictures, a pretreatment filter is provided at an input port of an encoder in order to remove such noise. However, using the pretreatment filter requires an additional calculation process for encoding moving pictures.
In addition, in such a conventional method of encoding moving pictures, a quantization matrix is determined irrespective of the characteristics of an input image, and quantization is carried out on the input image by applying the quantization matrix on a picture-by-picture basis, in which case the efficiency of encoding the input image is low.
SUMMARY OF THE INVENTION
The present invention provides a method and an apparatus for encoding and/or decoding moving pictures, which are capable of improving the efficiency and performance of compressing moving pictures.
The present invention also provides a method and an apparatus for encoding and/or decoding moving pictures, which are capable of removing noise without increasing the number of calculations performed.
According to an aspect of the present invention, there is provided a method of encoding moving pictures using a plurality of quantization matrices. The method involves (a) selecting one of the plurality of quantization matrices in consideration of characteristics of an input image; (b) transforming the input image; and (c) quantizing the transformed input image using the selected quantization matrix.
According to another aspect of the present invention, there is provided a method of decoding moving pictures using a plurality of quantization matrices. The method involves (a) carrying out variable length decoding on encoded image data; (b) extracting index information that specifies one of the plurality of quantization matrices, classified according to characteristics of an input image, from the variable-length-decoded image data; (c) selecting one of the plurality of quantization matrices based on the extracted index information; and (d) inversely quantizing each macroblock of the variable-length-decoded image data using the selected quantization matrix.
According to another aspect of the present invention, there is provided an apparatus for encoding moving pictures using a plurality of quantization matrices. The apparatus includes a quantization matrix determination unit that selects one of the plurality of quantization matrices for each macroblock in consideration of characteristics of an input image and generates index information indicating the selected quantization matrix for each macroblock; a quantization matrix storage unit that stores the plurality of quantization matrices, which are classified according to the characteristics of the input image, and outputs a quantization matrix for each macroblock according to the index information generated by the quantization matrix determination unit; an image transformation unit that transforms the input image; and a quantization unit that quantizes the transformed input image using the selected quantization matrix.
According to another aspect of the present invention, there is provided an apparatus for decoding moving pictures using a plurality of quantization matrices. The apparatus includes a variable length decoding unit that receives an encoded image stream, carries out variable length decoding on the input image stream, and extracts index information that specifies one of the plurality of quantization matrices, which are classified according to characteristics of an input image, from each macroblock of the variable-length-decoded image stream; a quantization matrix storage unit that stores the plurality of quantization matrices, selects one of the plurality of quantization matrices based on the extracted index information, and outputs the selected quantization matrix; and an inverse quantization unit that inversely quantizes each macroblock of the variable-length-decoded image stream using the quantization matrix output from the quantization matrix storage unit.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram of a conventional MPEG encoder and a conventional MPEG decoder;
FIG. 2 is a block diagram of an approximated generalized Wiener filter processing an image whose average is not 0;
FIG. 3 is a block diagram of an approximated generalized Wiener filter processing an image whose average is not 0 in a DCT block;
FIGS. 4A through 4C are block diagrams of different types of approximated generalized Wiener filters used for intra-block encoding;
FIG. 5 is a block diagram of a typical video encoder used for inter-block encoding;
FIG. 6 is a block diagram of an apparatus for encoding moving pictures according to an embodiment of the present invention;
FIG. 7 is a block diagram of an apparatus for encoding moving pictures according to another embodiment of the present invention; and
FIG. 8 is a block diagram of an apparatus for decoding moving pictures according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
When it comes to encoding moving pictures, pretreatment filtering is very important because it can increase the efficiency of encoding the moving pictures by removing noise from images. While a conventional pretreatment filtering technique for removing noise from images is generally carried out in the spatial pixel domain, in the present invention, a noise removal technique is carried out in a DCT block in an apparatus for encoding moving pictures.
In the present invention, an approximated generalized Wiener filtering method is used for removing noise from images. In the approximated generalized Wiener filtering method, Wiener filtering is realized by taking advantage of a fast unitary transformation, such as a discrete cosine transform (DCT). However, a filtering method other than the approximated generalized Wiener filtering method may be selectively used for carrying out filtering in a DCT block.
FIG. 2 is a block diagram of an approximated generalized Wiener filter processing image data whose average is not 0.
In FIG. 2, v represents an image block containing noise, and v̂ represents a row-ordered column vector of the filtered image block. Since the average of the image block v is not 0, an average estimation unit 210 estimates an average m̂ of the image block v, and a subtraction unit 220 subtracts the estimated average m̂ from the image block v.
A value z, output from the subtraction unit 220 as a result of the subtraction, is filtered by a filtering unit 230, and the filtering unit 230 outputs filtered data ŷ as a result of the filtering. An addition unit 240 adds the estimated average m̂ of the image block v to the filtered data ŷ and then outputs the desired filtered data v̂ as a result of the addition.
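For illustration, the data flow of FIG. 2 can be sketched in a few lines of Python. The block-mean estimator and the generic filter_fn placeholder below are assumptions made only to show the subtract-filter-add structure; the actual filtering unit 230 is the approximated generalized Wiener filter described by Equations (1) through (3).

```python
import numpy as np

def filter_nonzero_mean_block(v, filter_fn):
    """Sketch of FIG. 2: subtract an estimated average, filter the
    zero-mean residual, then add the average back."""
    m_hat = np.full(v.shape, v.mean())  # average estimation unit 210 (assumed: simple block mean)
    z = v - m_hat                       # subtraction unit 220
    y_hat = filter_fn(z)                # filtering unit 230 (generalized Wiener filter)
    return y_hat + m_hat                # addition unit 240, yielding the filtered block

# Wiring check with an identity "filter"
block = np.random.rand(8, 8)
assert np.allclose(filter_nonzero_mean_block(block, lambda z: z), block)
```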
Hereinafter, an approximated generalized Wiener filtering method for processing an image model whose average is 0 will be described in greater detail.
The approximated generalized Wiener filtering method for processing an image model whose average is 0 can be expressed by Equation (1) below.
ŷ = A*ᵀ[A L A*ᵀ]A z = A*ᵀ L̃ Z (1)
In Equation (1), L̃ = A L A*ᵀ, L = [I + σn² R⁻¹]⁻¹, R = E[y yᵀ], Z = A z, and σn² represents a noise variance value. In addition, in Equation (1), A represents a unitary transformation. Since, in the present embodiment, DCT is used as the unitary transformation, A represents DCT here. Supposing that C₈ and ⊗ represent an 8×8 DCT matrix and the Kronecker product operator, respectively, A = C₈ ⊗ C₈.
Since, in most cases, L̃ is approximately diagonalized by a unitary transformation, Equation (1) can be rearranged into Equation (2) below.
ŷ = A*ᵀ Ŷ (2)
In Equation (2), Ŷ = L̃ Z ≈ [Diag L̃] Z.
Therefore, by applying Equation (2) to an 8×8 block, Equation (3) below can be obtained.
Ŷ(k, l) ≈ p̃(k, l)·Z(k, l) (3)
In Equation (4), Ψ(k, l) represents the normalized elements placed along the diagonal of A R A*ᵀ, and σ² represents the variance value of an original image y. In general, σ² cannot be known. Therefore, σ² is substituted by the result of subtracting the noise variance value σn² from the variance value of z.
As shown in Equation (3), approximated generalized Wiener filtering is carried out on an image block whose average is 0 by multiplying the two-dimensional DCT coefficient Z(k, l) by p̃(k, l). Once ŷ(m, n) is determined, a final filtered image is obtained by adding m̂(m, n) to ŷ(m, n).
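The zero-mean filtering of Equation (3) can be sketched as an elementwise gain applied to the 2-D DCT coefficients. Because Equation (4) is not reproduced above, the gain p̃(k, l) below uses an assumed Wiener-type form, Ψ(k, l)/(Ψ(k, l) + σn²/σ²), and the orthonormal DCT matrix is built directly; both are illustrative assumptions rather than the literal equations of the disclosure.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix C_n, so the 2-D DCT of a block z is C @ z @ C.T."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= np.sqrt(1.0 / n)
    c[1:, :] *= np.sqrt(2.0 / n)
    return c

def wiener_filter_zero_mean(z, psi, noise_var, signal_var):
    """Equation (3) as an elementwise gain: Y_hat(k, l) ~= p_tilde(k, l) * Z(k, l).
    The gain below is an assumed Wiener-type form, not the literal Equation (4)."""
    C = dct_matrix(z.shape[0])
    Z = C @ z @ C.T                                 # 2-D DCT of the zero-mean block
    p_tilde = psi / (psi + noise_var / signal_var)  # assumed form of p_tilde(k, l)
    Y_hat = p_tilde * Z
    return C.T @ Y_hat @ C                          # back to the pixel domain: y_hat(m, n)
```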
Hereinafter, an approximated generalized Wiener filtering method for processing an image model whose average is not 0 will be described in greater detail.
Let us assume that an average block is obtained by multiplying an input DCT block containing noise by S(k, l), i.e., that the average block satisfies Equation (5) below. Then, the approximated generalized Wiener filter of FIG. 3, which carries out addition and subtraction in the DCT block, can be restructured into the approximated generalized Wiener filter of FIG. 4A, 4B, or 4C.
M̂(k, l) = S(k, l)·V(k, l) (5)
By using Equations (3) and (5), an image block filtered in the DCT block can be represented by Equation (6) below.
V̂(k, l) = Ŷ(k, l) + M̂(k, l) = (p̃(k, l)·(1 − S(k, l)) + S(k, l))·V(k, l) = F(k, l)·V(k, l) (6)
F(k, l) in Equation (6) can be expressed by Equation (7) below.
As shown in Equation (6), the entire filtering process can be simplified into a multiplication by F(k, l). Equation (7) shows that F(k, l) is determined by a signal-to-noise ratio (SNR), a covariance matrix, and an average matrix.
In order to determine F(k, l), it is necessary to obtain an average matrix S(k, l). In the present embodiment, among possible candidates for the average matrix S(k, l), the one that satisfies Equation (5) is selected. The average matrix S(k, l) can be represented by Equation (8) below. Equation (8) illustrates one of the simplest forms that the average matrix S(k, l) could take in the DCT block.
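Equation (8) itself is not reproduced above. As a hedged illustration, the sketch below assumes the simplest DCT-block average estimate, an S(k, l) that keeps only the DC coefficient, and combines it with p̃(k, l) into F(k, l) according to Equation (6); the DC-only form of S(k, l) is an assumption made here for illustration only.

```python
import numpy as np

def average_matrix_dc_only(n=8):
    """Assumed simplest S(k, l): the block average lives entirely in the DC coefficient,
    so M_hat(k, l) = S(k, l) * V(k, l) keeps only V(0, 0)."""
    s = np.zeros((n, n))
    s[0, 0] = 1.0
    return s

def filter_gain(p_tilde, s):
    """Equation (6): F(k, l) = p_tilde(k, l) * (1 - S(k, l)) + S(k, l)."""
    return p_tilde * (1.0 - s) + s
```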
Hereinafter, a pretreatment process performed in an apparatus for encoding moving pictures will be described in greater detail with reference to FIGS. 4 and 5.
As described above, an approximated generalized Wiener filtering process can be carried out on an image block whose average is not 0 by multiplying the DCT coefficients of the image block by F(k, l).
FIGS. 4A through 4C are block diagrams of different types of approximated generalized Wiener filters in an apparatus for encoding moving pictures. More specifically, FIGS. 4A through 4C illustrate the structure of an encoding apparatus that processes an intra block. FIGS. 4A and 4B show that an intra block is encoded by carrying out filtering on the intra block in a DCT block and carrying out quantization and variable length coding (VLC) on the filtered intra block without performing an inverse DCT on the filtered intra block. In other words, FIGS. 4A and 4B show that filtering is completed by multiplying the DCT coefficient by F(k, l). In the meantime, quantization is carried out by multiplying or dividing the DCT coefficient by a certain value with reference to a quantization table. The filtering carried out by multiplying the DCT coefficient by F(k, l) and the quantization carried out by multiplying the DCT coefficient by a certain value can be integrated into a single operation, as shown in FIG. 4C.
As shown in FIG. 5, the concepts of the present invention described in FIGS. 4A through 4C can be directly applied to the case where an apparatus for encoding moving pictures processes an inter block, as long as the noise has been removed from the motion-compensated block information p(m, n).
A covariance value Ψ(k, l) is determined depending on whether an input image block is an inter block or an intra block. Therefore, F(k, l) of FIG. 5 may be varied depending on whether the input image block is an inter block or an intra block.
Hereinafter, a method of obtaining an estimated variance value of intra blocks or inter blocks, from each of which its average has been subtracted, will be described in detail with reference to Equation (9) below. Supposing that S represents an N×N (where N=8) block from which the average of the corresponding block has already been subtracted, a variance matrix of the N×N block can be obtained using Equation (9).
Equation (9) has been disclosed by W. Niehsen and M. Brunig in "Covariance Analysis of Motion-Compensated Frame Differences," IEEE Trans. Circuits Syst. Video Technol., June 1999.
An estimated variance value can be obtained by applying Equation (9) to a variety of experimental images. Where an original image block is an intra block, an original image is divided into 8×8 blocks, and then a variance value of each of the 8×8 blocks is calculated. On the other hand, where the original image block is an inter block, an estimated variance value is calculated by applying Equation (9) above to each image block that is determined to be an inter block.
By using the estimated covariance value, the following equation is obtained: R = E[y yᵀ]. Thereafter, by carrying out DCT on R, the following equation is obtained: Ψ = A R A*ᵀ.
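As a hedged illustration of the step R = E[y yᵀ] followed by Ψ = A R A*ᵀ, the sketch below estimates R from a set of zero-mean 8×8 training blocks (row-ordered into 64-element vectors) and transforms it with A = C₈ ⊗ C₈. The use of sample averaging over training blocks is an assumption standing in for the experiment described around Equation (9).

```python
import numpy as np

def transform_domain_covariance(blocks, n=8):
    """blocks: iterable of zero-mean n x n arrays. Returns Psi = A R A^T with A = C_n (x) C_n."""
    k = np.arange(n).reshape(-1, 1)
    C = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)                           # orthonormal DCT-II matrix C_n
    A = np.kron(C, C)                                      # unitary transform on row-ordered vectors
    vecs = np.stack([np.asarray(b, dtype=float).reshape(-1) for b in blocks])
    R = vecs.T @ vecs / len(vecs)                          # sample estimate of E[y y^T]
    return A @ R @ A.T                                     # Psi in the DCT domain
```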
Hereinafter, a method of calculating the ratio σn²/σ² of Equation (7) will be described.
In Equation (7), the noise variance value σn² can be obtained by using a noise estimator. Given that noise and original image pixels are independent random variables, an estimated value σ̂² of the variance σ² of an original image can be calculated using Equation (10) below.
σ̂² = max(σz² − σn², 0) (10)
In Equation (10), σz² represents the variance value of each macroblock (MB). In a typical apparatus for encoding moving pictures, σz² is calculated on a macroblock-by-macroblock basis. In the present embodiment, the 8×8 blocks in the same macroblock are assumed to have the same variance value. Therefore, there is no need to perform additional calculations to obtain a variance value for each of the 8×8 blocks.
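A direct rendering of Equation (10) is given below, with σz² computed from the luminance samples of a macroblock; reusing the single resulting value for all four 8×8 blocks of that macroblock follows the assumption stated above, and the 16×16 macroblock size is likewise taken as an assumption here.

```python
import numpy as np

def estimated_signal_variance(macroblock, noise_var):
    """Equation (10): sigma_hat^2 = max(sigma_z^2 - sigma_n^2, 0), computed once per macroblock."""
    sigma_z2 = float(np.var(macroblock))  # variance of the (16x16) macroblock samples
    return max(sigma_z2 - noise_var, 0.0)
```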
FIG. 6 is a block diagram of an apparatus for encoding moving pictures according to an embodiment of the present invention that encodes an input image in consideration of the characteristics of the input image.
In the present embodiment, a level of noise contained in the input image is adaptively reflected in a quantization matrix.
Hereinafter, the structure and operation of the apparatus for encoding moving pictures according to a preferred embodiment of the present invention will be described in detail with reference to FIGS. 1 through 6.
The apparatus of FIG. 6 includes a discrete cosine transform (DCT) unit 610, a quantization unit (Q) 620, a variable length coding unit (VLC) 670, an inverse quantization unit (IQ) 630, an inverse DCT unit (IDCT) 640, a frame memory unit 650, and a motion estimation and compensation unit 660, which correspond to the DCT unit 122, the quantization unit 124, the VLC unit 134, the inverse quantization unit 126, the inverse DCT unit 128, the frame memory unit 130, and the motion estimation and compensation unit 132, respectively, of the encoding unit 120 of FIG. 1. In addition, the apparatus further includes a noise estimation unit 680, a quantization weight matrix determination unit 692, and a quantization weight matrix storage unit 694.
Since the DCT unit 610, the inverse DCT unit 640, the frame memory unit 650, and the motion estimation and compensation unit 660 serve the same functions as their respective counterparts of FIG. 1, their description will not be repeated.
The quantization weight matrix determination unit 692 determines a quantization weight matrix corresponding to a predetermined macroblock based on a noise variance value σn² received from the noise estimation unit 680 and the predetermined macroblock's variance value σz² received from the motion estimation and compensation unit 660. Thereafter, the quantization weight matrix determination unit 692 sends index information corresponding to the determined quantization weight matrix to the quantization weight matrix storage unit 694 and the VLC unit 670.
Hereinafter, a method of determining a quantization weight matrix corresponding to the predetermined macroblock based on σn² received from the noise estimation unit 680 and σz² received from the motion estimation and compensation unit 660 will be described in detail.
As described above with reference to Equation (8) and FIGS. 4 and 5, F(k, l) is determined by Equation (7). Once F(k, l) is determined, the DCT coefficient V(k, l) of an 8×8 block is multiplied by F(k, l), and the result of the multiplication, V̂(k, l), is divided by a predetermined quantization weight matrix during a quantization process.
The apparatus of FIG. 6 integrates the process of multiplying the DCT coefficient V(k, l) by F(k, l) and the process of dividing V̂(k, l) by the quantization weight matrix into a single process. In other words, if the (k, l) component of a quantization weight matrix QT is represented by Q(k, l), then the (k, l) component of a new quantization weight matrix QT′ is Q(k, l)/F(k, l).
In the present embodiment, in order to integrate the two separate processes into a single process, a plurality of F matrices obtained using σn² and σz² are computed in advance, the new quantization weight matrices QT′ are then computed using the plurality of F matrices, and the resulting matrices are stored in the quantization weight matrix storage unit 694.
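Folding the multiplication by F(k, l) into the division by the quantization weight matrix can be sketched as below. The base matrix Q and the candidate F matrices are placeholders supplied by the caller, and the small epsilon guard against division by zero is an addition of this sketch.

```python
import numpy as np

def fold_filter_into_quantizer(Q, F, eps=1e-6):
    """QT'(k, l) = Q(k, l) / F(k, l): dividing a DCT coefficient by QT' both filters
    (multiplication by F) and quantizes (division by Q) in a single step."""
    return Q / np.maximum(F, eps)

def precompute_matrix_bank(Q, F_candidates):
    """Compute the new weight matrices in advance, one per candidate F matrix, so the
    encoder only has to select an index at run time."""
    return [fold_filter_into_quantizer(Q, F) for F in F_candidates]
```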
In addition, in the present embodiment, five new quantization weight matrices obtained using σn² and σz² are stored in the quantization weight matrix storage unit 694. Once σn² and σz² are determined, σ̂² can be calculated using Equation (10).
As shown in Equation (7), F(k, l) is determined by S(k, l), Ψ(k, l), and the ratio σn²/σ². S(k, l) is calculated using Equation (8), and Ψ(k, l) is variably set depending on whether the input image block is an inter block or an intra block. Therefore, there is only one variable left for determining F(k, l), i.e., σn²/σ², which in practice is estimated as σn²/σ̂². In the present embodiment, five different estimates of σn²/σ̂² and their respective quantization weight matrices QT′ are provided. The provided quantization weight matrices QT′ are stored in the quantization weight matrix storage unit 694.
The quantization weight matrix determination unit 692 quantizes σn²/σ̂² based on σn² received from the noise estimation unit 680 and σz² received from the motion estimation and compensation unit 660. The result of the quantization is transmitted to the quantization weight matrix storage unit 694 and the VLC unit 670 as index information of a quantization matrix corresponding to the predetermined macroblock.
For example, if the quantization weight matrices stored in the quantization weight matrix storage unit 694 are classified into five different types according to σn²/σ̂², the quantization of σn²/σ̂² is carried out in five levels, and the index information of each of the five quantization weight matrices is set to 0, 1, 2, 3, or 4.
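The index selection can be sketched as a simple quantization of the noise-to-signal ratio into five levels. The decision thresholds below are illustrative placeholders, since no threshold values are specified above.

```python
import numpy as np

def select_matrix_index(noise_var, mb_var, thresholds=(0.05, 0.1, 0.2, 0.4)):
    """Quantize sigma_n^2 / sigma_hat^2 into one of five levels (index 0..4).
    The threshold values are illustrative assumptions."""
    signal_var = max(mb_var - noise_var, 1e-6)       # Equation (10), guarded against zero
    ratio = noise_var / signal_var
    return int(np.searchsorted(thresholds, ratio))   # 0 for clean blocks, 4 for very noisy ones
```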
In an image with a lot of noise, the ratio σn²/σ̂², especially for blocks having a small variance value, is very large. When σn²/σ̂² is very large, F(k, l) approaches 0, resulting in a severe blocking phenomenon. In order to prevent the blocking phenomenon, Tcutoff is used, as shown in Equation (11) below.
In general, Tcutoff has a value between 1 and 2.
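Equation (11) is not reproduced above. One reading consistent with the surrounding text is that the noise-to-signal ratio used in forming F(k, l) is clipped at Tcutoff so that F(k, l) cannot collapse toward 0; the sketch below implements that reading and should be taken as an assumption, not as the literal Equation (11).

```python
def clipped_noise_to_signal_ratio(noise_var, signal_var, t_cutoff=1.5):
    """Assumed reading of Equation (11): cap the noise-to-signal ratio at T_cutoff
    (typically between 1 and 2) so that F(k, l) stays bounded away from 0."""
    ratio = noise_var / max(signal_var, 1e-6)
    return min(ratio, t_cutoff)
```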
The quantization weight matrix storage unit 694 transmits a quantization weight matrix corresponding to the index information received from the quantization weight matrix determination unit 692 to the quantization unit 620 and the inverse quantization unit 630.
The quantization unit 620 quantizes the predetermined macroblock using the quantization weight matrix received from the quantization weight matrix storage unit 694.
The inverse quantization unit 630 inversely quantizes the predetermined macroblock using the received quantization weight matrix.
The VLC unit 670 carries out VLC on the input image data quantized by the quantization unit 620 and inserts the index information of the quantization weight matrix received from the quantization weight matrix determination unit 692 into a macroblock header.
In the present embodiment, the index information of the corresponding quantization weight matrix is inserted into the macroblock header, and the macroblock header is transmitted. If there are ten quantization weight matrices stored in the quantization weight matrix storage unit 694, then 4-bit data is required for each macroblock.
Adjacent macroblocks are expected to have similar image characteristics, and there is thus expected to be a correlation among their index values. Therefore, a difference between an index value of one macroblock and an index value of an adjacent macroblock may be used as index information. The amount of index information to be transmitted can be considerably reduced in cases where a single quantization weight matrix is applied to an entire sequence.
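The differential index signalling suggested here can be sketched as follows: transmit the first macroblock's index as-is and each subsequent index as the difference from the previous macroblock. The raster-scan neighbourhood and the absence of entropy coding are simplifying assumptions of this sketch.

```python
def encode_indices_differentially(indices):
    """Send the first index absolutely and the rest as differences from the previous
    macroblock (raster order assumed), exploiting the correlation between neighbours."""
    return [indices[0]] + [cur - prev for prev, cur in zip(indices, indices[1:])]

def decode_indices_differentially(diffs):
    indices = [diffs[0]]
    for d in diffs[1:]:
        indices.append(indices[-1] + d)
    return indices

# Example: indices [2, 2, 3, 3, 1] are sent as [2, 0, 1, 0, -2]
assert decode_indices_differentially(encode_indices_differentially([2, 2, 3, 3, 1])) == [2, 2, 3, 3, 1]
```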
In the present embodiment, the plurality of quantization weight matrices stored in the quantization weight matrix storage unit 694 should also be stored in a decoding unit. It may also be possible to use a plurality of quantization weight matrices transmitted to the decoding unit on a picture-by-picture basis using a picture extension header or transmitted to the decoding unit on a sequence-by-sequence basis using a sequence extension header.
As described above, it is possible to remove noise from an input image and enhance the efficiency of encoding the input image by adaptively applying a quantization matrix to each macroblock in consideration of a level of noise contained in the input image.
It is also possible for a user to arbitrarily determine quantization weight matrices. In the present embodiment, noise removal has been described as being performed on a Y component of an input image block in a DCT block. However, the noise removal can also be applied to a U or V component of the input image block, in which case additional quantization weight matrices are required exclusively for the U and V components of the input image block.
FIG. 7 is a block diagram of an apparatus for encoding moving pictures according to another preferred embodiment of the present invention that encodes an input image in consideration of the characteristics of the input image.
More specifically, among various characteristics of an input image, the edge characteristics of each macroblock of the input image are taken into consideration in the present embodiment.
Referring to FIG. 7, an apparatus for encoding moving pictures according to another preferred embodiment of the present invention includes a DCT unit 710, a quantization unit 720, a VLC unit 770, an inverse quantization unit 730, an inverse DCT unit 740, a frame memory unit 750, and a motion estimation and compensation unit 760, which correspond to the DCT unit 122, the quantization unit 124, the VLC unit 134, the inverse quantization unit 126, the inverse DCT unit 128, the frame memory unit 130, and the motion estimation and compensation unit 132, respectively, of the encoding unit 120 of FIG. 1. In addition, the apparatus further includes a quantization matrix determination unit 780 and a quantization matrix storage unit 790. Since the DCT unit 710, the inverse DCT unit 740, the frame memory unit 750, the motion estimation and compensation unit 760, and the VLC unit 770 serve the same functions as their respective counterparts of FIG. 1, their description will not be repeated.
The quantization matrix determination unit 780 selects an optimal quantization matrix for each macroblock in consideration of the characteristics of an input image and then transmits index information of the selected quantization matrix to the quantization matrix storage unit 790 and the VLC unit 770.
The quantization matrix determination unit 780 takes the edge characteristics of each macroblock into consideration as a criterion for selecting one out of a predetermined number of quantization matrices.
Hereinafter, a method of selecting a quantization matrix in consideration of the edge characteristics of a macroblock will be described in detail.
In a case where a predetermined macroblock of an input image is an intra block, the size and direction of an edge in each pixel of the predetermined macroblock are computed using an edge detector such as a Sobel operator. The Sobel operation can be represented by Equation (12).
The quantization matrix determination unit 780 calculates the magnitude of a vertical edge and the magnitude of a horizontal edge using Equation (12) above and calculates the intensity and direction of an edge of the predetermined macroblock using the magnitudes of the vertical and horizontal edges. Thereafter, the quantization matrix determination unit 780 selects one from among a predetermined number of quantization matrices in consideration of the intensity and direction of the edge of the predetermined macroblock and encoding efficiency. In other words, in a case where the predetermined macroblock includes a horizontal or vertical edge, the quantization matrix determination unit 780 selects a quantization matrix that enables quantization in full consideration of the horizontal or vertical edge of the predetermined macroblock.
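Equation (12) is not reproduced above, but the Sobel operator it refers to is the standard pair of 3×3 gradient masks. The sketch below applies those masks to a macroblock and reduces the per-pixel gradients to a single edge intensity and a dominant direction; the reduction scheme (mean magnitude and magnitude-weighted angle) is an assumption of this sketch rather than the method mandated above.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient mask
SOBEL_Y = SOBEL_X.T                                                     # vertical gradient mask

def macroblock_edge_features(mb):
    """Apply the standard Sobel masks to a (16x16) macroblock and return an overall
    edge intensity and a dominant direction in radians (reduction scheme assumed)."""
    mb = np.asarray(mb, dtype=float)
    h, w = mb.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            window = mb[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * SOBEL_X)
            gy[i, j] = np.sum(window * SOBEL_Y)
    magnitude = np.hypot(gx, gy)
    intensity = float(magnitude.mean())
    weight = magnitude.sum() or 1.0
    direction = float(np.sum(np.arctan2(gy, gx) * magnitude) / weight)
    return intensity, direction
```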
In a case where the predetermined macroblock is an inter block, the intensity and direction of an edge included in the predetermined macroblock can also be obtained using an edge detector such as a Sobel operator.
In the present embodiment, a Sobel operator is used for computing the intensity and direction of an edge included in the predetermined macroblock. However, a spatial filter, such as a differential filter or a Roberts filter, can also be used for computing the intensity and direction of the edge included in the predetermined macroblock.
In addition, in the present embodiment, a quantization matrix is selected in consideration of the edge characteristics of the predetermined macroblock. However, other characteristics of the predetermined macroblock that can affect encoding efficiency or the quality of an output image can be taken into consideration in adaptively selecting an optimal quantization matrix for the predetermined macroblock.
The quantization matrix storage unit 790 selects a quantization matrix based on the index information received from the quantization matrix determination unit 780 and transmits the selected quantization matrix to the quantization unit 720 and the inverse quantization unit 730.
The quantization unit 720 carries out quantization using the quantization matrix received from the quantization matrix storage unit 790.
The inverse quantization unit 730 carries out inverse quantization using the quantization matrix received from the quantization matrix storage unit 790.
The VLC unit 770 carries out VLC on the quantized input image data received from the quantization unit 720 and on the index information of the quantization matrix corresponding to the predetermined macroblock received from the quantization matrix determination unit 780. The index information is inserted into a macroblock header.
In the present embodiment, index information of a quantization matrix corresponding to a predetermined macroblock is inserted into a header of the predetermined macroblock and then transmitted. A difference between an index value of one macroblock and an index value of an adjacent macroblock may be used as the index information.
In the present embodiment, the plurality of quantization matrices stored in the quantization matrix storage unit 790 are also stored in a decoding unit. However, it may also be possible to use a plurality of quantization matrices transmitted to the decoding unit on a picture-by-picture basis using a picture extension header or transmitted to the decoding unit on a sequence-by-sequence basis using a sequence extension header.
FIG. 8 is a block diagram of an apparatus for decoding moving pictures according to an embodiment of the present invention. Referring to FIG. 8, the apparatus includes a variable length decoding unit 810, an inverse quantization unit 820, an inverse DCT unit 830, a frame memory unit 840, and a motion compensation unit 850, which correspond to the variable length decoding unit 142, the inverse quantization unit 144, the inverse DCT unit 146, the frame memory unit 148, and the motion compensation unit 150, respectively, of the decoding unit 140 of FIG. 1. In addition, the apparatus further includes a quantization weight matrix storage unit 860. The inverse DCT unit 830, the frame memory unit 840, and the motion compensation unit 850 serve the same functions as their respective counterparts of FIG. 1, and thus their description will not be repeated here.
The variable length decoding unit 810 carries out variable length decoding on an input stream, extracts index information of a quantization weight matrix corresponding to a predetermined macroblock of the input stream from a header of the predetermined macroblock, and outputs the extracted index information to the quantization weight matrix storage unit 860.
The quantization weight matrix storage unit 860 outputs a quantization weight matrix corresponding to the index information received from the variable length decoding unit 810 to the inverse quantization unit 820. The quantization weight matrix storage unit 860 stores a plurality of quantization weight matrices, which are classified according to the characteristics of the input image processed by an encoding unit, for example, the ratio between a noise variance value and an input image variance value, or the edge characteristics of the input image.
The plurality of quantization weight matrices stored in the quantization weight matrix storage unit 860 can be transmitted on a picture-by-picture basis using a picture extension header or on a sequence-by-sequence basis using a sequence extension header. In this case, the plurality of quantization weight matrices are transmitted from the variable length decoding unit 810 to the quantization weight matrix storage unit 860, as marked by a dotted line in FIG. 8.
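On the decoding side, the matrix selection reduces to a table lookup followed by an elementwise multiplication. The sketch below assumes that the stored bank is indexed directly by the value carried in the macroblock header and that quantization was performed as a division by the weight matrix, so inverse quantization is the corresponding multiplication.

```python
import numpy as np

class QuantWeightMatrixStore:
    """Holds the bank of quantization weight matrices known to both encoder and decoder
    (or received via picture / sequence extension headers)."""
    def __init__(self, matrices):
        self.matrices = [np.asarray(m, dtype=float) for m in matrices]

    def lookup(self, index):
        return self.matrices[index]

def inverse_quantize_block(levels, store, index):
    """Reconstruct DCT coefficients for one 8x8 block: multiply the decoded levels by the
    weight matrix selected by the macroblock's index information."""
    return np.asarray(levels, dtype=float) * store.lookup(index)
```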
The present invention can be applied to different types of methods and apparatuses for encoding and/or decoding moving pictures, such as MPEG-1, MPEG-2, or MPEG-4. In addition, the present invention can be realized as computer-readable code written on a computer-readable recording medium. The computer-readable recording medium includes any type of recording device on which data can be written in a computer-readable manner. For example, the computer-readable recording medium includes ROM, RAM, CD-ROM, a magnetic tape, a hard disk, a floppy disk, flash memory, an optical data storage, and a carrier wave (such as data transmission through the Internet). In addition, the computer-readable recording medium can be distributed over a plurality of computer systems which are connected to one another over a network so that computer-readable code is stored on the computer-readable recording medium in a decentralized manner.
As described above, in the methods of encoding and/or decoding moving pictures according to the embodiments of the present invention, a quantization matrix is adaptively applied to each macroblock of an input image in consideration of the characteristics of the input image. Thus, it is possible to enhance the efficiency and performance of encoding the input image.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.