1. SCOPE OF THE INVENTION

The invention relates to the general field of compression and coding of pictures. The invention relates more specifically to a method for coding, in the form of a coded data stream, a block of a picture and a method for decoding such a stream with a view to the reconstruction of this block. The invention also relates to a coding device and a decoding device implementing said methods.
2. PRIOR ART

A transcoding device is used to modify the coding cost of a sequence of pictures. Indeed, it is sometimes necessary to transfer a coded data stream representative of a sequence of pictures from a first network of bandwidth B1 to a second network of bandwidth B2, where B1>B2. For this purpose, a transcoding device is used to modify the coding cost of said sequence of pictures, i.e. the number of bits used to encode it. Such a transcoding device also enables a coded data stream to be adapted to the resources of a terminal, or even such a stream to be inserted into a multiplex.
A transcoding device 1 of the FPDT type according to the prior art is shown in FIG. 1. It is notably described by G. J. Keesman in the document entitled "Multi-program Video Data Compression", Thesis Technische Universität Delft, ISBN 90-74445-20-9, 1995. Such a transcoding device 1 receives at its input a first coded data stream S1 representative of a sequence of pictures. The input of the transcoding device is connected to an entropy decoding module VLD, itself connected to a first inverse quantization module IQ1. The decoding module VLD decodes part of the first coded data stream into current picture data I that are then dequantized by the first dequantization module IQ1 into dequantized data ID with a first quantization step. This first quantization step is itself decoded from the stream S1. In general, the picture data I are in the form of blocks of coefficients. The dequantization module IQ1 is connected to a first input of a first computation module C1. The first computation module C1 is suitable for calculating residual data R. For this purpose, the first computation module C1 computes the difference between the current dequantized data ID and prediction data PT sent to a second input of the first computation module C1. The output of the first computation module C1 is connected to the input of a quantization module Q2 suitable for quantizing the residual data R into quantized residual data RQ with a second quantization step. The second quantization step is determined according to the required bitrate B2. The quantized residual data RQ are then transmitted to an entropy coding module VLC to generate part of the second coded data stream S2. They are also sent to a second dequantization module IQ2 applying the inverse quantization to that applied by the quantization module Q2 and generating dequantized residual data RD. The dequantized residual data RD are then transmitted to a first input of a second computation module C2.
The second computation module C2 is suitable for computing requantization error data E. For this purpose, the second computation module C2 computes the difference between the dequantized residual data RD and the corresponding residual data R sent to a second input of the second computation module C2. The output of the second computation module C2 is connected to the input of a first transformation module IDCT applying a first transform to the requantization error data E to generate requantization errors in the spatial domain, also called pixel domain, denoted transformed requantization error data EP. The IDCT module preferentially applies an Inverse Discrete Cosine Transform. The transformed requantization error data EP are stored in a memory MEM. The memory MEM is also connected to a prediction module PRED suitable for generating intermediate prediction data P from the transformed requantization error data EP stored in the memory MEM. The prediction module PRED implements, for example, a temporal prediction by motion compensation using motion vectors MV decoded from the coded data stream S1 in the case where the current dequantized data ID are in INTER mode. It can also implement a spatial prediction, for example in the case where the current dequantized data are in INTRA mode as defined in the video coding standard H.264. The intermediate prediction data P are then sent to the input of a second transformation module DCT that applies a second transform to said intermediate prediction data P to generate the prediction data PT. The DCT module preferentially applies a Discrete Cosine Transform.
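The FPDT data flow above can be sketched as a minimal scalar model (illustrative only: uniform quantizers stand in for IQ1, Q2 and IQ2, and the VLD/VLC entropy coders, the DCT/IDCT pair and motion-compensated prediction are omitted):

```python
# Minimal scalar sketch of the FPDT transcoder of FIG. 1 (illustrative):
# uniform quantizers stand in for IQ1, Q2 and IQ2; entropy coding
# (VLD/VLC), the DCT/IDCT pair and motion compensation are omitted.

def quantize(value, step):
    return round(value / step)

def dequantize(level, step):
    return level * step

def fpdt_transcode(levels_q1, step1, step2):
    """Requantize q1-levels with step2; return (q2-levels, errors E)."""
    out, errors = [], []
    prediction = 0                    # prediction data PT, held at 0 for simplicity
    for lvl in levels_q1:
        ID = dequantize(lvl, step1)   # IQ1: dequantized data ID
        R = ID - prediction           # C1: residual data R
        RQ = quantize(R, step2)       # Q2: requantized residual RQ
        RD = dequantize(RQ, step2)    # IQ2: dequantized residual RD
        errors.append(RD - R)         # C2: requantization error data E
        out.append(RQ)
    return out, errors

levels2, errors = fpdt_transcode([10, -4], step1=2, step2=5)
```

The non-zero entries of `errors` are the requantization errors that, fed back through the IDCT/PRED/DCT chain, produce the drift effect discussed next.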
Such a transcoding device 1 has the disadvantage of leading to a temporal or spatial drift effect. Indeed, the estimation of the requantization errors made while transcoding picture data that serve as a temporal or spatial reference for other picture data is not perfect. A bias is introduced that accumulates along a group of pictures known as a GOP (Group of Pictures), or even within pictures in the case of INTRA prediction, leading to a progressive deterioration of the quality of said pictures until an INTRA type picture is transcoded.
3. SUMMARY OF THE INVENTION

The purpose of the invention is to compensate for at least one disadvantage of the prior art. For this purpose, the invention relates to a method for coding a block of a picture belonging to a sequence of pictures. This block comprises pixels with each of which at least one picture data is associated. The coding method comprises the following steps:
a) determining a prediction coefficient of a DC coefficient of a block from a DC coefficient of at least one previously reconstructed reference block,
b) determining, for each pixel of the block, a prediction value such that the average of the prediction values is proportional to the prediction coefficient, to within a proportionality coefficient,
c) calculating, for each pixel of the block, a residual value by subtracting from the picture data of the pixel the prediction value of the pixel,
d) transforming the block of residual values by a first transform into a first block of coefficients,
e) replacing, in the first block of coefficients, the DC coefficient by the difference between, on the one hand, the product of the proportionality coefficient and the average of the picture data of the block and, on the other hand, the prediction coefficient, and
f) quantizing and coding the first block of coefficients.
The proportionality coefficient depends on the first transform.
According to a specific aspect of the invention, the steps a), b), c), d) and e) are applied to a plurality of spatially neighbouring blocks and the method comprises, before the step of quantizing and coding, a step of transformation by a second transform of at least a part of the coefficients of the first blocks of coefficients into a second block of coefficients.
In the particular case where the block is an INTRA block, the prediction values of pixels of the block are determined as follows:
Xpred=Xn−Avg(Xn)+DCpred/R
where:
R is the proportionality coefficient,
Xn are the previously reconstructed values of pixels of neighbouring blocks used for the prediction of the block,
Avg(.) is the average function, and
DCpred is the prediction coefficient.
In the particular case where the block is an INTER block, said prediction values (Xpred) of pixels of the block are determined as follows:
Xpred=MV(Xref)−Avg(MV(Xref))+DCpred/R
where:
Xref are the previously reconstructed values of pixels of reference blocks used for the prediction of the block,
MV(.) is a motion compensation function, and
Avg(.) is the average function.
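Both formulas amount to shifting the neighbouring (or motion-compensated) values so that their average becomes DCpred/R, which forces the constraint DCpred = R*Avg(Xpred). A sketch in exact arithmetic (helper names and sample values are illustrative, not from the standard):

```python
# Sketch of the prediction rule of steps a) and b): in exact arithmetic,
# Xpred = Xn - Avg(Xn) + DCpred/R guarantees DCpred = R * Avg(Xpred).
# Names and sample values are illustrative.

def avg(values):
    return sum(values) / len(values)

def predict(xn, dc_pred, R):
    """Prediction values whose average, multiplied by R, equals dc_pred."""
    m = avg(xn)
    return [x - m + dc_pred / R for x in xn]

xn = [100, 104, 96, 104]        # previously reconstructed neighbouring pixels
xpred = predict(xn, dc_pred=1632, R=16)
# R * Avg(xpred) = 1632 = DCpred, whatever the values of xn.
```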
The invention also relates to a method for decoding a stream of coded data representative of a block of a picture belonging to a sequence of pictures with a view to the reconstruction of the block. The method comprises the following steps:
determining a prediction coefficient of a DC coefficient of a block from a DC coefficient of at least one previously reconstructed reference block,
decoding the coded data representative of the block to reconstruct coefficients,
inverse quantizing the coefficients of the block into dequantized coefficients,
inverse transforming, by an inverse transform, the dequantized coefficients into residual values,
determining a prediction value for each of the pixels of the block such that the average of the prediction values of the block is proportional to the prediction coefficient, to within a proportionality coefficient, the proportionality coefficient depending on the transform, and
reconstructing for each pixel of the block a picture data by summing for the pixel the prediction value and the residual value corresponding to the pixel.
The invention also relates to a device for coding a sequence of pictures, each picture of the sequence being divided into blocks of pixels with each of which at least one picture data is associated. The coding device comprises:
a prediction module for determining a prediction coefficient of a DC coefficient of a block of a picture of the sequence from a DC coefficient of at least one previously reconstructed reference block, and a prediction value such that the average of the prediction values is proportional to the prediction coefficient, to within a proportionality coefficient,
a calculation module for calculating, for each of the pixels of the block, a residual value by subtracting from the picture data of the pixel the prediction value of the pixel,
a transformation module for transforming the block of residual values by a first transform into a first block of coefficients, for replacing, in the first block of coefficients, the DC coefficient by the difference between, on the one hand, the product of the proportionality coefficient and the average of the picture data of the block and, on the other hand, the prediction coefficient, and for quantizing the first block of coefficients, and
a coding module for coding the first block of coefficients,
the proportionality coefficient depending on the first transform.
Moreover, the invention also relates to a device for decoding a stream of coded data representative of a sequence of pictures, each picture being divided into blocks of pixels with each of which at least one picture data is associated. The decoding device comprises:
a decoding module for decoding the coded data representative of a block of a picture of the sequence to reconstruct coefficients,
a module for applying an inverse quantization and an inverse transform on said coefficients to generate residual values,
a prediction module for determining a prediction coefficient of a DC coefficient of a block from the DC coefficient of at least one previously reconstructed reference block, and a prediction value such that the average of the prediction values is proportional to the prediction coefficient, to within a proportionality coefficient, the proportionality coefficient depending on the transform, and
a reconstruction module for reconstructing for each pixel of the block a picture data by summing for the pixel the prediction value and the residual value corresponding to the pixel.
4. LIST OF FIGURES

The invention will be better understood and illustrated by means of non-restrictive embodiments and advantageous implementations, with reference to the accompanying drawings, wherein:
FIG. 1 shows a transcoding device according to the prior art,
FIG. 2 shows a diagram of the coding method according to a first embodiment of the invention,
FIG. 3 shows a diagram of the coding method according to a second embodiment of the invention,
FIG. 4 shows the transformation steps of the method according to a second embodiment of the invention,
FIG. 5 shows the spatial prediction method according to a first INTRA coding mode,
FIG. 6 shows the spatial prediction method according to a second INTRA coding mode,
FIG. 7 shows the spatial prediction method according to a third INTRA coding mode,
FIG. 8 shows the spatial prediction method according to a fourth INTRA coding mode,
FIG. 9 shows the temporal prediction method according to an INTER coding mode,
FIG. 10 shows a diagram of the decoding method according to the invention,
FIG. 11 shows a coding device according to the invention, and
FIG. 12 shows a decoding device according to the invention.
5. DETAILED DESCRIPTION OF THE INVENTION

Let Xsrc be a block of N pixels or picture points belonging to a picture. With each pixel i of the block Xsrc is associated at least one picture data Xsrc(i), for example a luminance value and/or chrominance values.
Assume that the picture data are transformed by a transform T, then:
T(Xsrc)={Coef(i), i=0, . . . , N−1}={DC, AC(i), i=1, . . . , N−1}
where DC is the continuous component and the AC(i) are the components known as alternating or non-continuous components.
Due to a notable property of T, the following relationship is verified:
DC=R*Avg(Xsrc)
where Avg(.) is the average function and R is a proportionality coefficient that depends on the transform T. For example, if T is the 4×4 DCT (Discrete Cosine Transform) transform, R=16.
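This property can be checked numerically, for example with the 4×4 integer core transform of H.264 (a DCT approximation whose first basis row is all ones, so that the DC coefficient is the plain sum of the 16 pixels and R = 16; the sample block is illustrative):

```python
# Check of DC = R * Avg(Xsrc) with R = 16, using the H.264-style 4x4
# integer core transform Y = C X C^T (a DCT approximation): its first
# basis row is all ones, so the DC term is the sum of the 16 pixels.

C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def transform4x4(X):
    """Forward 4x4 integer core transform Y = C X C^T."""
    return matmul(matmul(C, X), transpose(C))

X = [[52, 55, 61, 66],
     [70, 61, 64, 73],
     [63, 59, 55, 90],
     [67, 61, 68, 104]]
coeffs = transform4x4(X)
mean = sum(sum(row) for row in X) / 16
# coeffs[0][0] is the sum of the 16 pixels, i.e. 16 * mean: R = 16.
```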
FIG. 2 shows a method for coding such a block Xsrc of N pixels or picture points belonging to a picture of a sequence of pictures according to a first embodiment of the invention.
At step 100, a prediction coefficient DCpred is determined for the block Xsrc. This prediction coefficient DCpred is able to predict the DC coefficient or continuous component of the block Xsrc. More specifically, DCpred is determined from the DC coefficients of reference blocks previously coded and reconstructed, noted as DCrec. In fact, the block Xsrc is a block predicted either spatially if it is in INTRA mode or temporally if it is in INTER mode from reference blocks previously coded and reconstructed. In the case of INTRA mode, the reference blocks are blocks spatially neighbouring the block Xsrc. They belong to the same picture as the block Xsrc. In the case of INTER mode, the reference blocks are blocks located in other pictures of the sequence than that to which the block Xsrc belongs.
At step 110, a prediction value Xpred(i) is determined for each pixel i of the block Xsrc, i varying from 0 to N−1. The values Xpred(i) are determined such that their average over the block Xsrc is proportional to the prediction coefficient DCpred determined in step 100, to within a proportionality coefficient R, i.e. DCpred=R*Avg(Xpred). The proportionality coefficient R depends on the first transform T used by the coding method in step 130.
At step 120, a residual value Xres(i) is calculated for each pixel i of the block Xsrc as follows: Xres(i)=Xsrc(i)−Xpred(i). The block composed of the residual values Xres(i) associated with each pixel i of the block Xsrc is called the residual block and is noted as Xres.
At step 130, the residual block Xres is transformed by a first transform T into a first block of coefficients AC(i), i=0, . . . , N−1. The coefficient AC(0) is the continuous component and corresponds to the DC coefficient.
At step 140, the coefficient AC(0) is replaced by the following difference DCres: (DCsrc−DCpred), where DCsrc is equal to R*Avg(Xsrc). Avg(Xsrc) is equal to the average of the picture data of the block Xsrc, i.e. Avg(Xsrc)=(Xsrc(0)+ . . . +Xsrc(N−1))/N.
At step 150, the block of coefficients AC(i), i=0, . . . , N−1, after the replacement step 140, is quantized into a block of coefficients q(AC(i)) then coded. According to a first embodiment, each coefficient of the block is divided by a predefined quantization step, for example set by a bitrate regulation module, or even set a priori. The quantized coefficients are then coded by entropy coding, for example using VLC (Variable Length Coding) tables.
According to a variant embodiment, this step implements the quantization and coding method described in the document ISO/IEC 14496-10 entitled Advanced Video Coding, and more specifically in sections 8.5 (relating to quantization) and 9 (relating to entropy coding). Those skilled in the art can also refer to the book by Iain E. Richardson entitled H.264 and MPEG-4 Video Compression, published in September 2003 by John Wiley & Sons. However, the invention is in no way linked to this standard, which is cited only as an example.
It should be noted that to code other blocks, the value DCrec=DCpred+dq(q(DCres)) is calculated for the current block Xsrc, where dq(.) is the inverse quantization function of the quantization function q(.) applied in step 150.
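Steps 100 to 150 can be sketched end to end on a toy one-dimensional block of N = 4 samples (illustrative: a Haar-like transform stands in for T, its DC term being the plain sum so that R = N = 4, and q/dq form a uniform quantizer of step 2):

```python
# Toy end-to-end sketch of steps 100-150 (illustrative model, not the
# standard): 4-point Haar-like transform T with DC = sum(x), so R = 4;
# q/dq are a uniform quantizer of step 2.

def T(x):
    """Toy 4-point transform: DC term is the plain sum, so R = 4."""
    return [sum(x), x[0] - x[1], x[2] - x[3], (x[0] + x[1]) - (x[2] + x[3])]

def q(c, step=2):
    return round(c / step)

def dq(level, step=2):
    return level * step

R = 4
xsrc = [10, 12, 9, 13]
dc_pred = 42                                   # step 100 (assumed given)
xpred = [dc_pred / R] * 4                      # step 110: R * Avg(xpred) = dc_pred
xres = [s - p for s, p in zip(xsrc, xpred)]    # step 120: residual values
coeffs = T(xres)                               # step 130: first transform
dc_src = R * (sum(xsrc) / len(xsrc))           # R * Avg(Xsrc)
coeffs[0] = dc_src - dc_pred                   # step 140: DCres
levels = [q(c) for c in coeffs]                # step 150: quantization
dc_rec = dc_pred + dq(levels[0])               # kept for coding further blocks
```

Note that in exact arithmetic coeffs[0] already equals dc_src − dc_pred before the replacement; step 140 becomes significant once the prediction values are rounded to integers, as in the H.264 modes detailed further on.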
A second embodiment of the coding method according to the invention is described in reference to FIG. 3. In this figure, the steps of the method identical to those of the method according to the first embodiment are identified using the same numerical references and are not described in further detail. The coding method according to this second embodiment comprises all the steps of the method described in reference to FIG. 2. The steps 100 to 140 are reiterated on several spatially neighbouring blocks Xsrc. In FIG. 4, 16 neighbouring blocks are represented. Each black square represents the continuous component of the block after the replacement step 140, i.e. the value DCsrc−DCpred.
The method also comprises a step 145 of transformation of the coefficients DCres=(DCsrc−DCpred) of the neighbouring blocks. For this purpose, in reference to FIG. 4, a block of coefficients (DCsrc−DCpred) is formed from the corresponding coefficients of the neighbouring blocks. This block of coefficients (DCsrc−DCpred) is transformed by a second transform into a second block of coefficients.
In step 150, the coefficients of the second block of coefficients and the coefficients of the neighbouring blocks other than the coefficient (DCsrc−DCpred), i.e. AC(i), i=1, . . . , N−1, are quantized then coded.
The coding methods described in reference to FIGS. 2 to 4 apply to any type of coding method. In the specific case of the H.264 video coding standard described in the document ISO/IEC 14496-10, as well as in the book by Iain E. Richardson entitled H.264 and MPEG-4 Video Compression, published in September 2003 by John Wiley & Sons, several coding modes are described to predict a block of pixels Xsrc. These different coding modes define the way in which, for a block Xsrc, the corresponding prediction block Xpred is determined. According to the invention, these modes are modified to take into account the constraint set in step 110, i.e. the values Xpred(i) are determined such that their average over the block Xsrc is proportional to the prediction coefficient DCpred determined in step 100, to within a proportionality coefficient R.
The H.264 standard defines the spatial prediction modes used to predict a block Xsrc in INTRA mode. According to the invention, the spatial prediction modes are modified such that Xpred=Xn−DCn/R+DCpred/R, where DCn=R*Avg(Xn) and where the Xn are reconstructed pixels neighbouring the block Xsrc, used in the context of the H.264 standard to predict the pixels of the block Xsrc. In this case, the constraint set in step 110 is necessarily verified.
Among these modes is the horizontal prediction mode shown in FIG. 5. In this figure, the block Xsrc is a block of 4×4 pixels shown in grey. In this mode, the pixels of the first line of the block Xsrc are predicted from pixel I, the pixels of the second line are predicted from pixel J, the pixels of the third line are predicted from pixel K and the pixels of the fourth line are predicted from pixel L, the pixels I, J, K and L belonging to the block situated to the left of the block Xsrc. According to the invention, the horizontal prediction mode is modified as follows:
- the pixels of the first line of the block Xsrc are predicted from the following value: I−(I+J+K+L+2)/4+DCLeft/R,
- the pixels of the second line are predicted from the value J−(I+J+K+L+2)/4+DCLeft/R,
- the pixels of the third line are predicted from the value K−(I+J+K+L+2)/4+DCLeft/R, and
- the pixels of the fourth line are predicted from the value L−(I+J+K+L+2)/4+DCLeft/R.
In this case DCpred=DCLeft. The average of the pixels I, J, K and L is approximated by (I+J+K+L+2)/4; adding 2 before dividing by 4 makes it possible to round to the nearest integer, the operation / being an integer division returning the integer part of the quotient. According to a variant, the horizontal prediction mode is modified as follows:
- the pixels of the first line of the block Xsrc are predicted from the following value: I−(I+J+K+L)/4+DCLeft/R,
- the pixels of the second line are predicted from the value J−(I+J+K+L)/4+DCLeft/R,
- the pixels of the third line are predicted from the value K−(I+J+K+L)/4+DCLeft/R, and
- the pixels of the fourth line are predicted from the value L−(I+J+K+L)/4+DCLeft/R.
In this variant, 2 is not added before dividing by 4.
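As a sketch, the modified horizontal mode (rounded variant) can be written as follows, with illustrative neighbour pixels and left-block DC coefficient, and // denoting the integer division written / in the text:

```python
# Sketch of the modified horizontal INTRA prediction (rounded variant).
# Neighbour pixels I, J, K, L and dc_left are illustrative values.

def modified_horizontal_pred(I, J, K, L, dc_left, R=16):
    """4x4 prediction block: each line r is the constant value
    P[r] - (I+J+K+L+2)//4 + dc_left//R, with P = (I, J, K, L)."""
    mean_n = (I + J + K + L + 2) // 4          # rounded neighbour average
    base = dc_left // R                        # DCpred/R with DCpred = DCLeft
    return [[p - mean_n + base] * 4 for p in (I, J, K, L)]

pred = modified_horizontal_pred(100, 102, 98, 104, dc_left=1632)
# Summing the 16 prediction pixels gives 1632 = DCLeft, i.e. the
# constraint DCpred = R * Avg(Xpred) holds (up to integer rounding).
```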
In the same way, in reference to FIG. 6, the H.264 vertical prediction mode is modified as follows:
- the pixels of the first column of the block Xsrc are predicted from the following value: A−(A+B+C+D+2)/4+DCUp/R,
- the pixels of the second column are predicted from the value B−(A+B+C+D+2)/4+DCUp/R,
- the pixels of the third column are predicted from the value C−(A+B+C+D+2)/4+DCUp/R, and
- the pixels of the fourth column are predicted from the value D−(A+B+C+D+2)/4+DCUp/R.
In this case DCpred=DCUp. The average of the pixels A, B, C and D is approximated by (A+B+C+D+2)/4; adding 2 before dividing by 4 makes it possible to round to the nearest integer, the operation / being an integer division returning the integer part of the quotient. According to a variant, the vertical prediction mode is modified as follows:
- the pixels of the first column of the block Xsrc are predicted from the following value: A−(A+B+C+D)/4+DCUp/R,
- the pixels of the second column are predicted from the value B−(A+B+C+D)/4+DCUp/R,
- the pixels of the third column are predicted from the value C−(A+B+C+D)/4+DCUp/R, and
- the pixels of the fourth column are predicted from the value D−(A+B+C+D)/4+DCUp/R.
In this variant, 2 is not added before dividing by 4.
Among these modes is also the DC prediction mode shown in FIG. 7. In this figure, the block Xsrc is a block of 4×4 pixels shown in grey. In this mode, all the pixels of the block Xsrc are predicted from the pixels A, B, C, D, I, J, K and L. According to the invention, the DC prediction mode is modified so that the pixels of the block Xsrc are predicted from the following value:
∀i, Xpred(i)=DCpred/R
In this case DCpred=(DCLeft+DCUp)/2 or DCpred=2*(DCLeft+DCUp+2)/4.
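The modified DC mode thus reduces to a single constant prediction value per block, as the following sketch shows (integer arithmetic and the sample DC coefficients are illustrative):

```python
# Sketch of the modified DC prediction mode: every pixel of the 4x4
# block is predicted by the same value DCpred/R, which trivially
# satisfies DCpred = R * Avg(Xpred). Sample DC coefficients are
# illustrative; the first DCpred variant of the text is used.

def modified_dc_mode(dc_left, dc_up, R=16, size=4):
    dc_pred = (dc_left + dc_up) // 2           # DCpred = (DCLeft + DCUp)/2
    value = dc_pred // R
    return [[value] * size for _ in range(size)], dc_pred

pred, dc_pred = modified_dc_mode(dc_left=1600, dc_up=1664)
```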
Among these modes also feature the diagonal prediction modes, such as the prediction mode shown in FIG. 8 known as the "diagonal down-right" mode. In this figure, the block Xsrc is a block of 4×4 pixels shown in grey. In this mode, all the pixels of the block Xsrc are predicted from the pixels A, B, C, D, I, J, K, L and M. According to the invention, the diagonal down-right prediction mode is modified so that the pixels of the block Xsrc are predicted from the following value:
Xpred(i)=Xn−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
where Xn is the prediction value defined by the H.264 standard.
For example, for the 4 pixels of the diagonal D0 of Xsrc:
Xpred(i)=M−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
For the 3 pixels of diagonal D1:
Xpred(i)=A−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
For the 2 pixels of diagonal D2:
Xpred(i)=B−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
For the pixel of diagonal D3:
Xpred(i)=C−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
For the 3 pixels of diagonal D4:
Xpred(i)=I−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
For the 3 pixels of diagonal D5:
Xpred(i)=J−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
For the pixel of diagonal D6:
Xpred(i)=K−(C+2B+3A+4M+3I+2J+K+8)/16+2*(DCLeft+DCUp+DCUpLeft+3)/(6*R);
In this case DCpred=2*(DCLeft+DCUp+DCUpLeft+3)/6. However, any linear combination of DCLeft, DCUp and DCUpLeft can be used for DCpred.
The other diagonal modes of the H.264 standard can be modified in the same way as the mode shown in FIG. 8, to the extent that Xpred=Xn−DCn/R+DCpred/R. The average of the pixels A, B, C, I, J, K and M is approximated by (C+2B+3A+4M+3I+2J+K+8)/16; adding 8 before dividing by 16 makes it possible to round to the nearest integer, the operation / being an integer division returning the integer part of the quotient. According to a variant, the diagonal down-right prediction mode is modified as follows:
For example, for the 4 pixels of the diagonal D0 of Xsrc:
Xpred(i)=M−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
For the 3 pixels of diagonal D1:
Xpred(i)=A−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
For the 2 pixels of diagonal D2:
Xpred(i)=B−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
For the pixel of diagonal D3:
Xpred(i)=C−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
For the 3 pixels of diagonal D4:
Xpred(i)=I−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
For the 3 pixels of diagonal D5:
Xpred(i)=J−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
For the pixel of diagonal D6:
Xpred(i)=K−(C+2B+3A+4M+3I+2J+K)/16+(DCLeft+DCUp+DCUpLeft)/(3*R);
The H.264 standard also defines the temporal prediction modes used to predict a block Xsrc in INTER mode. According to the invention, the temporal prediction modes are modified, in reference to FIG. 9, such that Xpred=MV(Xref)−DCmv/R+DCpred/R, where DCmv=R*Avg(MV(Xref)) and where the MV(Xref) are pixels reconstructed from the reference block(s) used in the context of the H.264 standard to predict the pixels of the block Xsrc. In this case, the constraint set in step 110 is necessarily verified.
For example:
DCpred=(xa*ya*DC1+xb*ya*DC2+xa*yb*DC3+xb*yb*DC4)/((xa+xb)*(ya+yb))
where:
- DC1, DC2, DC3 and DC4 are the DC coefficients of the reference blocks previously coded and reconstructed,
- (xa.ya) is the surface of Xsrc predicted by the reference block whose DC coefficient is equal to DC1,
- (xb.ya) is the surface of Xsrc predicted by the reference block whose DC coefficient is equal to DC2,
- (xa.yb) is the surface of Xsrc predicted by the reference block whose DC coefficient is equal to DC3, and
- (xb.yb) is the surface of Xsrc predicted by the reference block whose DC coefficient is equal to DC4.
In this particular case, the reference blocks in question belong to a reference picture other than that to which the block Xsrc belongs.
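Assuming DCpred is the surface-weighted average of the four DC coefficients suggested by the definitions above (an assumption; the exact weighting may differ), it can be sketched as:

```python
# Sketch of the INTER-mode prediction coefficient when the motion-
# compensated block straddles four reference blocks: DCpred is assumed
# here to be the surface-weighted average of their DC coefficients,
# with surface splits (xa, xb) and (ya, yb). Values are illustrative.

def inter_dc_pred(dc1, dc2, dc3, dc4, xa, xb, ya, yb):
    weighted = xa * ya * dc1 + xb * ya * dc2 + xa * yb * dc3 + xb * yb * dc4
    surface = (xa + xb) * (ya + yb)            # total surface of the block
    return weighted / surface

dc_pred = inter_dc_pred(1600, 1616, 1584, 1632, xa=3, xb=1, ya=2, yb=2)
```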
According to a particular embodiment, only the modified INTRA modes can be used with the non-modified INTER modes.
According to a particular embodiment, only the modified INTER modes can be used with the non-modified INTRA modes.
According to another variant, the modified INTRA and INTER modes are used.
The coding methods according to the preceding embodiments offer the advantage of avoiding the drift phenomenon when the stream of coded data that they generate is transcoded using the FPDT transcoding method. The prediction of the block Xsrc is slightly modified as concerns the DC coefficient, while it remains identical for the AC coefficients with respect to the prediction as defined in the original standard, namely the H.264 standard. Thus, the performance in terms of compression rate is only slightly impacted, while in the case of transcoding by FPDT, the quality of the transcoded stream is improved by suppression of the drift effect.
Moreover, such methods predict the DC coefficients independently of the AC coefficients, that is, only from the DC coefficients of reference blocks previously coded and reconstructed, said reference blocks belonging to reference pictures in the case of INTER mode or to the current picture in the case of INTRA mode. Another advantage is that this enables the reconstruction of a sequence of low resolution pictures, without applying any inverse (DCT) transform, by only reconstructing the DC coefficients. In the standard case, when the AC and DC coefficients are predicted together, the reconstruction of a low resolution picture from the DC coefficients alone is only possible on condition that the AC coefficients are also decoded.
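This low resolution reconstruction can be sketched as follows (a hypothetical left-to-right DC prediction chain and an illustrative uniform dequantizer; one thumbnail pixel DCrec/R is produced per 4×4 block without any inverse transform):

```python
# Sketch of DC-only thumbnail reconstruction (illustrative). The first
# block carries an absolute DC value; each following block of the row
# is DC-predicted from its left neighbour (hypothetical chaining).

def thumbnail_row(dc_intra, dc_res_levels, step=8, R=16):
    """One thumbnail pixel per 4x4 block, from DC coefficients only."""
    dc_rec = dc_intra
    pixels = [dc_rec // R]
    for level in dc_res_levels:
        dc_rec = dc_rec + level * step         # DCrec = DCpred + dq(q(DCres))
        pixels.append(dc_rec // R)
    return pixels

row = thumbnail_row(1600, [2, -1, 4])
```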
In reference to FIG. 10, the invention relates to a method for decoding a stream of coded data representative of a block Xsrc of a picture belonging to a sequence of pictures with a view to the reconstruction of this block Xsrc.
At step 200, a prediction coefficient DCpred is determined for the block Xsrc. This prediction coefficient DCpred is able to predict the DC coefficient, or continuous component, of the block Xsrc. More specifically, DCpred is determined from the DC coefficients of reference blocks previously coded and reconstructed, noted as DCrec. In fact, the block Xsrc is a block predicted either spatially if it is in INTRA mode or temporally if it is in INTER mode from reference blocks previously coded and reconstructed. In the case of INTRA mode, the reference blocks are blocks spatially neighbouring the block Xsrc; they therefore belong to the same picture as the block Xsrc. In the case of INTER mode, the reference blocks are blocks located in other pictures of the sequence than that to which the block Xsrc belongs.
In step 210, the coded data {bk} representative of the block Xsrc are decoded to reconstruct the coefficients q(AC(i)). Step 210 is an entropy decoding step. It corresponds to the entropy coding step 150 of the coding method.
In step 220, the coefficients are dequantized by inverse quantization into dequantized coefficients dq(q(AC(i))). This step corresponds to the quantization step 150 of the coding method. More specifically, it implements the inverse of the quantization applied in step 150 of the coding method.
In step 230, the dequantized coefficients dq(q(AC(i))) are transformed into residual values Xresid by the inverse of the transform applied in step 130 of the coding method. As an example, if step 130 of the coding method implements a DCT transform, then step 230 implements an IDCT (Inverse Discrete Cosine Transform).
Naturally the invention is in no way limited by the type of transform used. Other transforms can be used, for example the Hadamard transform.
In step 240, a prediction value Xpred(i) is determined for each pixel i of the block Xsrc, i varying from 0 to N−1. The values Xpred(i) are determined such that their average over the block Xsrc is proportional to the prediction coefficient DCpred determined in step 200, to within a proportionality coefficient R. The proportionality coefficient R depends on the transform T−1 used by the decoding method in step 230, and thus consequently on the transform T used by the coding method in step 130.
In step 250, a picture data Xrec(i) is reconstructed for each pixel i of the block Xsrc by summing the prediction value Xpred(i) and the residual value Xresid(i) corresponding to the pixel i.
It should be noted that to reconstruct other blocks, the value DCrec=DCpred+dq(q(AC(0))) is calculated for the current block Xsrc.
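Steps 200 to 250 can be sketched on a toy one-dimensional block of N = 4 samples (illustrative: the inverse of a 4-point Haar-like transform whose DC term is the plain sum, so R = 4, and a uniform dequantizer of step 2):

```python
# Toy end-to-end sketch of the decoding steps 200-250 (illustrative
# model, not the standard): inverse of a 4-point Haar-like transform
# with DC = sum, so R = 4; dq is a uniform dequantizer of step 2.

def dq(level, step=2):
    return level * step

def IT(c):
    """Inverse of the toy transform [sum, x0-x1, x2-x3, (x0+x1)-(x2+x3)]."""
    s, d01, d23, dhl = c
    a = (s + dhl) / 2                          # x0 + x1
    b = (s - dhl) / 2                          # x2 + x3
    return [(a + d01) / 2, (a - d01) / 2, (b + d23) / 2, (b - d23) / 2]

R = 4
dc_pred = 42                                   # step 200 (assumed given)
levels = [1, -1, -2, 0]                        # step 210: decoded coefficients
coeffs = [dq(l) for l in levels]               # step 220: inverse quantization
xres = IT(coeffs)                              # step 230: inverse transform
xpred = [dc_pred / R] * 4                      # step 240: R * Avg(xpred) = dc_pred
xrec = [p + r for p, r in zip(xpred, xres)]    # step 250: reconstruction
dc_rec = dc_pred + coeffs[0]                   # kept for reconstructing other blocks
```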
The decoding method has the advantage of enabling the reconstruction of a sequence of pictures at low resolution by only reconstructing the DC coefficients. In the standard case, when the AC and DC coefficients are predicted together, the reconstruction of a low resolution picture from the DC coefficients alone is only possible on condition that the AC coefficients are also decoded. In fact, in the present case, the DC coefficients are predicted independently of the AC coefficients, that is, only from the DC coefficients of reference blocks previously reconstructed.
The invention also relates to a coding device 12 described with reference to FIG. 11. The coding device 12 receives at its input pictures I belonging to a sequence of pictures. Each picture is divided into blocks of pixels with each of which at least one picture data is associated. The coding device 12 notably comprises a calculation module 1200 capable of subtracting, pixel by pixel, from a current block Xsrc, according to step 120 of the coding method, a prediction block Xpred to generate a block of residual picture data, or residual block, noted Xres. It further comprises a module 1202 capable of transforming then quantizing the residual block Xres into quantized data. The transform T is for example a discrete cosine transform (or DCT). The module 1202 notably implements step 130 of the coding method. It also implements the replacement step 140 and the quantization step 150. The coding device 12 also comprises an entropy coding module 1204 able to code the quantized data into a stream F of coded data. The entropy coding module 1204 implements the coding step 150 of the coding method. The coding device 12 also comprises a module 1206 carrying out the inverse operation of the module 1202. The module 1206 carries out an inverse quantization IQ followed by an inverse transform IT. The module 1206 is connected to a calculation module 1208 able to add, pixel by pixel, the block of data from the module 1206 and the prediction block Xpred to generate a block of reconstructed picture data that is stored in a memory 1210.
The coding device 12 further comprises a motion estimation module 1212 capable of estimating at least one motion vector between the block Xsrc and a reference picture stored in the memory 1210, this picture having previously been coded then reconstructed. According to a variant, the motion estimation can be carried out between the current block Xsrc and the original reference picture. According to a method known to those skilled in the art, the motion estimation module 1212 searches the reference picture for a motion vector so as to minimize the error calculated between the current block Xsrc and a reference block Xref in the reference picture identified using said motion vector.
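The search carried out by the motion estimation module 1212 can be sketched as an exhaustive block-matching search. This is one well-known method, not the only one covered by the description; the function name `full_search`, the sum-of-absolute-differences error measure and the search radius are assumptions made for the example.

```python
import numpy as np

def full_search(cur_block, ref_pic, top, left, radius=4):
    """Exhaustive block matching: return the motion vector (dy, dx) that
    minimizes the sum of absolute differences between cur_block (located at
    (top, left) in the current picture) and the candidate reference block."""
    n = cur_block.shape[0]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            # skip candidates falling outside the reference picture
            if 0 <= y and 0 <= x and y + n <= ref_pic.shape[0] and x + n <= ref_pic.shape[1]:
                cost = np.abs(cur_block - ref_pic[y:y + n, x:x + n]).sum()
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

The vector returned identifies the reference block Xref used for the temporal prediction.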
The motion data are transmitted by the motion estimation module 1212 to a decision module 1214 able to select a coding mode for the block Xsrc from a predefined set of coding modes. The term "motion data" is to be understood in the widest sense, i.e. a motion vector and possibly a reference picture index identifying the picture in the sequence of pictures. The coding modes of the predefined set are defined such that the constraint defined in step 110 of the coding method is verified.
The chosen coding mode is for example the one that minimizes a bitrate-distortion type criterion. However, the invention is not restricted to this selection method, and the chosen mode can be selected according to another criterion, for example an a priori type criterion. The coding mode selected by the decision module 1214, as well as the motion data, for example the item or items of motion data in the case of the temporal prediction or INTER mode, are transmitted to a prediction module 1216. The coding mode and possibly the item or items of motion data selected are also transmitted to the entropy coding module 1204 to be coded in the stream F. The prediction module 1216 determines the prediction block Xpred, according to steps 100 and 110 of the coding method, notably from reference pictures Ir previously reconstructed and stored in the memory 1210, and from the coding mode and possibly the item or items of motion data selected by the decision module 1214. It is noted that the coefficient DCrec of the block Xsrc is also reconstructed and stored in the memory 1210 with a view to the reconstruction of other blocks. The modules 1200, 1202, 1204, 1206, 1210 and 1214 form a group of modules called coding modules.
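The bitrate-distortion criterion mentioned above can be sketched as a Lagrangian cost D + lambda * R evaluated per candidate mode. This is only an illustration of the selection principle; the dictionary layout, the numerical values and the function name `select_mode` are assumptions made for the example.

```python
def select_mode(candidates, lam):
    """Sketch of the decision module 1214: pick the coding mode that
    minimizes the Lagrangian cost distortion + lambda * rate."""
    return min(candidates, key=lambda c: c["distortion"] + lam * c["rate"])

# Hypothetical candidate modes with made-up distortion/rate figures.
modes = [
    {"name": "INTRA", "distortion": 120.0, "rate": 40.0},
    {"name": "INTER", "distortion": 90.0, "rate": 55.0},
]
print(select_mode(modes, lam=1.0)["name"])  # -> INTER (cost 145 vs 160)
```

Raising lambda favours cheaper modes in rate, lowering it favours lower distortion, which is how the criterion trades bitrate against quality.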
The invention further relates to a decoding device 13 described with reference to FIG. 12. The decoding device 13 receives at its input a stream F of coded data representative of a sequence of pictures. The stream F is for example generated and transmitted by a coding device 12. The decoding device 13 comprises an entropy decoding module 1300 able to generate decoded data, e.g. coding modes and decoded data relating to the content of the pictures. For this purpose, the entropy decoding module 1300 implements step 210 of the decoding method.
The decoding device 13 further comprises a motion data reconstruction module. According to a first embodiment, the motion data reconstruction module is the entropy decoding module 1300, which decodes a part of the stream F representative of said motion vectors.
According to a variant not shown in FIG. 12, the motion data reconstruction module is a motion estimation module. This solution for reconstructing motion data in the decoding device 13 is known as "template matching".
The decoded data relating to the content of the pictures, which correspond to the quantized data from the module 1202 of the coding device 12, are then transmitted to a module 1302 able to carry out an inverse quantization followed by an inverse transform. The module 1302 notably implements the inverse quantization step 220 and the inverse transform step 230 of the decoding method. The module 1302 is identical to the module 1206 of the coding device 12 that generated the coded stream F. The module 1302 is connected to a calculation module 1304 able to add, pixel by pixel, according to step 250 of the decoding method, the block from the module 1302 and a prediction block Xpred to generate a block of reconstructed picture data that is stored in a memory 1306. The decoding device 13 also comprises a prediction module 1308 identical to the prediction module 1216 of the coding device 12. The prediction module 1308 determines the prediction block Xpred, according to steps 200 and 240 of the decoding method, notably from reference pictures Ir previously reconstructed and stored in the memory 1306, from DC coefficients of reference blocks also stored in the memory 1306, and from the coding mode and possibly the motion data for the current block Xsrc decoded by the entropy decoding module 1300. It is to be noted that the coefficient DCrec of the block Xsrc is also reconstructed and stored in the memory 1306 with a view to the reconstruction of other blocks. The modules 1302, 1304 and 1306 form a group of modules called the reconstruction module.
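The chain formed by the module 1302 (inverse quantization then inverse transform) and the calculation module 1304 (pixel-by-pixel addition of the prediction) can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: the 4x4 block size, the orthonormal DCT and the uniform scalar dequantizer are choices made for the example.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def decode_block(quantized, x_pred, q_step, n=4):
    """Sketch of modules 1302/1304 of the decoding device."""
    d = dct_matrix(n)
    coeffs = quantized * q_step   # inverse quantization (step 220)
    x_res = d.T @ coeffs @ d      # inverse transform (step 230)
    return x_res + x_pred         # pixel-by-pixel addition (step 250)
```

When the decoded residual is entirely zero, the reconstructed block reduces to the prediction block Xpred, as expected from the addition in the calculation module 1304.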
In FIGS. 11 and 12, the modules shown are functional units that may or may not correspond to physically distinguishable units. For example, these modules, or some of them, can be grouped together in a single component, or constitute functions of the same software. Conversely, some modules may be composed of separate physical entities. As an example, the module 1202 can be implemented by separate components, one carrying out the transform and the other the quantization.
Obviously, the invention is not limited to the embodiment examples mentioned above.
In particular, those skilled in the art may apply any variant to the stated embodiments and combine them to benefit from their various advantages. Notably, the invention is in no way limited to a specific picture coding standard. The only condition is that the prediction modes verify the following constraints:
Case INTRA: Xpred = Xn − DCn/R + DCpred/R
Case INTER: Xpred = MV(Xref) − DCmv/R + DCpred/R
where DCpred is determined from the DC coefficients of reference blocks previously reconstructed.
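The two constraints above can be written directly as pixel-wise operations. This is a sketch under stated assumptions: the function names are hypothetical, and R is taken here as the scale factor linking a block's DC coefficient to its pixel mean (e.g. R equal to the block size for an orthonormal DCT), which is an interpretation made for the example.

```python
import numpy as np

def intra_pred(x_n, dc_n, dc_pred, r):
    """Case INTRA: Xpred = Xn - DCn/R + DCpred/R, applied pixel by pixel.
    x_n is the neighbouring reconstructed block, dc_n its DC coefficient."""
    return x_n - dc_n / r + dc_pred / r

def inter_pred(mv_block, dc_mv, dc_pred, r):
    """Case INTER: Xpred = MV(Xref) - DCmv/R + DCpred/R, where mv_block is
    the motion-compensated reference block MV(Xref) and dc_mv its DC."""
    return mv_block - dc_mv / r + dc_pred / r
```

Subtracting DCn/R (resp. DCmv/R) removes the mean contribution of the reference block and adding DCpred/R re-injects the mean predicted from DC coefficients alone, which is what keeps the DC prediction independent of the AC coefficients.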