RELATED APPLICATION

This is a continuation-in-part of U.S. patent application Ser. No. 08/219,472, filed Mar. 29, 1994, now U.S. Pat. No. 5,473,380.
BACKGROUND OF THE INVENTION

The present invention relates to coding and decoding of a picture signal for transmission and, more particularly, is directed to matching the type of predictive coding applied to pictures of the picture signal.
In, for example, a teleconferencing system or a video telephone system, moving picture signals are compressed and encoded by taking advantage of intra-frame and inter-frame correlation so that they can be more efficiently transmitted over a communication channel to a remote location.
Intra-frame correlation can be utilized by an orthogonal transformation, such as a discrete cosine transformation (DCT).
Inter-frame correlation can be utilized by predictive encoding between successive pictures. As used herein, a picture generally refers to an image represented by a frame. When the fields of a frame are coded in a non-interlaced manner, that is, separately, each field may be referred to as a picture.
As shown in FIG. 1A, for example, frame pictures PC1, PC2 and PC3 are generated at time points t1, t2 and t3. As shown by shading in FIG. 1B, the difference between the frame pictures PC1 and PC2 is obtained as difference picture data PC12, and the difference between the frame pictures PC2 and PC3 is obtained as difference picture data PC23. Since there is a fairly small change between signals of temporally neighboring frames, transmission of only the difference picture data utilizes the transmission channel more efficiently than transmission of the original pictures. That is, using the difference picture data as encoded picture signals reduces the amount of data to be transmitted.
However, if only the difference signals are transmitted, the original picture cannot be restored. It is therefore advantageous to occasionally transmit a picture which is not predictively encoded, both to serve as a reference for difference picture data and because doing so is sometimes more efficient than transmitting the picture as a predictively encoded picture.
Pictures which are encoded utilizing only intra-frame correlation, and not inter-frame correlation, are referred to herein as intra-pictures or I-pictures.
Pictures which are encoded with predictive encoding relative to one previously encoded picture are referred to herein as predictive pictures or P-pictures. The previously encoded picture may be an I-picture or a P-picture, and temporally precedes the P-picture.
Pictures which are encoded with predictive encoding relative to at most two pictures, a temporally preceding and a temporally succeeding picture, are referred to herein as bi-directionally predictive coded pictures or B-pictures. The two pictures may each be an I-picture or a P-picture. When both are used, the mean value of the two pictures is obtained and used as a reference picture for the picture to be encoded.
A series of pictures may be considered as groups of pictures having a predetermined number of frames such as F1 . . . F17. The luminance and chrominance picture signals of the leading frame F1 are encoded as an I-picture, the picture signals of the second frame F2 are encoded as a B-picture, and the picture signals of the third frame F3 are encoded as a P-picture. The fourth and the following frames F4 to F17 are encoded alternately as B-pictures and P-pictures. FIG. 2A shows the reference pictures used for encoding P-pictures, while FIG. 2B shows the reference pictures used for encoding B-pictures.
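The picture-type pattern for the 17-frame group described above can be expressed compactly. This is a sketch of that particular example, not a normative rule; the function name is hypothetical.

```python
# Sketch of the I B P B P ... pattern for the group of pictures F1..F17:
# F1 is an I-picture, even-numbered frames are B-pictures, and odd-numbered
# frames from F3 onward are P-pictures.

def picture_type(n):
    """Coding type for frame Fn (1-indexed) within the group of pictures."""
    if n == 1:
        return 'I'
    return 'B' if n % 2 == 0 else 'P'

sequence = [picture_type(n) for n in range(1, 18)]
assert sequence[0] == 'I'
assert sequence[1:5] == ['B', 'P', 'B', 'P']
assert sequence.count('B') == 8 and sequence.count('P') == 8
```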
As shown in FIGS. 3A and 3B, there are four methods for encoding the macro-blocks (discussed below) of a picture. When multiple methods are suitable, the method which will give the smallest amount of encoded data is employed on a macro-block by macro-block basis. Blocks F1 to F5 in FIG. 3A represent data for frames of moving picture signals, whereas blocks F1X to F5X in FIG. 3B represent data for encoded frames. The solid line arrows in FIG. 3A show the frames to which motion vectors x1 . . . x6 relate.
The first method, shown as SP1, is to not use predictive encoding, that is, to use only intra-frame correlation. This is suitable for any macro-blocks of an I-picture, a P-picture and a B-picture. In other words, if less encoded data is produced without predictive encoding, then this method is selected.
The second method, shown as SP2, is to predictively encode relative to a picture which temporally succeeds the current picture, referred to as backward predictive encoding. The third method, shown as SP3, is to predictively encode relative to a picture which temporally precedes the current picture, referred to as forward predictive encoding. The second method is suitable for macro-blocks of only B-pictures. The third method is suitable for macro-blocks of P-pictures and B-pictures.
The fourth method, shown as SP4, is to predictively encode relative to the mean value of two pictures, one temporally preceding and one temporally succeeding the current picture. This method is suitable for macro-blocks of only B-pictures.
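The per-macro-block selection among the four methods SP1 to SP4 can be sketched as a minimum over estimated encoded sizes. The bit counts and names below are illustrative stand-ins for real encoder measurements, not values from the patent.

```python
# Sketch of per-macro-block method selection: among the methods allowed for
# the picture type, pick the one yielding the fewest encoded bits.

ALLOWED = {
    'I': ['SP1'],                       # intra only
    'P': ['SP1', 'SP3'],                # intra or forward prediction
    'B': ['SP1', 'SP2', 'SP3', 'SP4'],  # intra, backward, forward, bi-dir
}

def select_method(picture_type, estimated_bits):
    """estimated_bits: dict mapping method name -> encoded size estimate."""
    candidates = ALLOWED[picture_type]
    return min(candidates, key=lambda m: estimated_bits[m])

bits = {'SP1': 900, 'SP2': 400, 'SP3': 350, 'SP4': 300}
assert select_method('I', bits) == 'SP1'   # I-pictures must use intra coding
assert select_method('P', bits) == 'SP3'   # SP2/SP4 not allowed for P
assert select_method('B', bits) == 'SP4'   # smallest overall
```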
The encoding sequence will now be described.
The first frame F1 is encoded as an I-picture using the first method SP1 so that it is directly transmitted over a transmission channel as encoded data F1X.
The third frame F3 is encoded as a P-picture. When the third method SP3, forward predictive coding, is used for a macro-block, difference signals from the temporally preceding frame F1, used as the reference picture as indicated by a broken-line arrow SP3, and a motion vector x3 between the reference picture F1 and the current picture F3, are calculated and encoded as data F3X for that macro-block. Alternatively, if the first method SP1 produces a smaller amount of encoded data for a macro-block of the P-picture being encoded, the data of the original frame F3 are directly utilized as the transmission data F3X for that macro-block.
The second frame F2 is encoded as a B-picture.
When the fourth method SP4 is used to encode a macro-block of the frame F2, the difference between the current frame F2 and the mean value of the temporally preceding frame F1 and the temporally succeeding frame F3 is calculated on a pixel by pixel basis. The difference data and the motion vectors x1 and x2 are encoded as data F2X.
When the first processing method SP1 is used to encode a macro-block of the frame F2, the data of the original frame F2 forms the encoded data F2X.
When one of the second or third methods SP2, SP3 is used to encode a macro-block of the frame F2, one of the difference between the temporally succeeding frame F3 and the current frame F2, and the difference between the temporally preceding frame F1 and the current frame F2 is calculated. The difference data and one of the motion vectors x1, x2 are encoded as the data F2X.
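The three predictive options for a B-picture macro-block described above can be sketched as follows. Motion compensation is omitted for brevity, and the frames are simple sample lists; this is an assumption-laden illustration, not the patent's implementation.

```python
# Sketch of forward, backward and bi-directional difference data for a
# B-picture macro-block, computed pixel by pixel.

def forward_diff(curr, prev):
    """Difference from the temporally preceding reference (method SP3)."""
    return [c - p for c, p in zip(curr, prev)]

def backward_diff(curr, nxt):
    """Difference from the temporally succeeding reference (method SP2)."""
    return [c - n for c, n in zip(curr, nxt)]

def bidirectional_diff(curr, prev, nxt):
    """Difference from the mean of both references (method SP4)."""
    return [c - (p + n) / 2 for c, p, n in zip(curr, prev, nxt)]

f1, f2, f3 = [10, 20], [12, 21], [14, 22]
assert forward_diff(f2, f1) == [2, 1]
assert backward_diff(f2, f3) == [-2, -1]
assert bidirectional_diff(f2, f1, f3) == [0.0, 0.0]  # f2 is the exact mean
```

When the motion is smooth, the bi-directional difference is often the smallest of the three, which is why method SP4 frequently wins the size comparison.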
The frame F4 for the B-picture and the frame F5 for the P-picture are processed in a similar manner as described above to generate transmitted data F4X and F5X.
FIG. 4 illustrates an arrangement for encoding and decoding moving picture signals in accordance with the above-described predictive encoding scheme. As shown in FIG. 4, an encoding device 1 encodes input picture signals and transmits the encoded signals to a recording medium 3 as a transmission channel for recording. A decoding device 2 reproduces the signals recorded on the recording medium 3 and decodes these as output signals.
The encoding device 1 includes an input terminal 10, a pre-processing circuit 11, A/D converters 12 and 13, a frame memory 14 including a luminance signal frame memory 15 and a color difference signal frame memory 16, a format converting circuit 17 and an encoder 18.
Input terminal 10 is adapted to receive a video signal VD and to supply the signal VD to pre-processing circuit 11 which functions to separate the video signal VD into luminance signals and color signals, herein chrominance or color difference signals, that are applied to analog-to-digital (A/D) converters 12 and 13, respectively. The video signals, digitized by analog-to-digital conversion by the A/D converters 12 and 13, are supplied to frame memory 14 having memories 15, 16 which function to store the luminance signals and the color difference signals, respectively, and to read out the signals stored therein to format converting circuit 17.
The converter 17 is operative to convert frame format signals stored in the frame memory section 14 into block format signals. As shown in FIG. 5A, pictures are stored in the frame memory section 14 as frame-format data having V lines each consisting of H dots. The converting circuit 17 divides each frame into N slices, each slice comprising a multiple of 16 lines. As shown in FIG. 5B, the converter 17 divides each slice into M macro-blocks. As shown in FIG. 5C, each macro-block represents luminance signals Y corresponding to 16×16 pixels or dots, and associated chrominance Cr, Cb signals. These luminance signals are subdivided into blocks Y1 to Y4, each consisting of 8×8 dots. The 16×16 dot luminance signals are associated with 8×8 dot Cb signals and 8×8 dot Cr signals. The converter 17 is also operative to supply the block format signals to the encoder 18, which is described in detail below with reference to FIG. 6.
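The frame-to-block format conversion above can be sketched as plain list slicing. This is a minimal model assuming a frame whose dimensions are multiples of 16; the function names are hypothetical.

```python
# Sketch of the frame-to-block format conversion: a V x H frame is split into
# 16-line slices, each slice into 16x16 macro-blocks, and each macro-block's
# luminance into four 8x8 blocks Y1..Y4.

def to_macroblocks(frame):
    """frame: list of rows (each a list of luminance samples)."""
    v, h = len(frame), len(frame[0])
    assert v % 16 == 0 and h % 16 == 0
    macroblocks = []
    for top in range(0, v, 16):                  # one slice per 16 lines
        for left in range(0, h, 16):
            mb = [row[left:left + 16] for row in frame[top:top + 16]]
            macroblocks.append(mb)
    return macroblocks

def split_y(mb):
    """Split a 16x16 macro-block into Y1 (top-left), Y2, Y3, Y4 (8x8 each)."""
    y1 = [row[:8] for row in mb[:8]]
    y2 = [row[8:] for row in mb[:8]]
    y3 = [row[:8] for row in mb[8:]]
    y4 = [row[8:] for row in mb[8:]]
    return y1, y2, y3, y4

frame = [[(r * 32 + c) % 256 for c in range(32)] for r in range(16)]
mbs = to_macroblocks(frame)
assert len(mbs) == 2                    # a 16x32 frame yields two macro-blocks
y1, y2, y3, y4 = split_y(mbs[0])
assert len(y1) == 8 and len(y1[0]) == 8
```

The half-resolution Cb and Cr blocks are each a single 8×8 block per macro-block, so they need no further subdivision.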
The encoder 18 operates to encode the block format signals and to supply the encoded signals as a bitstream over a transmission channel for recording on the recording medium 3.
The decoding device 2 includes a decoder 31, a format converting circuit 32, a frame memory section 33 including a luminance signal frame memory 34 and a color difference signal frame memory 35, digital-to-analog converters 36 and 37, a post-processing circuit 38 and an output terminal 30.
The decoder 31 is operative to reproduce encoded data from the recording medium 3 and to decode the encoded data, as described in detail below with reference to FIG. 9, and to supply decoded data signals to format converting circuit 32 which is operative to convert the decoded data signals into frame format data signals and to supply the frame format data signals as luminance signals and color difference signals to the memory 33. The memories 34, 35 of the memory 33 function to store the luminance and chrominance signals, respectively, and to apply these signals to D/A converters 36 and 37, respectively. The analog signals from converters 36, 37 are synthesized by a post-processing circuit 38 which functions to form output picture signals and to output them to output terminal 30, and thence to a display unit, such as a CRT, not shown, for display.
FIG. 6 illustrates the encoder 18 shown in FIG. 4.
Generally, the encoder 18 stores three pictures, the current picture and the pictures temporally preceding and succeeding the current picture. Based on the sequential position of the current picture in the group of pictures, the picture coding type (I, P or B) is selected for each picture. The picture type sequence is determined by a user using picture type input device 65, independent of the pictures applied to an input terminal 49.
The encoder 18 also chooses one of frame-based and field-based predictive encoding, as will be explained with reference to FIG. 7, and further chooses one of frame-based and field-based DCT encoding, as will be explained with reference to FIG. 8. For each picture, appropriate motion vectors are obtained and the picture is predictively encoded relative to zero, one or two previously encoded pictures which have been locally decoded and which are referred to as reference pictures, to form a difference data signal. The difference data signal is orthogonally transformed into blocks of coefficient data which are quantized, variable length encoded and transmitted as encoded data.
At the encoder 18, the quantized data are dequantized, inverse orthogonally transformed, and stored as the reference pictures. The predictive encoding applies the motion vector(s) obtained for the current picture to the reference picture(s) to produce a prediction picture which is subtracted from the current picture to yield the difference data.
The elements of the encoder 18 will now be explained in detail.
Picture data for encoding is supplied macro-block by macro-block to the input terminal 49 and thence to a motion vector detection circuit 50 which is operative to process the picture data of respective frames as I-pictures, P-pictures or B-pictures, in accordance with a predetermined sequence for each group of pictures, as shown, for example, in FIGS. 2A and 2B. The circuit 50 applies the picture data of the current frame to a frame memory 51 having frame memories 51a, 51b, 51c used for storing a temporally preceding picture, the current picture and a temporally succeeding picture, respectively.
More specifically, the frames F1, F2, F3 are stored in the memories 51a, 51b, 51c, respectively. Then the picture stored in memory 51c is transferred to memory 51a. The frames F4, F5 are stored in the memories 51b, 51c, respectively. The operations of transferring the picture in memory 51c to memory 51a and storing the next two pictures in memories 51b, 51c are repeated for the remaining pictures in the group of pictures.
After the pictures are read into the memory and temporarily stored, they are read out and supplied to a prediction mode changeover circuit 52 which is adapted to process the current picture for one of frame based and field based predictive encoding. After processing the first frame picture data in a group of pictures as an I-picture and before processing the second frame picture as a B-picture, the motion vector detection circuit 50 processes the third frame P-picture. The processing sequence is different from the sequence in which the pictures are supplied because the B-picture may involve backward prediction, so subsequent decoding may require that the P-picture temporally succeeding the B-picture have been previously decoded.
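The display-to-coding order permutation described above can be sketched as a small reordering routine: each B-picture is held until the reference picture that succeeds it has been coded. This is an illustrative model with hypothetical names, assuming the I B P B P pattern of FIGS. 2A and 2B.

```python
# Sketch of the display-to-coding reordering: I1 B2 P3 B4 P5 in display order
# becomes I1 P3 B2 P5 B4 in coding order, since each B-picture needs its
# succeeding reference picture to be coded first.

def coding_order(display):
    """display: list of (name, type) tuples in display order."""
    out, pending_b = [], []
    for pic in display:
        if pic[1] == 'B':
            pending_b.append(pic)   # hold until its backward reference is coded
        else:
            out.append(pic)         # I- or P-picture is coded immediately
            out.extend(pending_b)
            pending_b = []
    return out + pending_b

display = [('F1', 'I'), ('F2', 'B'), ('F3', 'P'), ('F4', 'B'), ('F5', 'P')]
names = [n for n, _ in coding_order(display)]
assert names == ['F1', 'F3', 'F2', 'F5', 'F4']
```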
For each macro-block, the motion vector detection circuit 50 calculates the sum of absolute values of prediction errors for the frame prediction mode and the sum of absolute values of prediction errors for the field prediction mode, and supplies these sums to the prediction decision circuit 54. The prediction decision circuit 54 compares these sums, selects the frame prediction mode or the field prediction mode in accordance with the smaller of the two values, and provides the selected mode to the prediction mode changeover circuit 52.
If the frame prediction mode is selected, the prediction mode changeover circuit 52 outputs the four luminance blocks Y1 to Y4 and the two chrominance or color difference blocks Cb, Cr of each macro-block received from the motion vector detection circuit 50 without processing. As shown in FIG. 7A, odd or first field line data, indicated by solid lines, and even or second field line data, indicated by dashed lines, alternate in each luminance and color difference block as received from the motion vector detection circuit 50. In FIG. 7A, a indicates units for motion compensation. In the frame prediction mode, motion compensation is performed with four luminance blocks (macro-blocks) as a unit and a single motion vector is associated with the four luminance blocks Y1 to Y4.
If the field prediction mode is selected, the prediction mode changeover circuit 52 processes the signals received from the motion vector detection circuit 50 so that each of the four luminance blocks comprises data from a single field and the two color difference blocks have non-interlaced odd and even field data. Specifically, as shown in FIG. 7B, the luminance blocks Y1 and Y2 have odd-field data and the luminance blocks Y3 and Y4 have even-field data, while the upper halves of the color difference blocks Cb, Cr represent odd field color difference data for the luminance blocks Y1 and Y2 and the lower halves of the color difference blocks Cb, Cr represent even field color difference data for the luminance blocks Y3 and Y4. In FIG. 7B, b indicates units for motion compensation. In the field prediction mode, motion compensation is performed separately for the odd-field blocks and even-field blocks, so that one motion vector is associated with the two luminance blocks Y1 and Y2 and another motion vector is associated with the two luminance blocks Y3 and Y4.
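The field-mode line rearrangement above amounts to de-interlacing the macro-block's lines. The sketch below assumes the odd (first) field occupies the even row indices of the interlaced macro-block; the function name is hypothetical.

```python
# Sketch of the field prediction mode rearrangement: a 16-line interlaced
# macro-block is reorganized so the first 8 lines (Y1, Y2) carry odd-field
# data and the last 8 lines (Y3, Y4) carry even-field data.

def to_field_macroblock(mb):
    """mb: 16 interlaced rows; returns 16 rows, odd field first."""
    odd = mb[0::2]    # first-field lines (row indices 0, 2, ..., 14)
    even = mb[1::2]   # second-field lines (row indices 1, 3, ..., 15)
    return odd + even

# Rows tagged 'o' (odd field) and 'e' (even field) to make the shuffle visible.
mb = [['o%d' % i] if i % 2 == 0 else ['e%d' % i] for i in range(16)]
field_mb = to_field_macroblock(mb)
assert field_mb[0] == ['o0'] and field_mb[7] == ['o14']
assert field_mb[8] == ['e1'] and field_mb[15] == ['e15']
```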
The prediction mode changeover circuit 52 supplies the current picture, as processed for frame based or field based predictive encoding, to arithmetic unit 53 of FIG. 6. The arithmetic unit 53 functions to perform one of intra-picture prediction, forward prediction, backward prediction or bi-directional prediction. A prediction decision circuit 54 is adapted to select the best type of prediction in dependence upon the prediction error signals associated with the current picture signals.
The motion vector detection circuit 50 calculates, for the current picture, the sum of absolute values of the differences between each Aij and the average value of the Aij in each macro-block, Σ|Aij−(average of Aij)|, and supplies the sum as an estimated value for intra-coding to the prediction decision circuit 54.
The motion vector detection circuit 50 calculates the sum of absolute values (or sum of squares) of the difference (Aij−Bij) between signals Aij of the macro-blocks of the current picture and signals Bij of the macro-blocks of the prediction picture, Σ|Aij−Bij|, in each of frame prediction mode and field prediction mode. As explained above, the motion vector(s) for the current picture are applied to the reference picture(s) to generate the prediction picture. When the reference picture temporally precedes the current picture, the quantity Σ|Aij−Bij| is referred to as a forward prediction error signal, and when the reference picture temporally succeeds the current picture, the quantity Σ|Aij−Bij| is referred to as a backward prediction error signal. When the prediction picture is the mean of a temporally preceding and a temporally succeeding reference picture, as motion-compensated, the quantity Σ|Aij−Bij| is referred to as a bi-directional prediction error signal.
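The two error measures above translate directly into code. This sketch treats a macro-block as a flat list of samples and omits motion compensation; the function names are hypothetical.

```python
# Sketch of the prediction error measures: the intra-coding estimate
# sum(|Aij - mean(A)|) and the inter-picture estimate sum(|Aij - Bij|).

def intra_estimate(a):
    """Estimated value for intra-coding of macro-block A."""
    mean = sum(a) / len(a)
    return sum(abs(x - mean) for x in a)

def prediction_error(a, b):
    """Sum of absolute differences between current block A and prediction B."""
    return sum(abs(x - y) for x, y in zip(a, b))

curr = [100, 102, 98, 100]
pred = [101, 101, 99, 100]
assert intra_estimate(curr) == 4.0       # mean is 100
assert prediction_error(curr, pred) == 3
```

Depending on whether the prediction block B comes from a preceding reference, a succeeding reference, or the mean of both, the same `prediction_error` value plays the role of the forward, backward, or bi-directional prediction error signal.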
The circuit 50 supplies the forward frame prediction, the forward field prediction, the backward frame prediction, the backward field prediction, the bi-directional frame prediction and the bi-directional field prediction error signals to the prediction decision circuit 54.
The prediction decision circuit 54 selects one of intra-coding, forward inter-picture prediction, backward inter-picture prediction or bi-directional inter-picture prediction, and one of frame and field prediction mode, in accordance with the smallest of the estimated value for intra-coding and the forward frame, forward field, backward frame, backward field, bi-directional frame and bi-directional field prediction error signals. The arithmetic unit 53 predictively encodes the current picture, as processed by the frame or field changeover circuit 52, in accordance with the prediction mode selected by the prediction decision circuit 54.
The motion vector detection circuit 50 serves to calculate and supply the motion vector(s) associated with the selected prediction mode to a variable length encoding circuit 58 and a motion compensation circuit 64, explained later.
The sums of the absolute values of the inter-frame differences (prediction errors) on the macro-block basis are supplied from the motion vector detection circuit 50 to the prediction mode changeover circuit 52 and to the prediction decision circuit 54, in the manner described above.
The arithmetic unit 53 supplies predictively encoded data, also referred to as difference data, for the current picture to a DCT mode changeover circuit 55 which is adapted to process the current picture for one of frame based and field based orthogonal transformation.
The DCT changeover circuit 55 functions to compare the encoding efficiency when the DCT operations for the macro-blocks in a picture are performed with the odd field data alternating with the even field data, that is, for frame based orthogonal transformation, as shown in FIG. 8A, with the encoding efficiency when the DCT operations for the macro-blocks in a picture are performed with the odd field data separated from the even field data, that is, for field based orthogonal transformation, as shown in FIG. 8B. The circuit 55 functions to select the mode with the higher encoding efficiency.
To evaluate the encoding efficiency for frame based orthogonal transformation, the DCT mode changeover circuit 55 places the luminance macro-block data into interlaced form, as shown in FIG. 8A, calculates the differences between the odd field line signals and even field line signals vertically adjacent to each other, and finds the sum of absolute values of the differences, EFM, or the sum of squared values of the differences.
To evaluate the encoding efficiency for field based orthogonal transformation, the DCT mode changeover circuit 55 places the luminance macro-block data into non-interlaced form, as shown in FIG. 8B, calculates the differences between vertically adjacent odd field line signals and the differences between vertically adjacent even field line signals, and finds the sum of absolute values of the differences, EFD, or the sum of squared values of the differences.
The DCT changeover circuit 55 compares the difference between the frame based and field based sums of absolute values with a predetermined threshold and selects frame based DCT transformation if the difference EFM−EFD is less than the predetermined threshold.
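The EFM/EFD comparison above can be sketched directly. The macro-block below is a toy example with strongly interlaced content, and the threshold value is an illustrative assumption, not one specified in the text.

```python
# Sketch of the DCT mode decision: EFM sums vertical differences across
# interlaced (frame-ordered) lines; EFD sums vertical differences within the
# odd field and within the even field separately. Frame DCT is chosen when
# EFM - EFD falls below a threshold.

def efm(mb):
    """Vertical line differences of the interlaced macro-block."""
    return sum(abs(a - b) for r1, r2 in zip(mb, mb[1:]) for a, b in zip(r1, r2))

def efd(mb):
    """Vertical line differences within each field taken separately."""
    odd, even = mb[0::2], mb[1::2]
    total = 0
    for field in (odd, even):
        total += sum(abs(a - b)
                     for r1, r2 in zip(field, field[1:])
                     for a, b in zip(r1, r2))
    return total

def select_dct_mode(mb, threshold=0):
    return 'frame' if efm(mb) - efd(mb) < threshold else 'field'

# Strongly interlaced content: the two fields differ a lot, so adjacent frame
# lines differ (large EFM) while lines within one field match (small EFD).
mb = [[0, 0] if i % 2 == 0 else [100, 100] for i in range(8)]
assert efm(mb) > efd(mb)
assert select_dct_mode(mb) == 'field'
```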
If the frame prediction mode is selected in the prediction mode changeover circuit 52, the probability is high that the frame DCT mode will be selected in the DCT mode changeover circuit 55. If the field prediction mode is selected in the prediction mode changeover circuit 52, the probability is high that the field DCT mode will be selected in the DCT mode changeover circuit 55. However, since this is not necessarily the case, the prediction mode changeover circuit 52 sets the mode which will give the least value of the sum of the absolute values of prediction errors, while the DCT mode changeover circuit 55 sets the mode which will give the optimum orthogonal transformation encoding efficiency.
If frame based orthogonal transformation mode, also referred to as frame DCT mode, is selected, the DCT mode changeover circuit 55 functions to ensure that the four luminance blocks Y1 to Y4 and the two color difference blocks Cb, Cr represent alternating or interlaced odd and even field lines, as shown in FIG. 8A.
If field based orthogonal transformation mode, also referred to as field DCT mode, is selected, the DCT mode changeover circuit 55 functions to ensure that each of the luminance blocks represents only one field, and that each of the color difference blocks has segregated or non-interlaced odd and even field lines, as shown in FIG. 8B.
The DCT mode changeover circuit 55 functions to output the data having the configuration associated with the selected DCT mode, and to output a DCT flag indicating the selected DCT mode to the variable length encoding circuit 58 and the motion compensation circuit 64.
The DCT mode changeover circuit 55 supplies appropriately configured difference picture data to a DCT circuit 56, shown in FIG. 6, which is operative to orthogonally transform it using a discrete cosine transformation into DCT coefficients, and to supply the DCT coefficient data to a quantization circuit 57 that functions to quantize the coefficient data with quantization steps selected in accordance with the volume of data stored in a transmission buffer 59 and to supply quantized data to a variable length encoding circuit 58.
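The transform-and-quantize step can be sketched on a single 8-sample line. This is a generic 8-point DCT-II with uniform quantization, a simplified stand-in for the 2-D 8×8 transform and the step selection performed by the quantization circuit 57.

```python
# Sketch of the transform-and-quantize step: an 8-point DCT-II (orthonormal
# scaling) followed by uniform quantization with a given step size.

import math

def dct_8(samples):
    """8-point DCT-II of one line of difference data."""
    n = len(samples)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(samples))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def quantize(coeffs, step):
    """Uniform quantization; a larger step yields fewer bits after coding."""
    return [round(c / step) for c in coeffs]

flat = [100] * 8                     # constant line: energy in the DC term only
coeffs = dct_8(flat)
assert abs(coeffs[0] - 100 * math.sqrt(8)) < 1e-9
assert all(abs(c) < 1e-9 for c in coeffs[1:])
assert quantize(coeffs, 16) == [18, 0, 0, 0, 0, 0, 0, 0]
```

The long runs of zero-valued quantized coefficients are what the subsequent variable length coding exploits.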
The variable length encoding circuit 58 is also supplied with the quantization step or scale data from the quantization circuit 57, prediction mode data from the prediction decision circuit 54, that is, data indicating which of intra-picture prediction, forward prediction, backward prediction or bi-directional prediction is used, and motion vector data from the motion vector detection circuit 50. The encoding circuit 58 also receives prediction flag data from the prediction decision circuit 54 comprising a flag indicating which of the frame prediction mode or the field prediction mode is used, and DCT flag data from the DCT mode changeover circuit 55 comprising a flag indicating which of the frame DCT mode or the field DCT mode is used. This information is placed into the header portion of the encoded data stream.
The variable length encoding circuit 58 serves to encode the quantized data and the header information using a variable length code, such as a Huffman code, in accordance with the quantization step data supplied from the quantization circuit 57, and to output the resulting data to a transmission buffer 59.
The quantized data and quantization step are also supplied to a dequantization circuit 60 which serves to dequantize the quantized data using the quantization step, and to supply the recovered DCT coefficient data to an inverse DCT circuit 61 that functions to inverse transform the DCT coefficient data to produce recovered difference data and to supply the recovered difference data to an arithmetic unit 62.
The arithmetic unit 62 combines the recovered difference data with a previously encoded and decoded reference picture, as motion compensated, to produce decoded data for a reconstructed picture which will be used as a reference picture and which is read into one of two frame memories 63a, 63b. The memories 63a, 63b are adapted to read out the reference picture data stored therein to a motion compensation circuit 64 that uses the motion vectors from the motion vector detection circuit 50 to produce a prediction picture from the reference picture. Specifically, the circuit 64 uses the motion vector to alter the readout address of the reference picture from the memory 63a or 63b.
For a group of pictures, after the first frame I-picture data and the third frame P-picture data are stored in the forward and backward prediction picture memories or units 63a, 63b, respectively, the second frame B-picture data is processed by the motion vector detection circuit 50. The prediction decision circuit 54 selects the frame or field prediction mode, while setting the prediction mode to one of intra-frame prediction mode, forward prediction mode, backward prediction mode and bi-directional prediction mode in correspondence with the sum of absolute values of prediction errors by macro-block.
Since a reconstructed B-picture is not used as a reference picture for other pictures, it is not stored in the frame memory 63.
It will be appreciated that the frame memory 63 has its forward and backward prediction picture units 63a, 63b bank-exchanged as needed, so that a picture stored in one of the units 63a or 63b can be outputted as either a forward or a backward prediction picture.
The motion compensation circuit 64 functions to supply the motion compensated data as a prediction picture to the arithmetic unit 62 and to the arithmetic unit 53, which subtracts the prediction picture from the P-picture or the B-picture currently being predictively encoded.
More specifically, when the motion vector detection circuit 50 receives picture data for an I-picture from the forward original picture unit 51a, the prediction decision circuit 54 selects the intra-frame prediction mode and sets a switch 53d of the arithmetic unit 53 to an input contact a. This causes the I-picture data to be inputted directly to the DCT mode changeover circuit 55. In this case, no prediction picture is expected from the motion compensation circuit 64. The I-picture data is also supplied to the forward prediction picture unit 63a.
When the forward prediction mode is selected by the prediction decision circuit 54, the circuit 54 also sets the switch 53d to an input contact b, which causes the arithmetic unit 53a to subtract the prediction picture, produced by the motion compensation circuit 64, from the picture read out from the memory 51, for each macro-block on a pixel by pixel basis, to produce difference data. The P-picture, after encoding and local decoding, is supplied to one of the units 63a, 63b. For example, if the P-picture immediately follows an I-picture, then the P-picture is stored in the backward prediction picture unit 63b.
For forward predictive encoding, the prediction picture is a reference I-picture or P-picture read out from the forward prediction picture unit 63a of the frame memory 63 and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector outputted from the motion vector detection circuit 50. More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the forward prediction picture unit 63a in an amount corresponding to the motion vector currently output by the motion vector detection circuit 50.
When the backward prediction mode is selected by the prediction decision circuit 54, the circuit 54 also sets the switch 53d to an input contact c, which causes the arithmetic unit 53b to subtract the prediction picture, produced by the motion compensation circuit 64, from the picture read out from the memory 51, on a pixel by pixel basis, to produce difference data.
For backward predictive encoding, the prediction picture is a P-picture read out from the backward prediction picture unit 63b of the frame memory 63 and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector outputted from the motion vector detection circuit 50. More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the backward prediction picture unit 63b in an amount corresponding to the motion vector currently output by the motion vector detection circuit 50.
When the bi-directional prediction mode is selected by the prediction decision circuit 54, the circuit 54 sets the switch 53d to an input contact d, which causes the arithmetic unit 53c to subtract a prediction picture from the picture read out from the memory 51, on a pixel by pixel basis, to produce difference data. The prediction picture is the mean value of a forward prediction picture and a backward prediction picture.
In the case of bi-directional prediction, the picture stored in the forward prediction picture unit 63a and the picture stored in the backward prediction picture unit 63b are read out and motion-compensated by the motion compensation circuit 64 in dependence upon the motion vectors outputted from the motion vector detection circuit 50. More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the forward and backward prediction picture units 63a, 63b in an amount corresponding to the appropriate one of the motion vectors currently output by the motion vector detection circuit 50.
The transmission buffer 59 temporarily stores the data supplied thereto, generates control data indicating the volume of data stored therein and supplies the control data to the quantization circuit 57. When the volume of data stored in the transmission buffer 59 reaches a predetermined upper limit value, the control data from the transmission buffer 59 causes the quantization scale of the quantization circuit 57 to increase so as to decrease the volume of the quantized data. Similarly, when the volume of data stored in the transmission buffer 59 reaches a predetermined lower limit value, the control data from the transmission buffer 59 causes the quantization scale of the quantization circuit 57 to decrease so as to increase the volume of the quantized data. In this manner, the transmission buffer 59 prevents the data supplied thereto from overflowing or underflowing its capacity. The data stored in the transmission buffer 59 are read out at a predetermined timing to an output terminal 69 and thence to a transmission channel for recording on, for example, the recording medium 3.
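The buffer feedback loop just described can be sketched as a simple rule on buffer occupancy. The limit values and step adjustments below are illustrative assumptions, not parameters from the text.

```python
# Sketch of the transmission buffer feedback: the quantization scale rises as
# the buffer nears its upper limit (coarser quantization, fewer bits) and
# falls near the lower limit (finer quantization, more bits).

def adjust_scale(scale, occupancy, upper=900, lower=100):
    """Return the new quantization scale given current buffer occupancy."""
    if occupancy >= upper:
        return scale + 2           # coarser quantization -> fewer bits produced
    if occupancy <= lower:
        return max(1, scale - 2)   # finer quantization -> more bits produced
    return scale                   # in range: leave the scale unchanged

assert adjust_scale(16, occupancy=950) == 18   # near overflow
assert adjust_scale(16, occupancy=500) == 16   # comfortable range
assert adjust_scale(16, occupancy=50) == 14    # near underflow
```

Because the buffer is drained at a fixed readout rate, this negative feedback keeps the occupancy between the two limits, preventing overflow and underflow.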
Although the foregoing description has been made with reference mainly to the luminance blocks, the color difference blocks are similarly processed and transmitted, using a motion vector equal to the corresponding luminance block motion vector halved in both the vertical and horizontal directions.
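The chrominance motion vector derivation noted above is a one-liner. Integer division is used here purely for illustration; the function name is hypothetical.

```python
# Sketch of the chrominance motion vector: the luminance motion vector is
# halved in both directions, matching the half-resolution Cb/Cr blocks.

def chroma_motion_vector(luma_mv):
    dx, dy = luma_mv
    return (dx // 2, dy // 2)

assert chroma_motion_vector((6, -4)) == (3, -2)
```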
FIG. 9 illustrates the decoder 31 shown in FIG. 4.
The reproduced encoded picture data transmitted from the recording medium 3 is applied to a reception circuit, not shown, or to an input terminal 80 which applies the encoded picture data to a reception buffer 81 that serves to temporarily store the encoded picture data and to supply this data to a variable length decoding circuit 82 of a decoding circuit 90.
The variable length decoding circuit 82 functions to variable length decode the encoded data, which includes, for each macro-block, the prediction mode, the motion vector, the prediction flag, the DCT flag and the quantized picture data. The circuit 82 outputs the recovered motion vector, prediction mode data, prediction flags and DCT flags to the motion compensation circuit 87, and outputs the quantization step data and the quantized picture data to an inverse quantization circuit 83.
The inverse quantization circuit 83 is adapted to dequantize the picture data supplied from the variable length decoding circuit 82 in accordance with the quantization step data supplied from the variable length decoding circuit 82 and to output the thus recovered coefficient data to an inverse transformation (IDCT) circuit 84.
The IDCT circuit 84 is adapted to perform an inverse transformation on the recovered coefficient data to produce recovered difference data, and to supply the recovered difference data to an arithmetic unit 85.
If the recovered difference data supplied from the IDCT circuit 84 represents an I-picture, the arithmetic unit 85 does not process the data and simply supplies it through an output terminal 91 to the format converting circuit 32 shown in FIG. 4, and to a forward prediction picture unit 86a of a frame memory 86.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a P-picture produced in the forward prediction mode, then the reference picture data of the preceding frame, as stored in the forward prediction picture memory 86a of the frame memory 86, is read and motion-compensated by a motion compensation circuit 87 in dependence upon the motion vector outputted from the variable length decoding circuit 82 to generate a prediction picture. Specifically, the motion compensation circuit 87 uses the motion vector to alter the readout address supplied to the memory 86a. The arithmetic unit 85 adds the prediction picture to the recovered difference data to produce a decoded or reconstructed picture which is stored in a backward prediction picture memory 86b of the frame memory 86. The decoded P-picture is retained in the decoder 31, and output after the next B-picture is decoded and output, so as to restore the pictures to the order in which they were supplied to the encoder 18 of FIG. 4.
Even if the macro-block of the P-picture was encoded in the intra-coding mode, the decoded P-picture is directly stored in the backward prediction picture unit 86b, without being output to the output terminal 91 by the arithmetic unit 85.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the intra-coding mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, a prediction picture is not generated.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the forward prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, the data stored in the forward prediction picture unit 86a of the frame memory 86 is read out and motion compensated by the motion compensation circuit 87 using the motion vector supplied from the variable length decoding circuit 82 to form the prediction picture. The arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the backward prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, the data stored in the backward prediction picture unit 86b is read out and motion compensated by the motion compensation circuit 87 using the motion vector supplied from the variable length decoding circuit 82 to form the prediction picture. The arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the bi-directional prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, the data stored in both the forward and backward prediction picture memories 86a, 86b are read out and respectively motion compensated by the motion compensation circuit 87 using the motion vectors supplied from the variable length decoding circuit 82, then averaged to form the prediction picture. The arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
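The four macro-block prediction modes just described can be summarized in a small sketch. This is an illustration, not the patent's circuitry: pictures are modeled as tiny 2-D lists, and `motion_compensate` stands in for shifting the readout address of a prediction picture unit by a motion vector.

```python
# Illustrative model of prediction-picture formation per macro-block mode.
def motion_compensate(picture, mv):
    # Shift the readout position by the motion vector (dy, dx) and read
    # out a 2x2 block; a stand-in for altering the memory readout address.
    dy, dx = mv
    return [[picture[r + dy][c + dx] for c in range(2)] for r in range(2)]

def form_prediction(mode, fwd, bwd, mv_f, mv_b):
    if mode == "intra":
        return None                         # no prediction picture
    if mode == "forward":
        return motion_compensate(fwd, mv_f)
    if mode == "backward":
        return motion_compensate(bwd, mv_b)
    if mode == "bidirectional":
        pf = motion_compensate(fwd, mv_f)
        pb = motion_compensate(bwd, mv_b)
        # The two motion-compensated pictures are averaged.
        return [[(a + b) // 2 for a, b in zip(rf, rb)]
                for rf, rb in zip(pf, pb)]
    raise ValueError("unknown prediction mode: %r" % (mode,))

def reconstruct(residual, prediction):
    # The arithmetic unit sums the recovered difference data with the
    # prediction picture (or passes intra data through unchanged).
    if prediction is None:
        return residual
    return [[r + p for r, p in zip(rr, rp)]
            for rr, rp in zip(residual, prediction)]
```

Integer averaging with `// 2` is an assumption; the text says only that the two pictures are averaged.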
The recovered B-picture is supplied via the output terminal 91 to the format converting circuit 32. However, since the B-picture is not utilized for generating a prediction picture for other pictures, it is not stored in the frame memory 86.
After outputting of the B-picture, the picture data of the P-picture stored in the backward prediction picture unit 86b is read and supplied via the motion compensation circuit 87 to the arithmetic unit 85. Motion compensation is not performed at this time.
The counterpart circuits to the prediction mode changeover circuit 52 and the DCT mode changeover circuit 55 in the encoder 18 of FIG. 6 are not shown in the decoder 31. The processing to be performed by these circuits, that is, the processing for restoring the configuration in which odd-field line signals and even-field line signals are separated from each other to the configuration in which odd and even-field line signals alternate with each other, is performed by the motion compensation circuit 87.
The processing of the luminance signals has been explained in the foregoing. As will be appreciated by one of ordinary skill in the art, the processing of the color difference signals is carried out in a similar manner. However, the motion vector employed in such case is the motion vector for the luminance signals halved in both the vertical and horizontal directions.
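The halving rule for the color difference motion vector can be stated in one line. Truncation toward zero is an assumption here; the text does not specify how odd vector components are rounded.

```python
# Sketch of the rule above: the color difference blocks reuse the luminance
# motion vector halved in both the vertical and horizontal directions.
def chroma_motion_vector(luma_mv):
    dy, dx = luma_mv
    # int() truncates toward zero; the rounding convention is assumed.
    return (int(dy / 2), int(dx / 2))
```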
FIG. 10 shows the signal-to-noise ratio (SNR) for pictures transmitted using the above-described technique. As can be seen, the best quality transmission is obtained for I-pictures, good quality transmission is obtained for P-pictures, and the poorest quality transmission is obtained for B-pictures. Thus, if the transmission path has adequate capacity, it is preferable to transmit a picture as an I-picture.
If all pictures cannot be transmitted as I-pictures, it is better to transmit a series of pictures as shown in FIG. 10, rather than in a format in which a single average picture quality is used for all pictures. The technique shown in FIG. 10 takes advantage of a characteristic of human vision: for a given transmission rate, a series of pictures of varying quality, as shown in FIG. 10, is perceived as being of higher quality than a series of pictures of constant quality.
Accordingly, in the configuration of FIG. 6, transmission rate control is carried out by the quantizer 57 in order to attain the picture quality perceived as better.
To dub pictures, two coder-decoder (codec) units are used in series. However, the picture quality obtained from the second codec is substantially worse than the picture quality obtained from the first codec, as explained below.
FIG. 11 shows a configuration representing two codecs connected by an analog connection, namely, coder 201, decoder 202, coder 203 and decoder 204, connected in series.
In FIG. 11, an analog video signal is supplied to an input terminal 200 as an input signal a. The input terminal 200 functions to apply the analog video signal to an A/D converter 211 of coder 201. The converter 211 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to coding circuit 212 that serves to encode this signal as previously described to produce a coded digital video signal.
The coded digital video signal from coding circuit 212 of coder 201 is supplied to a decoding circuit 213 of decoder 202 which is adapted to decode the coded digital video signal and to apply the decoded video signal to D/A converter 214.
The analog video signal produced by the D/A converter 214 of decoder 202 is supplied as an output signal b to the coder 203, which functions in a similar manner as the coder 201.
The digital video signal produced by the coder 203 is supplied to decoder 204 which functions in a similar manner as the decoder 202. The decoder 204 delivers an analog video signal as an output signal c to a terminal 205, which may transmit the signal c to another coder (not shown) and so on.
FIG. 12 shows the SNR of the output signals b, c shown in FIG. 11. The SNR of the output signal c is seen to be substantially worse than the SNR of the output signal b.
The deterioration in picture quality results from a mismatch between the picture type applied in the first codec and the picture type applied in the second codec. Namely, if a picture coded as a B-picture in the first coder/decoder pair is coded as, e.g., a P-picture in the second coder/decoder pair, a great deterioration of picture quality results because the picture quality changes as a function of the picture type.
Since the deterioration in picture quality results from the mismatch between picture types of respective stages of codecs, such deterioration similarly takes place when digital connections are used between respective codecs.
FIG. 13 shows a configuration representing two codecs connected by a digital connection, namely, coder 302, decoder 303, coder 304 and decoder 305, connected in series.
An analog video signal is supplied to terminal 300, which supplies the analog video signal as an input signal a to A/D converter 301 that serves to digitize the signal a, and to apply the digital signal to a digital interface 311 of coder 302. The digital interface 311 applies the signal supplied thereto to a coding circuit 312 which encodes or compresses the digital video data to an encoded digital video bit stream.
The encoded digital video signal from the coding circuit 312 is supplied to decoding circuit 313 of decoder 303 that decodes the signal supplied thereto, and applies the decoded signal to digital interface 314. The interface 314 functions to output the decoded signal as an output signal b.
The output signal b is supplied to coder 304 which functions in a similar manner as coder 302 to produce a coded signal that is applied to decoder 305 which functions in a similar manner as decoder 303. The digital signal output from the decoder 305 is supplied to a D/A converter that serves to convert the signal supplied thereto to an analog video signal and to supply the analog video signal as an output signal c to output terminal 307.
FIG. 12 also generally represents the SNR of the output signals b, c shown in FIG. 13.
OBJECTS AND SUMMARY OF THE INVENTION
Therefore, an object of the present invention is to provide a method and apparatus for encoding and decoding picture signals which avoid the aforementioned disadvantages of the prior art.
Another object of the present invention is to provide a method and apparatus for transmitting and receiving picture signals in serial stages which minimizes the deterioration in picture quality at each stage.
Yet another object of the present invention is to match the type of coding applied to pictures of a picture signal in serial processing stages, each stage comprising coding and decoding.
In accordance with one embodiment of the present invention, apparatus and method for processing a digital picture signal operate by receiving a digital picture signal which has picture type data included in a data identification area of the digital picture signal and which indicates one of intra-picture coding, predictive coding and bi-directionally predictive coding for respective pictures represented by the digital picture signal. The picture signal is encoded as a function of the picture type data to produce an encoded picture signal.
In accordance with another embodiment of the present invention, apparatus and method for processing an encoded digital picture signal operate by decoding the encoded digital picture signal so as to produce picture type data which represents the type of encoding of the encoded digital picture signal and to produce a decoded digital picture signal. The picture type data is added to a data identification area of the decoded digital picture signal to produce an output signal.
The above, and other objects, features and advantages of the present invention will be apparent from the following detailed description of the preferred embodiments of the present invention when read in conjunction with the accompanying drawings in which corresponding parts are identified by the same reference numeral.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A and 1B are pictures illustrating inter-frame correlation;
FIGS. 2A and 2B are diagrams illustrating types of pictures used in predictive encoding;
FIG. 3 is a diagram illustrating how picture signals are converted into encoded data for transmission;
FIG. 4 is a block diagram showing a conventional device for encoding and decoding picture signals;
FIG. 5 is a diagram referred to in explaining the operation of the format converting circuit shown in FIG. 4;
FIG. 6 is a block diagram showing the encoder of the device shown in FIG. 4;
FIG. 7 is a chart referred to in explaining the predictive encoding operation of the encoder shown in FIG. 6;
FIG. 8 is a chart referred to in explaining the orthogonal transformation operation of the encoder shown in FIG. 6;
FIG. 9 is a block diagram showing the decoder of the device shown in FIG. 4;
FIG. 10 is a graph showing picture quality as a function of picture type in a transmitted signal;
FIG. 11 is a block diagram showing two conventional video codecs connected in series using an analog connection;
FIG. 12 is a graph showing picture quality of the signals output by the codecs of FIG. 11;
FIG. 13 is a block diagram showing two conventional video codecs connected in series using a digital connection;
FIG. 14 is a block diagram showing two video codecs according to the present invention connected in series using an analog connection;
FIG. 15 is a graph showing picture quality of the signals output by the codecs of FIG. 14;
FIG. 16 is a block diagram showing two video codecs according to the present invention connected in series using a digital connection;
FIGS. 17A, 17B and 17C illustrate the data structure of a decoded digital video signal;
FIGS. 18A, 18B and 18C are diagrams illustrating various encoding structures of groups of pictures;
FIG. 19 is a block diagram showing a coder circuit according to the present invention; and
FIG. 20 is a block diagram showing a decoding circuit according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention comprises a picture type identifier which is included with the picture signal at all times, namely, when the signal is encoded and when the signal is decoded. The picture type indicates one of intra-picture coding (an I-picture), predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture). An I-picture includes macro-blocks encoded by only intra-coding. A P-picture includes macro-blocks encoded by intra-coding and/or macro-blocks encoded by forward predictive coding. A B-picture includes macro-blocks encoded by intra-coding and/or macro-blocks encoded by forward predictive coding and/or macro-blocks encoded by backward predictive coding and/or macro-blocks encoded by bi-directionally predictive coding.
Initially, each picture of a picture signal is encoded as a function of a respective picture type, then appropriately decoded. The decoded picture signal includes the respective picture types, preferably in the vertical blanking interval of each decoded picture. The decoded picture signal may be further processed by, for example, dubbing.
When the pictures of the decoded picture signal are again encoded, the re-encoding is a function of the picture type included in the decoded picture signal. The re-encoded signal includes the picture type. Subsequent decoding of the re-encoded picture is likewise a function of the picture type. Each picture of the re-decoded picture signal includes its respective picture type.
Thus, the present invention matches the type of predictive coding applied to pictures in a picture signal by serially arranged coders which process the picture signal.
The present invention promotes optimum picture quality. A picture, previously encoded as an I-picture, P-picture or B-picture is again encoded as an I-picture, P-picture or B-picture, respectively. Also, encoding of a picture, previously encoded as a B-picture, as an I-picture or a P-picture is prevented. Thus, deterioration in signal quality after plural coding and decoding operations is minimized.
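The matching rule just stated can be captured in a short sketch. The function name and the single-letter type codes are hypothetical, introduced only for illustration:

```python
# Sketch of the picture-type matching rule: a re-encoder always reuses the
# picture type recovered from the previous coding pass, so a picture once
# coded as a B-picture can never be re-coded as an I- or P-picture.
def select_reencode_type(previous_type):
    if previous_type not in ("I", "P", "B"):
        raise ValueError("unknown picture type: %r" % (previous_type,))
    return previous_type  # always match the earlier coding decision
```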
Referring now to the drawings, and in particular to FIG. 14, there are illustrated coding and decoding units (codecs) according to the present invention having a serial analog connection therebetween. A first codec comprises coder 120 and decoder 121, while a second codec comprises coder 122 and decoder 123. It will be appreciated by one of ordinary skill in the art that additional codecs may be serially connected to those shown in FIG. 14.
In FIG. 14, an analog video signal is supplied to an input terminal 100 as an input signal a, and a picture type signal is supplied to an input terminal 108. The picture type indicates one of intra-picture coding (an I-picture), predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture).
The input terminals 100, 108 function to apply the analog video signal and the picture type signal, respectively, to an A/D converter 101 and a coding circuit 102, respectively, of coder 120. The converter 101 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to the coding circuit 102.
The coding circuit 102 serves to encode the digital video signal as a function of the picture type signal to produce a coded digital video signal which includes, for each encoded picture, its picture type as identified by the picture type signal. More specifically, if the picture type for a picture indicates intra-picture coding, then the coding circuit 102 codes the picture as an I-picture. If the picture type for a picture indicates predictive coding, then the coding circuit 102 codes the picture as a P-picture. If the picture type for a picture indicates bi-directionally predictive coding, then the coding circuit 102 codes the picture as a B-picture.
The coder 120 may alternatively have the structure shown in FIG. 6, in which coding is performed without reference to an externally supplied picture type. As used herein, an externally supplied picture type means a picture type supplied from generally the same source as supplies the digital video signal, rather than from a separate source such as is shown in FIG. 6, namely, the picture type input device 65.
The coded digital video signal from coding circuit 102 of FIG. 14 is supplied to a decoding circuit 103 of decoder 121 which is adapted to decode the coded digital video signal as a function of the picture type included in the encoded signal and to apply the decoded video signal to D/A converter 104. The decoding circuit 103 is further adapted to apply the picture type decoded from the coded digital video signal to a multiplexer 105.
The multiplexer 105 is operative to multiplex the picture type information with the decoded video signal to produce a multiplexed analog video signal as an output signal b in which the picture type information is contained in the decoded video signal. Preferably, the multiplexer 105 inserts the picture type for a picture of the decoded video signal into the vertical blanking interval of the picture. As mentioned, a picture may be either a frame or a field of the video signal.
The output signal b is supplied from the multiplexer 105 to a separating circuit 106 of the coder 122. The separating circuit is operative to separate or demultiplex the analog video signal and the picture type information from the output signal b, to supply the separated analog video signal to an A/D converter 107, and to supply the separated picture type information to a coding circuit 108. The converter 107 is adapted to convert the separated analog video signal to a digital video signal, and to apply the digital video signal to the coding circuit 108.
The coding circuit 108 serves to encode the digital video signal as a function of the separated picture type to produce a re-coded digital video signal which includes, for each re-encoded picture, its picture type as identified by the separated picture type signal.
The re-coded digital video signal from the coding circuit 108 is supplied to the decoder 123, which operates in a similar manner as the decoder 121.
Decoding circuit 110 of decoder 123 decodes the re-coded digital video signal to produce a re-decoded digital video signal and a corresponding picture type signal. The re-decoded digital video signal is converted to an analog signal by D/A converter 109, and applied to a multiplexer 111 which multiplexes the analog video signal with the picture type signal from decoding circuit 110 to produce a multiplexed analog video signal as an output signal c. Preferably, the multiplexer 111 inserts the picture type for a picture into the vertical blanking interval of the picture. The multiplexer 111 applies its output signal c to an output terminal 119.
Due to the inclusion of the picture type identifier in the signals b and c, the codecs of FIG. 14 process respective pictures of the video signals b and c in the same manner, that is, as the same one of an I-picture, a P-picture or a B-picture.
FIG. 15 shows the SNR of the output signals b, c shown in FIG. 14. The SNR of the output signal c is seen to be only slightly worse than the SNR of the output signal b.
That is, since the type of predictive coding applied to each picture is the same in each of the serially arranged codecs, the deterioration in picture quality at each codec is minimized even when the picture quality changes from picture to picture due to the type of predictive coding employed from picture to picture.
FIG. 16 shows codecs according to the present invention having a serial digital connection therebetween. A first codec comprises coder 142 and decoder 143, while a second codec comprises coder 144 and decoder 145.
In FIG. 16, an analog video signal is supplied to an input terminal 140 as an input signal a, and a picture type signal is supplied to an input terminal 148. The input terminals 140, 148 function to apply the analog video signal and the picture type signal, respectively, to an A/D converter 141 and a coding circuit 152 of coder 142, respectively. The converter 141 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to a digital interface 151 of coder 142.
The coding circuit 152 serves to encode the digital video signal as a function of the picture type signal to produce a coded digital video signal which includes, for each encoded picture, its picture type as identified by the picture type signal. More specifically, if the picture type for a picture indicates intra-picture coding, then the coding circuit 152 codes the picture as an I-picture. If the picture type for a picture indicates predictive coding, then the coding circuit 152 codes the picture as a P-picture. If the picture type for a picture indicates bi-directionally predictive coding, then the coding circuit 152 codes the picture as a B-picture.
The coder 142 may alternatively have the structure shown in FIG. 6, in which coding is performed without reference to an externally supplied picture type.
The coded digital video signal from coding circuit 152 is supplied to a decoding circuit 153 of decoder 143 which is adapted to decode the coded digital video signal as a function of the picture type included in the encoded signal and to apply the decoded video signal to a digital interface 154. The decoding circuit 153 is further adapted to apply the picture type decoded from the coded digital video signal to a multiplexer 155.
The multiplexer 155 is operative to multiplex the picture type information with the decoded video signal to produce a multiplexed digital video signal as an output signal b in which the picture type information is contained in the decoded video signal. Preferably, multiplexer 155 multiplexes the picture type for a picture of the decoded video signal as a flag in the respective picture.
In a preferred embodiment of the present invention, the picture type information is inserted (or multiplexed) into the decoded video signal at a location therein which precedes the actual video data that represents the video field or frame. FIGS. 17A, 17B and 17C illustrate the data structure of the decoded (MPEG) video signal. FIG. 17A illustrates the data structure of a decoded video signal having a serial digital interface format as specified in the standard SMPTE 259 (Society of Motion Picture & Television Engineers). As shown, a frame consists of a first vertical blanking area VBK1, a first optional blanking area OBK1 and a first active video area ACV1, which constitute the first field, followed by a second vertical blanking area VBK2, a second optional blanking area OBK2 and a second active video area ACV2, which constitute the second field. In the preferred embodiment, each of the vertical blanking areas consists of 9 horizontal scanning lines, each of the optional blanking areas consists of 10 horizontal scanning lines, the first active video area consists of 244 horizontal scanning lines and the second active video area consists of 243 horizontal scanning lines, for a total of 525 horizontal scanning lines for a single frame.
FIG. 17B illustrates the data structure of a horizontal scanning line. As shown, a horizontal scanning line includes an end of active video (EAV) area followed by an ancillary (ANC) area, a start of active video (SAV) area and a video area. The ANC area, as shown in FIG. 17C, includes an ancillary data flag (ADF) area followed by a data identification (DID) area, an ancillary number data (DBN) area, an ancillary word data (DC) area, an ancillary data (ANC DATA) area, and a check sum (CS) area. Since the above-noted areas of a horizontal scanning line of digital data are well-known in the art, their descriptions are omitted herein except where necessary for an understanding of the present invention.
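The ANC area layout named above (ADF, DID, DBN, DC, ANC DATA, CS) can be sketched at the byte level. This is a hypothetical illustration only: the ADF word values, the picture-type codes carried in the DID area, and the checksum rule are all assumptions for the example, not values taken from the text or from the SMPTE standard.

```python
# Hypothetical sketch of an ANC packet whose DID area carries the picture
# type; all concrete word values and codes here are illustrative assumptions.
PICTURE_TYPE_CODE = {"I": 0x01, "P": 0x02, "B": 0x03}

def build_anc_packet(picture_type):
    adf = [0x00, 0xFF, 0xFF]                 # ancillary data flag (assumed)
    did = [PICTURE_TYPE_CODE[picture_type]]  # data identification: picture type
    dbn = [0x00]                             # ancillary number data
    data = []                                # no further ANC DATA in this sketch
    dc = [len(data)]                         # ancillary word (data count) area
    body = did + dbn + dc + data
    cs = [sum(body) & 0xFF]                  # simple check sum over the body
    return adf + body + cs
```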
In accordance with the present invention, the picture type information is inserted into the DID area of the ANC area of each of the horizontal scanning lines in the first and second vertical blanking areas. However, the picture type information also may be inserted into the DID area of the ANC area of other horizontal scanning lines of the decoded video signal, although these DID areas may be used for transmitting other types of data. For example, the DID areas in horizontal lines of non-vertical blanking areas may include other formatting information.
The picture type information (or picture type data) may identify the type of encoding of the picture (e.g., intra-picture coding, predictive coding, and bi-directionally predictive coding) in various ways. FIGS. 18A to 18C illustrate one method in which the type of encoding is identified by the structure of the group of pictures (GOP). As shown, the structure of a group of pictures (GOP) may be identified by the minimum number of frames "M" between I and P pictures, between P and P pictures, and between I and I pictures, and the total number of frames "N" (pictures) in the group of pictures. For example, FIG. 18A illustrates groups of pictures having an encoding structure of M=3 and N=9 in which there are 9 frames in each group and wherein there are 3 frames from each I or P frame to the respectively succeeding I or P frame. Similarly, FIG. 18B illustrates groups of pictures having an encoding structure of M=2 and N=2, and FIG. 18C illustrates groups of pictures having an encoding structure of M=1 and N=2.
When the picture type data identifies the "M" and "N" numbers, the type of encoding for each picture can be determined by the location of a respective picture within the group of pictures, and the location of a respective picture may be identified in the picture type data either by identifying each picture's location within the group of pictures or by identifying only the first picture within the group of pictures. For example, when M=3 and N=9 for a group of frames in the decoded video signal (FIG. 18A), the third frame in that group is identified as a decoded I-frame.
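Recovering a picture's coding type from the (M, N) numbers can be sketched as follows. This sketch assumes coding order with the I-frame first in the group and every M-th frame thereafter a P-frame; the display-order reordering under which, in the FIG. 18A example, the I-frame appears third in the group is ignored here.

```python
# Sketch, under the stated assumptions, of deriving the picture type from a
# picture's position within an (M, N) group-of-pictures structure.
def picture_type(index_in_gop, m, n):
    if not 0 <= index_in_gop < n:
        raise ValueError("index outside the group of pictures")
    if index_in_gop == 0:
        return "I"          # first picture of the group (coding order)
    if index_in_gop % m == 0:
        return "P"          # every M-th picture after the I-picture
    return "B"              # pictures in between
```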
Returning to FIG. 16, the output signal b (the multiplexed decoded video signal) is supplied from the multiplexer 155 to a separating circuit 156 of the coder 144. The separating circuit is operative to separate or demultiplex the digital video signal and the picture type data from the output signal b, to supply the separated digital video signal to a digital interface 157, and to supply the separated picture type data to a coding circuit 158. The interface 157 is adapted to apply the separated digital video signal to the coding circuit 158.
The coding circuit 158 serves to encode the separated digital video signal as a function of the separated picture type to produce a re-coded digital video signal which includes, for each re-encoded picture, its picture type as identified by the separated picture type signal.
The re-coded digital video signal from the coding circuit 158 is supplied to the decoder 145, which operates in a similar manner as the decoder 143.
Decoding circuit 160 of decoder 145 decodes the re-coded digital video signal to produce a re-decoded digital video signal and a corresponding picture type signal. The re-decoded digital video signal is supplied to a digital interface 159 and thence to a multiplexer 161 which multiplexes the re-decoded digital video signal with the picture type signal from decoding circuit 160 to produce a multiplexed digital video signal. The multiplexer 161 multiplexes the picture type signal as a flag (e.g., in the DID area of the ANC area) in the re-decoded digital video signal.
The multiplexer 161 supplies the multiplexed digital video signal to a D/A converter 146 which serves to convert the multiplexed digital video signal to an analog video signal, also referred to as output signal c. The converter 146 applies the output signal c to an output terminal 147.
Due to the inclusion of the picture type identifier in the signals b and c, the codecs of FIG. 16 process respective pictures of the video signals b and c in the same manner, that is, as the same one of an I-picture, a P-picture or a B-picture. Consequently, the deterioration in picture quality at each codec is minimized even when the picture quality changes from picture to picture due to the type of predictive coding employed from picture to picture.
FIG. 19 shows the coding circuits of FIGS. 14 and 16 in more detail. In FIG. 19, elements similar to those in FIG. 6 are indicated by the same reference numerals, and detailed explanations thereof are omitted.
In FIG. 19, a picture type signal is supplied to input terminal 70 which serves to supply the picture type signal to motion vector detector 450, predictive judging circuit 454 and variable length coding circuit 458. The processing performed by elements 450, 454 and 458 is similar to the processing performed by elements 50, 54 and 58 of FIG. 6, except that the elements of FIG. 19 perform in accordance with the picture type identified in the external picture type signal, which indicates the picture type used in previous coding. The variable length coding circuit 458 includes the picture type based on the external picture type signal as part of the header information.
FIG. 20 shows the decoding circuits of FIGS. 14 and 16 in more detail. In FIG. 20, elements similar to those in FIG. 9 are indicated by the same reference numerals, and detailed explanations thereof are omitted.
Variable length decoding circuit 482 of FIG. 20 is similar to variable length decoding circuit 82 of FIG. 9, except that circuit 482 applies the picture type separated from the encoded signal not only to motion compensator 487, but also to output terminal 92.
Although an illustrative embodiment of the present invention, and various modifications thereof, have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to this precise embodiment and the described modifications, and that various changes and further modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined in the appended claims.