This application is a divisional of U.S. patent application Ser. No. 08/280,584, filed Jul. 26, 1994 which issued as U.S. Pat. No. 6,198,848 on Mar. 6, 2001, which was a continuation of application Ser. No. 07/738,562, filed Jul. 31, 1991, abandoned.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to an apparatus and method for image processing and, more particularly, to an image processing apparatus and method for compressing and storing data indicative of a full-color image, such as a photograph having tones (colors), upon partitioning the data into blocks.
2. The Prior Art
The memory capacity necessary for storing a full-color image (hereinafter referred to as an “image”), such as a photograph, in a memory is given by (number of pixels)×(number of tone bits). As a typical example, a color image composed of 1024 lines (vertical)×1280 pixels (horizontal)×24 bits per pixel (eight bits for each of the colors R, G and B) is equivalent to 30 megabits (approximately 3.75 megabytes) of data. An enormous memory capacity would be required to store such a high-quality color image.
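For reference, the stated figure follows directly from the formula above; the short calculation below is a minimal illustrative sketch (Python is used here purely as a worked example, and the variable names are chosen only for this illustration).

```python
# Illustrative check of the figure above: (number of pixels) x (number of tone bits).
lines_v, pixels_h, bits_per_pixel = 1024, 1280, 24   # 8 bits each for R, G and B

total_bits = lines_v * pixels_h * bits_per_pixel
print(total_bits)                      # 31457280 bits
print(total_bits / (1024 * 1024))      # 30.0 -> 30 megabits
print(total_bits / 8 / (1024 * 1024))  # 3.75 -> about 3.75 megabytes
```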
For this reason, a variety of methods of compressing the amount of information have been proposed. Attempts have been made to reduce the required memory capacity by first compressing image information and then storing the compressed information in memory, and subsequently expanding the information when it is read out of the memory to obtain the original image information.
FIG. 32 is a block diagram of an image storing circuit proposed by the JPEG (Joint Photographic Experts Group) of the CCITT/ISO as a method of achieving international standardization of color still-picture coding. The circuit of FIG. 32 is based upon a coding method [see “International Standard for Color Photographic Coding”, Hiroshi Yasuda, The Journal of the Institute of Image Electronics Engineers of Japan, Vol. 18, No. 6, pp. 398-407, 1989 (in Japanese)] of a baseline system which combines a discrete cosine transformation (hereinafter referred to as “DCT”) and variable-length coding (hereinafter referred to as “VLC”).
As shown in FIG. 32, pixel data entered from an input terminal 1101 is cut into an 8×8 pixel block in a block forming circuit 1102, the data is subjected to a cosine transformation by a DCT circuit 1103, and the transformation coefficients are supplied to a quantization unit 1105. In accordance with quantization-step information supplied by a quantization table 1106, the quantization unit 1105 subjects the transformation coefficients to linear quantization. Of the quantized transformation coefficients, a DC (direct current) coefficient is applied to a predictive coding circuit [hereinafter referred to as a “DPCM” (differential pulse-coded modulation) circuit] 1401, which obtains the differential (a prediction error) between this DC coefficient and the DC component of the preceding block. The difference is applied to a one-dimensional Huffman coding circuit 1402.
FIG. 33 is a detailed block diagram showing the DPCM circuit 1401. In the DPCM circuit 1401, the quantized DC coefficient from the quantization unit 1105 is applied to a delay circuit 1501 and a subtracter 1502. The delay circuit 1501 applies a delay equivalent to the time needed for the discrete cosine transformation circuit to operate on one block, namely 8×8 pixels. Accordingly, the delay circuit 1501 supplies the subtracter 1502 with the DC coefficient of the preceding block. As a result, the subtracter 1502 outputs the differential (prediction error) between the current DC coefficient and that of the preceding block. (In this predictive coding, the value of the preceding block is used as the prediction value, and therefore the predicting unit is constituted by the delay circuit, as set forth above.)
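By way of illustration only, the following minimal sketch mimics the delay-and-subtract operation described above (Python; the function name and the reset-to-zero initial prediction are assumptions made for this example, not part of the disclosed circuit).

```python
def dc_prediction_errors(dc_coefficients):
    """Illustrative DPCM of quantized DC coefficients.

    The previous block's DC value serves as the prediction, so the
    "predictor" is simply a one-block delay (cf. delay circuit 1501).
    """
    previous_dc = 0  # assumed initial prediction value
    errors = []
    for dc in dc_coefficients:
        errors.append(dc - previous_dc)  # output of subtracter 1502
        previous_dc = dc                 # delayed by one block
    return errors

# Example: DC coefficients of four successive 8x8 blocks
print(dc_prediction_errors([52, 55, 55, 60]))  # [52, 3, 0, 5]
```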
In accordance with a DC Huffman code table 1403, the one-dimensional Huffman coding circuit 1402 applies variable-length coding to the prediction error signal supplied by the DPCM circuit 1401 and supplies a DC Huffman code to a multiplexer 1410.
An AC (alternating current) coefficient (a coefficient other than the DC coefficient) quantized by the quantization unit 1105 is zigzag-scanned in order from coefficients of lower order, as shown in FIG. 34, by means of a scan converting circuit 1404, and the output of the scan converting circuit 1404 is applied to a non-zero coefficient detector circuit 1405. The latter determines whether the quantized AC coefficient is “0” or not. If the AC coefficient is “0”, a count-up signal is supplied to a run-length counter 1406, thereby incrementing the counter.
If the coefficient is other than “0”, however, a reset signal is applied to the run-length counter 1406 to reset the counter, and the coefficient is split into a group number SSSS and annexed bits, as shown in FIG. 37, by a grouping circuit 1407. The group number SSSS is supplied to a two-dimensional Huffman coding circuit 1408, and the annexed bits are supplied to the multiplexer 1410.
The run-length counter 1406 counts a run length of “0”s and supplies the two-dimensional Huffman coding circuit 1408 with the number NNNN of “0”s between non-zero coefficients. In accordance with the AC Huffman code table 1409, the two-dimensional Huffman coding circuit 1408 applies variable-length coding to the “0” run length NNNN and the non-zero coefficient group number SSSS and supplies the multiplexer 1410 with an AC Huffman code.
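As an illustration of this run-length and grouping operation, a simplified sketch follows (Python; the Huffman coding itself is omitted, and the annexed bits are represented simply by the coefficient value, which is a simplification of the actual scheme).

```python
def run_length_group(ac_coefficients):
    """Illustrative run-length / group-number formation for zigzag-scanned,
    quantized AC coefficients (simplified; Huffman coding itself omitted)."""
    symbols = []   # (NNNN, SSSS) pairs fed to the two-dimensional Huffman coder
    annexed = []   # annexed bits identifying the coefficient within its group
    run = 0        # run-length counter 1406: number of consecutive zeros
    for c in ac_coefficients:
        if c == 0:
            run += 1                      # count-up signal
        else:
            ssss = abs(c).bit_length()    # group number: bits needed for |c|
            symbols.append((run, ssss))   # "0"-run NNNN and group SSSS
            annexed.append(c)             # annexed bits (simplified representation)
            run = 0                       # reset signal
    return symbols, annexed

print(run_length_group([0, 0, 3, 0, -1, 0, 0, 0, 2]))
# -> ([(2, 2), (1, 1), (3, 2)], [3, -1, 2])
```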
The multiplexer 1410 multiplexes the DC Huffman code, AC Huffman code and annexed bits of one block (8×8 input pixels) and outputs compressed image data from its output terminal 1411.
Accordingly, the compressed data outputted by the output terminal 1411 is stored in a memory, and at read-out the data is expanded by a reverse operation, thereby making it possible to reduce memory capacity.
In the example of the prior art described above, however, variable-length coding (VLC) is used in the coding units (the one-dimensional Huffman coding circuit 1402 and the two-dimensional Huffman coding circuit 1408). Consequently, the code length (information quantity) of one block of the DCT is not constant, and the correspondence between memory addresses and blocks is complicated. Combining images in memory, such as the overlapping of images shown in FIG. 35 and the partial overlaying of images shown in FIG. 36, is therefore very difficult.
In the frame memory of a page printer, this difficulty leads to a problem of a complicated correspondence between a frame address and block position on a page.
Further, in the example of the prior art described above, DPCM is used in the DC coefficient after DCT. Consequently, in a case where there is a partial block overlay, decoding must be performed retroactively back to the block at which the prediction value of DPCM is reset (namely the block at which an operation between blocks has not been performed). In addition, overlaying of the DC coefficient must be performed up to the block at which the next DPCM is reset, in such a manner that the overlaying will not cause the prediction value at the time of coding to differ from that at the time of decoding. This complicates the processing procedure for combining images in memory and prolongs the necessary computation time, thereby making it even more difficult to combine images in memory.
SUMMARY OF THE INVENTION
It is a purpose of the present invention to provide an image processing apparatus and method which make it possible to combine compressed images in memory.
According to the present invention, the foregoing object is attained by providing an image processing apparatus comprising orthogonal transformation means for orthogonally transforming inputted data in block units, quantizing means for quantizing the data orthogonally transformed by the orthogonal transformation means, coding means for variable-length coding the data quantized by the quantizing means, and control means for controlling the quantity of the coded data output from the coding means to be less than a predetermined quantity in block units.
According to the other aspect of the present invention, the foregoing object is attained by providing an image processing method comprising the steps of an orthogonal transformation step of orthogonally transforming inputted data in block units, a quantizing step of quantizing the data orthogonally transformed at the orthogonal transformation step, and a coding step of variable-length coding the data quantized at the quantizing step by performing control in such a manner that the information quantity of the data becomes a predetermined quantity in block units.
In accordance with the present invention as described above, the orthogonal transformation means subjects the inputted data to an orthogonal transformation in block units, the quantizing means quantizes the data orthogonally transformed by the orthogonal transformation means, and the coding means subjects the data quantized by the quantizing means to variable-length coding. In the coding means, the control means executes control in such a manner that the quantity of the coded data output from the coding means is less than a predetermined quantity.
The invention is particularly advantageous since memory control is greatly facilitated and it becomes possible to combine compressed image data in a frame memory.
It is another purpose of the present invention to provide an image processing apparatus and method in which optimum quantization conforming to output characteristics and human visual characteristics is selected, whereby band-compensated excellent compression can be carried out up to the threshold frequency.
According to the present invention, the foregoing object is attained by providing an image processing apparatus for controlling the data quantity of every block to be less than a predetermined value when inputted data is coded, comprising transformation means for orthogonally transforming the inputted data in block units, quantizing means for quantizing the data orthogonally transformed by the transformation means, scanning means for scanning the data quantized by the quantizing means, coding means for performing variable-length coding based upon the data scanned by the scanning means, counting means for counting quantity of coded data generated by the coding means, comparing means for comparing the code quantity counted by the counting means and a predetermined quantity for every block, and changeover means for changing over quantization conditions of the quantizing means based upon results of comparison performed by the comparing means.
According to the other aspect of the present invention, the foregoing object is attained by providing an image processing method for controlling data quantity of every block to be less than a predetermined value when inputted data is coded, comprising the steps of a transforming step of orthogonally transforming the inputted data in block units, a quantizing step of quantizing the data orthogonally transformed at the transforming step, a scanning step of scanning the data quantized at the quantizing step, a coding step of performing variable-length coding based upon the data scanned at the scanning step, a counting step of counting the quantity of code generated when variable-length coding is performed at the coding step, a comparing step of comparing the data quantity counted at the counting step and a predetermined quantity for every block, and a changeover step of changing over quantization conditions of the quantizing step based upon results of comparison performed at the comparing step.
In accordance with the present invention as described above, the transformation means subjects the inputted data to an orthogonal transformation in block units, the quantizing means quantizes the data orthogonally transformed by the transformation means, the scanning means scans the data quantized by the quantizing means, the coding means performs variable-length coding based upon the data scanned by the scanning means, the counting means counts the quantity of coded data generated by the coding means, the comparing means compares the code quantity counted by the counting means and a predetermined quantity for every block, and the changeover means changes over quantization conditions of the quantizing means based upon results of comparison performed by the comparing means.
The invention is particularly advantageous since optimum quantization conforming to output characteristics and human visual characteristics is selected, whereby band-assured excellent compression can be carried out up to the threshold frequency.
It is another purpose of the present invention to provide an image processing apparatus and method in which memory management is facilitated and a degradation in picture quality is held to a minimum by using a plurality of quantization tables and compressing data to a desired amount.
According to the present invention, the foregoing object is attained by providing an image processing apparatus having a plurality of quantization tables for quantizing image data in block units comprising quantizing means for quantizing the image data in block units based upon any of the quantization tables, counting means for counting data quantity of the quantized image data quantized by the quantizing means in block units, and selecting means for selecting a quantization table in dependence upon the result of counting performed by the counting means.
In a preferred embodiment, the selecting means is adapted to select quantization tables in a direction in which the data quantity is decreased or increased.
In another preferred embodiment, the selecting means includes memory means for storing an index of the selected quantization table.
According to the other aspect of the present invention, the foregoing object is attained by providing an image processing method using a plurality of quantization tables for quantizing image data in block units comprising the steps of a quantizing step of quantizing the image data in block units based upon the quantization tables, a counting step of counting data quantity of the quantized image data quantized at the quantizing step, and a selecting step of selecting a quantization table in dependence upon the result of counting performed at the counting step.
In accordance with the present invention as described above, the image data in block units is quantized based upon the quantization tables, and the data quantity of the quantized coded image data is counted. A quantization table is selected in dependence upon the result of counting, and data is quantized to the desired data quantity.
The invention is particularly advantageous since memory management is facilitated and a degradation in picture quality is held to a minimum by using a plurality of quantization tables and compressing data to a desired amount.
It is another purpose of the present invention to provide an image processing apparatus and method whereby images can be edited and treated with ease.
According to the present invention, the foregoing object is attained by providing an image processing apparatus comprising input means for inputting image data, coding means for subjecting the image data inputted by the input means to a plurality of coding processing operations which differ from one another, and selecting means for selecting one result from among the results of the plurality of coding processing operations obtained by the coding means.
According to the other aspect of the present invention, the foregoing object is attained by providing an image processing method comprising the steps of an input step of inputting image data, a coding step of subjecting the image data inputted at the input step to a plurality of coding processing operations which differ from one another, and a selecting step of selecting one result from among the results of the plurality of coding processing operations obtained at the coding step.
In accordance with the present invention as described above, the input means inputs the image data, the coding means subjects the image data inputted by the input means to a plurality of coding processing operations which differ from one another, and the selecting means selects one result from among the results of the plurality of coding processing operations obtained by the coding means.
The invention is particularly advantageous since an image processing apparatus featuring easy editing and treating of data can be realized.
It is another purpose of the present invention to provide an image processing apparatus and method whereby it is possible to combine compressed images in memory.
According to the present invention, the foregoing object is attained by providing an image processing apparatus comprising orthogonal transformation means for orthogonally transforming input image data in block units, coding means for quantizing the data orthogonally transformed by the orthogonal transformation means and coding a quantized transformation coefficient to generate variable-length coded data of which the quantity is less than a predetermined quantity in block units, and memory means for storing the data which has been coded by the coding means, wherein data stored in the memory means is capable of being read out in block units.
According to the other aspect of the present invention, the foregoing object is attained by providing an image processing method comprising the steps of an orthogonal transformation step of orthogonally transforming input image data in block units, a coding step of quantizing the data orthogonally transformed at the orthogonal transformation step and coding a quantized transformation coefficient to generate variable-length coded data of which the quantity is less than a predetermined quantity in block units, a storing step of storing the data which has been coded at the coding step, and a reading step of reading the data, which has been stored at the storing step, in block units.
In accordance with the present invention as described above, one block of compressed data can be stored within the predetermined quantity (S), and coding to the predetermined quantity (S) is possible irrespective of the fact that variable-length coding is used. As a result, operation is such that accessing of the memory means is performed in units of the predetermined quantity (S).
The invention is particularly advantageous since one block of compressed data can be stored within the predetermined quantity of S bits irrespective of the fact that variable-length coding is used, and image processing becomes possible solely in units of S bits. Since accessing of the memory means is also performed in units of S bits, memory control is greatly facilitated, and it becomes possible to combine image-processed data in the memory means. In addition, since the processing units are made a fixed length in block units, the time required for image processing is rendered substantially constant for every block. Therefore, there is no need for a buffer for rendering constant the transmission rate of data after the decoding necessary for such image processing as, e.g., variable-length coding. This makes possible a great simplification in hardware.
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram illustrating the overall construction of an image processing apparatus which is a typical embodiment of the present invention;
FIGS. 2A and 2B are block diagrams illustrating the construction of an image memory unit in accordance with a first embodiment of the present invention;
FIG. 3A is a flowchart for describing a coding operation in accordance with the first embodiment;
FIG. 3B is a flowchart for describing a decoding operation in accordance with the first embodiment;
FIGS. 4A and 4B are block diagrams illustrating the construction of the image memory unit in accordance with a second embodiment of the present invention;
FIGS. 5A and 5B are block diagrams illustrating the construction of the image memory unit in accordance with a third embodiment of the present invention;
FIG. 6 is a block diagram illustrating the construction of the image memory unit in accordance with a fourth embodiment of the present invention;
FIG. 7 is a flowchart for describing a coding operation in accordance with the fourth embodiment;
FIG. 8 is a block diagram illustrating the construction of a VLC circuit in accordance with the fourth embodiment of the present invention;
FIG. 9 is a diagram for describing zigzag scanning in accordance with the fourth embodiment;
FIG. 10 is a block diagram illustrating the construction of the image memory unit in accordance with a fifth embodiment of the present invention;
FIG. 11 is a block diagram illustrating the construction of the image memory unit in accordance with a sixth embodiment of the present invention;
FIGS. 12A and 12B are block diagrams illustrating the construction of the image memory unit in accordance with a seventh embodiment of the present invention;
FIG. 13 is a diagram showing the contents of a quantization table illustrated in FIG. 12;
FIG. 14 is a block diagram illustrating the construction of a coding unit shown in FIG. 12;
FIG. 15 is a block diagram illustrating the construction of a decoding unit shown in FIG. 12;
FIG. 16 is a flowchart illustrating a coding and storing operation in accordance with the seventh embodiment;
FIGS. 17A and 17B are block diagrams illustrating the construction of the image memory unit in accordance with an eighth embodiment of the present invention;
FIG. 18 is a circuit diagram illustrating an example of the construction of EOB detecting circuits of the eighth embodiment;
FIG. 19 is a circuit diagram illustrating an example of the construction of a coding selecting circuit of the eighth embodiment;
FIG. 20 is a timing chart associated with the coding selecting circuit of the eighth embodiment;
FIG. 21 is a block diagram illustrating the construction of the image memory unit in accordance with a ninth embodiment of the present invention;
FIGS. 22A and 22B are block diagrams illustrating the construction of the image memory unit in accordance with a tenth embodiment of the present invention;
FIG. 23 is a block diagram illustrating the construction of coding circuits according to the tenth embodiment;
FIG. 24 is a block diagram illustrating the construction of decoding circuits according to the tenth embodiment;
FIGS. 25A and 25B are block diagrams illustrating the construction of the image memory unit in accordance with a block mapping method;
FIGS. 26A and 26B are block diagrams illustrating the construction of the image memory unit in accordance with an 11th embodiment of the present invention;
FIGS. 27A and 27B are block diagrams illustrating the construction of the image memory unit in accordance with a 12th embodiment of the present invention;
FIG. 28 is a block diagram illustrating the construction of a coding circuit according to the 12th embodiment;
FIG. 29 is a block diagram illustrating the construction of a decoding circuit according to the 12th embodiment;
FIGS. 30A and 30B are block diagrams illustrating the construction of the image memory unit in accordance with a 13th embodiment of the present invention;
FIG. 31 is a block diagram illustrating the construction of a decoding circuit according to the 13th embodiment;
FIG. 32 is a block diagram illustrating the construction of an image memory circuit which uses the coding method of JPEG baseline system according to the prior art;
FIG. 33 is a block diagram illustrating the detailed construction of a DPCM circuit;
FIG. 34 is a diagram for describing a scanning method of coefficients of orthogonal transformation;
FIGS. 35 and 36 are diagrams showing examples of image combination such as the overlapping of images and partial transcription of images; and
FIG. 37 is a diagram for describing two-dimensional Huffman encoding of AC coefficients by JPEG baseline system.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
[General Construction of the Apparatus]
FIG. 1 is a block diagram illustrating the overall construction of an image processing apparatus used in the 13 embodiments described below.
As shown in FIG. 1, the apparatus includes an image input unit 10 constituted by an image reader such as an image scanner which includes a CCD sensor, or an interface of an external item of equipment such as a host computer, an SV camera or a video camera, etc. Image data inputted from the image input unit 10 is supplied to an input terminal 100 of an image memory unit 11 illustrated in FIG. 2, FIGS. 4 through 6, FIGS. 10 through 12, FIG. 17, FIGS. 21 and 22, FIGS. 26 and 27, and FIG. 30. The apparatus further includes a control panel 12 which an operator uses to designate an output destination of the image data, and an output controller 13. The control panel 12 is for selecting the output destination of the image data, and the output controller 13 outputs a synchronizing signal for memory read-out. The synchronizing signal is, e.g., an ITOP signal from the output controller, which together with an image output unit 16 constructs a printer engine.
The image memory unit 11 has an output terminal 25 and an input terminal 26 for the synchronizing signal of the image memory unit. The apparatus is further provided with an image display unit such as a display device. Numeral 15 denotes a transmitting unit for transmitting the image data via a public line or LAN (local area network). The image output unit 16 is, for example, a laser beam printer which irradiates a photosensitive body with a laser beam to form a latent image and then converts the latent image into a visible image. The image output unit 16 can be an ink-jet printer, a thermal printer or a dot printer, etc.
[First Embodiment]
The operation of a first embodiment will now be described.
FIG. 2 is a block diagram showing the detailed construction of the image memory unit 11 of FIG. 1 according to the first embodiment of the invention.
As shown in FIG. 2, the image memory unit 11 has an input terminal 100 for image data, and a block forming circuit 102 which forms the input data from the input terminal 100 into blocks of, say, 8×8 pixels each. The block forming circuit 102 is connected to a DCT (discrete cosine transformation) circuit 103 which performs an orthogonal transformation. The DCT circuit 103 is connected to a scan converting circuit 104 which, as shown in FIG. 34, subjects each item of data that has undergone DCT processing to a zigzag scan conversion. The scan converting circuit 104 is connected to a quantization unit 105 which subjects the zigzag-scanned data to linear quantization at the step width of a quantization table 106, described below. The quantization table 106 shall be referred to as a “Q table” hereinafter.
The output of the quantization unit 105 is connected to a variable-length coding (hereinafter referred to as “VLC”) circuit 109, which in turn is connected to a data-length counter 110 for counting the cumulative value, block by block, of the data length of the coded data outputted by the VLC circuit 109. The data-length counter 110 is connected to a G discriminating circuit 111 which, based upon the output of the counter 110, determines whether the sum total of the quantity of coded data of the prevailing block is less than G bits (G: a predetermined value). An S factor 107 performs scaling of the values in the Q table 106. In accordance with the scaling value from the S factor 107, a multiplier 108 controls the Q-table values, namely the step width of quantization. The VLC circuit 109 is further connected to a buffer 112, which is for storing and holding G bits of data. The output of the buffer 112 is connected to a frame memory 113, which is for storing one frame of data delivered by the buffer 112. An index memory 114 stores the result outputted by the G discriminating circuit 111 in correspondence with the blocks stored in the frame memory 113.
Numeral 26 denotes an input terminal for inputting a synchronizing signal from an external device (not shown). In accordance with the synchronizing signal from the input terminal 26, a memory control circuit 115 controls the frame memory 113. An inverse VLC circuit 116 subjects the coded data, which has been outputted by the frame memory 113, to decoding which corresponds to the coding performed by the VLC circuit 109. An inverse quantization circuit 117 subjects the decoded data to inverse quantization (Q−1). A quantization table 118 is used when inverse quantization is performed by the inverse quantization circuit 117. A delay circuit 120 delays timing when inverse quantization is performed.
An S factor 119 is for generating, block by block, an S factor which corresponds to the index, of every block, read out of the index memory 114. A multiplier 121 is for scaling the values of the Q table 118, namely the step width of quantization. A scan converting circuit 122 subjects the output of the inverse quantization circuit 117 to an inverse transformation which corresponds to the scan converting circuit 104. An inverse DCT circuit 123 subjects the output of the scan converting circuit 122 to an inverse discrete cosine transformation which corresponds to the DCT circuit 103. A rasterization unit 124 is for converting the image data outputted by the inverse DCT circuit into raster scanning data. The output of the rasterization circuit 124 is delivered from an output terminal 25.
FIG. 3A is a flowchart for describing the coding operation of the first embodiment, andFIG. 3B is a flowchart for describing the decoding operation of the first embodiment.
First, in coding, the block forming circuit 102 forms the input data from the input terminal 100 into blocks of, e.g., 8×8 pixels, at step S1. Next, upon receiving the data put into block form by the block forming circuit 102, the DCT circuit 103 subjects the data to an orthogonal transformation, namely DCT processing, and outputs the result to the scan converting circuit 104 at step S2. As shown in FIG. 3A, the scan converting circuit 104 subjects each item of data that has undergone DCT processing to a zigzag scan conversion and then outputs the result to the quantization unit 105 at step S3. The quantization unit 105 performs linear quantization at a step width indicated in the Q table 106 and outputs the resulting data to the VLC circuit 109 at step S4.
The VLC circuit 109 outputs the coded data to the buffer 112 and outputs the data length of this data to the data-length counter 110. At step S5, it is examined whether the beginning of a block is detected. If the beginning of a block is detected, the process proceeds to step S7, at which the cumulative value is reset to zero, and then proceeds to step S6. On the other hand, if the beginning of a block is not detected, the process proceeds directly to step S6. At step S6, the data-length counter 110 counts the cumulative value, block by block, of the data length of the coded data outputted by the VLC circuit 109; that is, the data length of one block of coded data is accumulated by the data-length counter 110. Based upon the output of the data-length counter 110, the G discriminating circuit 111 determines at step S8 whether the sum total of the quantity of coded data of the prevailing block, namely the cumulative value, is less than G bits. If it is greater than G bits, then the S factor 107 is changed over at step S9. In conformity with the changeover of the S factor 107, calculation of the Q-table value, namely of the quantization step width, is performed; at this time the S factor 107 controls the quantization step width by scaling the values in the Q table 106. If the decision rendered at step S8 is that the cumulative value is less than G bits, the process proceeds to step S10. At step S10, if the end of the block is detected, then the coded data planted in the buffer 112 by the VLC circuit 109 is stored in the frame memory 113 (steps S11, S12). On the other hand, if the end of the block is not detected, the process returns to step S6.
By virtue of this operation, block-by-block control of the quantity of coded data (the quantity of VLC data) becomes possible. As a result of the foregoing operation, the G discriminating circuit 111 controls the S factor 107 block by block in such a manner that the sum total of the quantity of coded data within a block is maximized but made less than G bits. The buffer 112 buffers the coded data, block by block, with respect to the S factor 107 obtained as set forth above. The buffer writes this data in the frame memory 113. At step S13, the G discriminating circuit 111 writes the index of the S factor 107 of every block in the index memory 114 for the sake of later decoding. The foregoing operation is repeated so that one frame of coded data and indices are written in the frame memory 113 and index memory 114, respectively.
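A minimal sketch of this block-by-block rate control follows (Python; the function names, the complete re-coding per S factor, the toy coder, and the fallback to the coarsest scaling are simplifications assumed purely for illustration).

```python
def code_block_within_budget(dct_block, q_table, s_factors, g_bits, vlc_encode):
    """Illustrative per-block rate control (simplified).

    s_factors is assumed to be ordered from fine to coarse scaling; vlc_encode
    is a hypothetical variable-length coder returning a bit string.
    """
    for index, s in enumerate(s_factors):
        quantized = [round(c / (q * s)) for c, q in zip(dct_block, q_table)]
        code = vlc_encode(quantized)
        if len(code) <= g_bits:          # decision of G discriminating circuit 111
            return code, index           # coded data and the S-factor index
    return code, index                   # coarsest scaling as a fallback (assumption)

# Hypothetical usage with a trivial "coder" that spends 4 bits per non-zero value:
toy_vlc = lambda q: "0" * (4 * sum(1 for v in q if v != 0))
code, idx = code_block_within_budget([80, -3, 0, 5], [16, 11, 10, 16], [1, 2, 4], 8, toy_vlc)
print(len(code), idx)   # code length within the budget, and the chosen S-factor index
```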
Decoding will be described next.
When a synchronizing signal from an external device (not shown) enters the input terminal 26 at step S21, the memory control circuit 115, in accordance with this synchronizing signal, controls the frame memory 113 so as to perform read-out in units of G bits from the beginning of the frame memory 113 and, at the same time, controls the index memory 114 in such a manner that an index corresponding to the block read out of the frame memory 113 is read out of the index memory 114 (step S22). The coded data read out of the frame memory 113 is decoded at step S23 by the inverse VLC circuit 116, which corresponds to the VLC circuit 109. The decoded data is subjected to inverse quantization by the inverse quantization circuit 117 at step S24. At this time the quantization table 118 is used, and the index of the S factor corresponding to the block which has undergone inverse quantization is used upon being read out of the index memory 114. The delay circuit 120 adjusts timing when inverse quantization is performed.
The S factor 119 generates, block by block, an S factor corresponding to the index of every block read out of the index memory 114 (step S25). The multiplier 121 scales the value of the quantization table 118, namely the quantization step width, and supplies the result to the inverse quantization unit 117 at step S26. The scan converting circuit 122 subjects the output of the inverse quantization unit 117 to an inverse transformation corresponding to the scan converting circuit 104, and delivers its output to the inverse DCT circuit 123 for an inverse discrete cosine transformation corresponding to the DCT circuit 103 (step S27). The image data that has been returned to real-space data by the inverse DCT circuit 123 is converted into raster scanning data by the rasterization circuit 124 at step S28. The raster scanning data is outputted from the output terminal 25 at step S29.
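For illustration, a corresponding decoding sketch is given below (Python; vlc_decode and inverse_dct stand in for circuits 116 and 123 and are hypothetical, as is the representation of the memories as simple sequences).

```python
def decode_block(frame_memory, index_memory, block_no, g_bits, q_table,
                 s_factors, vlc_decode, inverse_dct):
    """Illustrative fixed-length block read-out and decoding (simplified).

    Because every block occupies exactly g_bits in frame_memory, the block's
    code is located by simple address arithmetic.
    """
    start = block_no * g_bits                      # G-bit units => direct addressing
    code = frame_memory[start:start + g_bits]      # read one block of coded data
    s = s_factors[index_memory[block_no]]          # S factor recorded at coding time
    quantized = vlc_decode(code)
    coefficients = [v * q * s for v, q in zip(quantized, q_table)]  # inverse quantization
    return inverse_dct(coefficients)               # back to an 8x8 pixel block
```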
Thus, irrespective of the fact that variable-length coding is used, one block of compressed data can be stored within a predetermined value of G bits, and decoding (expansion) can be performed only in G bits. This makes it possible to access the frame memory 113 in units of G bits. Accordingly, memory control is greatly facilitated and compressed image data can be combined in the frame memory 113.
In addition, since a fixed length is achieved in block units, the time required for decoding also is rendered substantially constant every block. Consequently, there is no need for a buffer for rendering constant the transmission rate of data after the decoding necessary for variable-length coding. This makes possible a great simplification in hardware.
[Second Embodiment]
FIG. 4 is a block diagram showing the detailed construction of the image memory unit 11 of FIG. 1 according to a second embodiment of the invention. Blocks in FIG. 4 indicating functions identical with those shown in FIG. 2 are designated by like reference characters. In the first embodiment set forth above, the data quantity of every block is controlled by adjusting the S factor, as illustrated in FIG. 2. In this embodiment, however, the data quantity of every block is controlled by changing over the Q table.
The description will focus on the portions in FIG. 4 that differ from those of the first embodiment of FIG. 2. Numeral 202 denotes a Q-table group having a plurality of Q tables. Numeral 201 denotes a Q-table selecting circuit which, in accordance with the cumulative value from the data-length counter 110, selects one Q table from the Q-table group 202. Numeral 203 designates a Q-table group having a plurality of Q tables for selecting a Q table corresponding to the index, of every block, read out of the index memory 114.
The operation of the second embodiment will now be described in brief.
In coding, the data-length counter 110 counts the sum total of the data length block by block, just as in the embodiment of FIG. 2. The Q-table selecting circuit 201 determines whether the output (cumulative value) of the data-length counter 110 is less than G and selects one Q table from among the Q-table group 202. The Q-table selecting circuit 201 performs the Q-table selection in such a manner that the sum total of the data quantity within a block becomes less than G bits and maximum. The coded data corresponding to the Q table thus selected is buffered in the buffer 112, after which it is written in the frame memory 113 block by block. At the same time, the index corresponding to the Q table of this block is written in the index memory 114.
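A simplified sketch of this selection rule follows (Python; vlc_encode is a hypothetical stand-in for the VLC circuit 109, and applying the "within G bits and maximum" criterion over the whole table group is an illustrative simplification).

```python
def select_q_table(dct_block, q_table_group, g_bits, vlc_encode):
    """Illustrative Q-table selection (simplified): choose the table whose
    code quantity is largest while still within g_bits."""
    best = None
    for index, q_table in enumerate(q_table_group):
        quantized = [round(c / q) for c, q in zip(dct_block, q_table)]
        code = vlc_encode(quantized)
        if len(code) <= g_bits and (best is None or len(code) > len(best[0])):
            best = (code, index)   # coded data and the index written to index memory 114
    return best
```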
In decoding, the coded data read out of the frame memory 113 block by block in units of G bits is decoded by the inverse VLC circuit 116 corresponding to the VLC circuit 109, after which the decoded data is inversely quantized by the inverse quantization circuit 117. At this time, the Q table read out of the index memory 114 (the Q table corresponding to the index of the block which has been inversely quantized) is selected from the Q-table group 203. The operation from inverse quantization onward is the same as in the first embodiment, and raster data is finally outputted from the output terminal 25.
Thus, the second embodiment provides effects similar to those of the first embodiment.
[Third Embodiment]
FIG. 5 is a block diagram showing the detailed construction of the image memory unit 11 of FIG. 1 according to a third embodiment of the invention. Blocks in FIG. 5 having functions identical with those shown in FIG. 2 are designated by like reference characters.
The third embodiment is so adapted that a Huffman coding circuit is adopted as the VLC circuit 109 of the first and second embodiments, and the data quantity of every block stored in the frame memory 113 is controlled upon changing over a Huffman code table. In FIG. 5, numeral 301 denotes the Huffman coding circuit; 306, a Huffman decoding circuit, namely an inverse Huffman coding circuit, for performing decoding corresponding to the Huffman coding circuit 301; and 305, a Huffman code table group having a plurality of Huffman code tables. Numeral 303 denotes a statistical processing circuit which obtains, for every block, the variance of DCT coefficients after quantization. Numeral 304 designates a table selecting circuit which, based upon the result outputted by the statistical processing circuit 303, selects one Huffman code table from the Huffman code table group 305. Numeral 307 denotes a Huffman code table group having a plurality of Huffman code tables for selecting a Huffman code table corresponding to the index, of every block, read out of the index memory 114.
Hereinafter, only aspects of the third embodiment that differ from the first embodiment will be described.
In coding, the Huffman coding circuit 301 Huffman-codes the output of the quantization unit 105 and outputs the result to a buffer 302. The Huffman code table used at this time is selected from the Huffman code table group 305. The method of selection is as follows: The quantization unit 105 delivers its output to both the Huffman coding circuit 301 and the statistical processing circuit 303. The variance of the DCT coefficients after quantization is found by the statistical processing circuit 303. The table selecting circuit 304 selects the optimum Huffman code table that conforms to the variance value from the table group 305 and supplies this Huffman code table to the Huffman coding circuit 301. In addition, it stores the table index of this block in the index memory 114. Though the buffer 302 buffers the Huffman-coded data every block, there is no assurance in Huffman coding that the total quantity of data will be less than G bits, namely that such a coding table will always exist and be selected. Accordingly, in a case where the total quantity of block data is greater than G bits, that portion of the data that has exceeded G bits is rounded down. In other words, the buffer capacity is assumed to be G bits, and storage of data in excess of (G+1) bits is prohibited. As a result, the buffer 302 stores block-by-block data of a maximum of G bits in the frame memory 113.
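The following minimal sketch illustrates this variance-based table selection and the rounding down to G bits (Python; the variance-to-table mapping, the assumption that huffman_encode returns a bit string, and the function names are all assumptions made only for this example).

```python
def huffman_code_block(quantized_coefficients, table_group, g_bits, huffman_encode):
    """Illustrative sketch (simplified): pick a Huffman table from the variance
    of the quantized coefficients, then round the code down to g_bits."""
    n = len(quantized_coefficients)
    mean = sum(quantized_coefficients) / n
    variance = sum((c - mean) ** 2 for c in quantized_coefficients) / n
    index = min(int(variance // 100), len(table_group) - 1)   # hypothetical mapping
    code = huffman_encode(quantized_coefficients, table_group[index])
    return code[:g_bits], index   # excess beyond G bits is rounded down; index is stored
```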
Operation at the time of decoding will now be described.
In synchronization with the synchronizing signal at input terminal 26, one block of data read out of the frame memory 113 in units of G bits is subjected to decoding processing by the inverse Huffman coding circuit 306, after which an output is obtained at output terminal 25 via processing similar to that of the first embodiment. As for the code table used at the time of Huffman decoding processing, the index corresponding to the prevailing block is read out of the index memory 114 and the code table corresponding to this index is selected from the table group 307. Since there are cases where data of (G+1) bits or greater is rounded down, as mentioned above, the inverse Huffman coding circuit 306 is reset at the beginning of the block.
Thus, this embodiment is capable of providing effects similar to those of the first embodiment.
Though the Huffman code table is selected in accordance with the variance value in the third embodiment, it is possible to adopt an arrangement in which coding is carried out for each Huffman code table and the Huffman code table selected is that for which the information quantity (the sum total of code length) within a block is minimized.
According to the first through third embodiments set forth above, selection of the S factor, Q table and Huffman code table is performed serially. However, these may be selected in parallel fashion to select the optimum coded data and parameters.
In addition, in the first through third embodiments, selection of the S factor, Q table and Huffman code is performed individually. However, it goes without saying that these may be combined.
Further, in the first through third embodiments, the index memory 114 is provided separate from the frame memory 113. However, an arrangement can be adopted in which multiplexed indexing is applied to the coded data, which is then stored in the frame memory. In this case, the index memory 114 will be unnecessary.
Further, the coding method is not limited to the above-described ADCT (advanced discrete cosine transformation). For example, other forms of variable-length coding, such as arithmetic coding, can be employed.
Moreover, the processing described above can be performed by computer software as well as by a hardware configuration.
[Fourth Embodiment]
FIG. 6 is a block diagram showing the detailed construction of the image memory unit 11 according to a fourth embodiment of the invention. FIG. 9 is a diagram for describing the zigzag scanning of the fourth embodiment.
As shown in FIG. 6, the image memory unit 11 has the input terminal 100 for inputting image pixel data (multivalued information having density information), and a block forming circuit 401 which forms the input data from the input terminal 100 into blocks of, say, 8×8 pixels each. The block forming circuit 401 is constituted by line memories for applying a delay equivalent to several lines. The block forming circuit 401 is connected to a DCT circuit 402 which performs an orthogonal transformation by DCT. The DCT circuit 402 is connected to a buffer 403, which stores transformation coefficients in block units. The buffer 403 is connected to a quantization unit 404 which, in accordance with Q (quantization) step information applied by a Q (quantization) table, subjects the transformation coefficients stored in the buffer 403 to linear quantization (Q). Numeral 405 denotes a Q-table group equipped with a plurality of Q tables supplied to the quantization unit 404.
Numeral 408 denotes a coding circuit (hereinafter referred to as a “PCM circuit”) which obtains the differential (error) between a DC coefficient, from among the transformation coefficients quantized by the quantization unit 404, and the DC component of the preceding block. Numeral 409 denotes a scan converting circuit for sequentially zigzag scanning the AC coefficients (coefficients other than DC coefficients), quantized by the quantization unit 404, from coefficients of lower order, as illustrated in FIG. 9. Numeral 410 denotes a VLC circuit which, in accordance with a Huffman table 411 described below, applies variable-length coding to the quantized coefficients zigzag-scanned by the zigzag scanning circuit 409. Numeral 406 denotes an address counter for counting the addresses outputted by the VLC circuit 410, namely a run length of (NNNN+1). Numeral 407 denotes a code-quantity counter for counting a code quantity outputted by the VLC circuit 410. The aforementioned Huffman table 411 possesses data set in such a manner that the code quantity is minimized in conformity with the frequency of occurrence of the quantized coefficients inputted to the VLC circuit 410.
A discriminating circuit 412 determines whether the sum of the count in the code-quantity counter 407 and the code quantity presently being generated is greater than a predetermined value. The discriminating circuit 412 also determines whether scanning and coding within a block have ended. If the scanning and coding have not ended, the zigzag scanning circuit 409 is so informed. A Q-table changeover circuit 413 changes over the Q table of the Q-table group 405 based upon the determination made by the discriminating circuit 412. A buffer 414 temporarily stores the coded data outputted by the PCM circuit 408. An index memory 415 stores an index signal (information indicating which Q table has been selected) sent from the Q-table changeover circuit 413. A memory 416 stores, in pairs, the coded data from the buffer 414 and the index signal from the index memory 415. The output terminal 25 is for outputting the data from the memory 416.
The VLC circuit 410 will now be described in detail.
FIG. 8 is a block diagram illustrating the construction of the VLC circuit 410 according to the fourth embodiment. As shown in FIG. 8, the VLC circuit 410 includes a significant-coefficient detecting circuit 501 for detecting whether a quantized coefficient inputted to the VLC circuit 410 is “0”, a run-length counter 502 for counting the “0” run length from the significant-coefficient detecting circuit 501, a grouping circuit 503 for splitting a quantized coefficient other than “0” from the significant-coefficient detecting circuit 501 into a group number and annexed bits and outputting these separately, and a two-dimensional Huffman coding circuit 504 which, in accordance with the aforementioned Huffman table 411, applies variable-length coding to the run length NNNN and group number SSSS respectively outputted by the run-length counter 502 and the grouping circuit 503.
The operation of the fourth embodiment will be described next.
FIG. 7 is a flowchart for describing the coding operation of the fourth embodiment.
First, multivalued information inputted from the input terminal 100 is cut into, say, a block of 8×8 pixels by the block forming circuit 401, which delivers the resulting information to the DCT circuit 402. In this embodiment, DCT is employed in the orthogonal transformation, but it is of course permissible to use another method of orthogonal transformation. The DCT-processed transformation coefficient is temporarily stored in the buffer 403 and then supplied to the quantization unit 404. This embodiment is characterized by a control method in which, in order to render a code quantity within a cut block constant, a changeover is made in the Q-table group 405, which has various types of Q tables, so as to employ the optimum Q table.
At step S31 in the flowchart of FIG. 7, “1” is substituted into the Q-table number (hereinafter referred to as a “Q No.”). In other words, the Q table is made Q No. 1. Next, two variables indicated by a, b are initialized at step S32. The variable a represents the count of the address counter 406, and the variable b represents the code-quantity count recorded by the code-quantity counter 407. Next, at step S33, linear quantization of the DCT coefficient within the buffer 403 is carried out in accordance with the Q table of Q No. 1.
The quantized DC component is coded by the PCM circuit 408, and the quantized AC component is rearranged in one dimension by the zigzag scanning circuit 409 at step S34. As in the example of the prior art, zigzag scanning entails scanning in the forward direction from lower-order to higher-order AC components. Next, while zigzag scanning is carried out, the scanned quantized coefficients are coded in the VLC circuit 410 at step S35.
In variable-length coding, the Huffman table 411, which has been set in such a manner that the code quantity is minimized in conformity with the frequency of occurrence of the inputted coefficients, is loaded, and the coefficients are subjected to entropy coding. The VLC circuit 410 will now be described as applied to the aforementioned JPEG method. In FIG. 8, the portion enclosed by the dashed line corresponds to the VLC circuit 410. First, when a quantized zigzag-scanned coefficient enters from the input terminal 500 of FIG. 8, the non-zero coefficient detecting circuit 501 determines whether the inputted quantized coefficient is “0”. If the coefficient is “0”, the detecting circuit 501 supplies a count-up signal to the run-length counter 502, whereby the count in counter 502 is incremented. In the case of a quantized coefficient other than “0”, the detecting circuit 501 supplies a reset signal to the run-length counter 502, whereby the count in the counter 502 is reset and the quantized coefficient is split into a group number and annexed bits by the grouping circuit 503. The group number SSSS is supplied to the Huffman coding circuit 504, and the annexed bits are supplied to the code-quantity counter 407. The run-length counter 502 is a circuit which counts the “0” run length and supplies the Huffman coding circuit 504 with the number NNNN of “0”s between non-zero coefficients. In accordance with the Huffman table 411, the Huffman coding circuit 504 subjects the supplied “0”-run length NNNN and group number SSSS of non-zero coefficients to variable-length encoding and supplies the results to the code-quantity counter 407.
In the arrangement of the VLC circuit 410, variable-length coding occurs when a quantized coefficient other than “0”, namely a non-zero coefficient, enters. The code quantity which occurs (namely the sum of the annexed bits and the code quantity of the two-dimensional Huffman coding) is denoted by n bits.
In the case where the input is a quantized coefficient of “0”, as mentioned above, no code quantity is produced. When a non-zero coefficient is inputted, the run-length counter 502 is reset and the “0”-run value NNNN is outputted. At the same time, the number of inputs, namely (NNNN+1), which is equivalent to (the “0”-run value)+(the input itself for which a non-zero coefficient has appeared), is supplied to the address counter 406. In other words, from the moment a code quantity is produced until the moment the next code quantity is produced, the number of pixels that have been zigzag-scanned is expressed by m (pixels)=(NNNN+1).
More specifically, at step S36 in FIG. 7, the code quantity n (bits) at the time of code-quantity occurrence and the number m of pixels scanned during this time are decided. Then, at step S37, the discriminating circuit 412 determines whether the sum of the count b in the code-quantity counter 407 and the code quantity n currently produced is greater than a predetermined code quantity indicated by G′. In a case where G represents the quantity of code attempting to be rendered constant within a block, the predetermined code quantity G′ is as indicated by the following equation:
G=G′+d+i  (1)
where d represents the code quantity of the DC component, and i represents an index number which indicates which Q table has been used.
By way of example, if the DC component has a fixed length of eight bits and four types of Q tables are provided in a case where G=64 bits holds, two bits are required for the index signal and therefore G′=64−8−2=54 (bits) will hold.
If (b+n)≦G′ is found to hold at step S37 in FIG. 7, namely if the predetermined code quantity G′ has not yet been attained, the operation a=a+m is performed at step S39, whereby the address counter 406 is counted up. Next, at step S40, it is determined whether a<end holds (where “end” is the address of the last pixel within the block; this would be the 63rd pixel in a block composed of 8×8 pixels). This is to determine whether scanning and coding within the block have ended. If scanning and coding have not yet ended, processing returns to step S34 and zigzag scanning is resumed from the address at which processing stopped.
If the condition b+n≦G′ is not satisfied at step S37 after the foregoing operation has been repeated, namely if the count in the code-quantity counter 407 to which the current n bits have been added is found to exceed the predetermined code quantity G′, then it is determined at step S41 whether a≧th (threshold value) holds.
Reference will be had to FIG. 9 to describe the meaning of th.
FIG. 9 illustrates zigzag scanning in a block composed of 8×8 pixels. Here numbers are assigned in the order of AC-component scanning. In this example, the 42nd component is adopted as the threshold value th.
By way of example, if the present apparatus is applied to an image output apparatus such as a printer, resolution will differ, as a matter of course, depending upon the output characteristics of the printer used. Also, when the outputted image is observed, there is a limitation upon resolving power in the high-frequency region owing to the characteristics of human vision. Accordingly, it is necessary to determine both of these characteristics in advance, based upon experience and experimentation, and establish the boundary frequency beforehand. In other words, the idea is to compensate at least coding of a band up to the boundary value without performing coding up to the 63rd component, which is the last component within the block.
In this embodiment, it is assumed, by way of example, that the boundary value is the 42nd component. That is, it is required to discern whether the coded address has reached the threshold value, wherein the threshold value (th) serves as the boundary value. This is the determination regarding a≧th indicated at step S41 in FIG. 7. If the code quantity has exceeded the predetermined value without the threshold value (th) being attained, the Q No. is counted up and quantization is performed again using a different Q table (step S42). Conversely, in a case where coding has attained the threshold value (th), this means that coding up to the boundary value has been assured, and coding ends. If coding has ended, the code quantity should be equal to G′, but there is no assurance that it will be. In such a case, various expedients are conceivable, such as “stuffing” “0”s or applying an EOB (end-of-block) signal until the code quantity becomes equal to G′. Here it is assumed that the th-setting signal 26 is outputted by the output controller 13 in FIG. 1 so that the value of th is set in the discriminating circuit 412 in dependence upon the printer used. The value of th is set to be large in the case of a high-resolution printer and small in the case of a low-resolution printer.
A characterizing feature of this embodiment is that it is necessary to gradually change over the Q table from one having a fine quantization step width to one having a coarse quantization step width. More specifically, the quantization steps are set so as to gradually become coarser in the manner Q No.=0, Q No.=1, Q No.=2, . . . , in which Q No.=0 is the finest. Conversely, the code quantity after quantization is set so as to gradually decrease whenever feedback is performed.
As a result of the foregoing, Q No. is counted up and excellent coding is performed up to the boundary value (threshold value th).
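A condensed sketch of this control loop is given below (Python; vlc_symbols is a hypothetical coder yielding, per generated code word, the code quantity n and the pixel count m=NNNN+1, and the fallback to the coarsest Q table is an assumption made for the example).

```python
def code_block_with_band_assurance(ac_coefficients, q_tables, g_prime, th, vlc_symbols):
    """Illustrative sketch of the per-block control loop (simplified).

    q_tables are assumed to be ordered from fine to coarse quantization steps.
    """
    for q_no, q_table in enumerate(q_tables):
        a = b = 0                                  # address counter / code-quantity counter
        quantized = [round(c / q) for c, q in zip(ac_coefficients, q_table)]
        exceeded = False
        for n_bits, m_pixels in vlc_symbols(quantized):
            if b + n_bits > g_prime:               # predetermined code quantity G' reached
                exceeded = True
                break
            b += n_bits
            a += m_pixels                          # m = NNNN + 1 pixels scanned
        if not exceeded or a >= th:                # whole block coded, or band up to th assured
            return q_no, b                         # selected Q No. and code quantity
    return q_no, b                                 # coarsest table as a fallback (assumption)
```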
Returning to FIG. 6, the coded data that has been stored in the buffer 414 is planted in the memory 416, along with the index signal from the index memory 415 indicating which Q table has been selected, by satisfying the above-mentioned conditions. Though a minute description of what takes place outside the present apparatus after an output is delivered from the output terminal 25 is omitted, decoding of the data that has been stored in the memory 416 is performed through a procedure which is the reverse of the operation described above.
Thus, in accordance with the fourth embodiment of the invention, it is possible to perform optimum quantization which conforms to the output characteristics of the image output unit and the visual characteristics of the human being who observes the output image. More specifically, compression in which coding of the band up to a preset threshold frequency is guaranteed can be achieved through a simple arrangement which entails merely adding a counter and a comparator.
In addition, it is possible also to shorten the time needed to select the optimum quantization conditions.
[Fifth Embodiment]
FIG. 10 is a block diagram showing the construction of the image memory unit 11 of FIG. 1 according to a fifth embodiment of the invention. Portions in FIG. 10 identical with those shown in FIG. 6 are designated by like reference characters. In this embodiment, the Q table is of only one type (denoted Q table 405′), and the quantization step width in the Q table is made variable by multiplication with a certain coefficient.
In FIG. 10, numeral 601 denotes an S (scaling)-factor changeover circuit, which corresponds to the Q-table changeover circuit 413 of FIG. 6. Numeral 602 denotes a multiplier for multiplying the data of the Q table 405′ by a coefficient supplied by the S-factor changeover circuit 601.
When the discriminating circuit 412 determines that coding has not reached the boundary value (threshold value th) within a block, as described in connection with the fourth embodiment, the S-factor changeover circuit 601 changes over the coefficient (hereinafter referred to as an "S factor") by which the Q table is multiplied, the multiplication is performed by the multiplier 602, and quantization is performed again. In this case, what is important is the premise that the S factor is changed over gradually from a small value to a large value. Several types of S factors are provided, and which S factor is selected is stored in the index memory 415. For example, in a case where four types of S factors have been provided, the four types can be expressed by a two-bit signal. Therefore, the type of S factor is stored in the index memory 415 by the two-bit signal, and the selected index signal is supplied to the memory 416.
In this embodiment, the Q table is fixed and the quantization step width within the table varies linearly. Consequently, though control conforming to the frequency characteristic (f characteristic) is not feasible, the overall construction is simpler than that of the fourth embodiment.
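The scaling approach of this embodiment may be sketched as follows. The base table contents and the four S-factor values are illustrative assumptions; only the use of a single table scaled by a selectable coefficient follows the description above.

```python
# Minimal sketch of the fifth embodiment's S-factor approach: a single Q table
# whose step widths are scaled by one of a few selectable coefficients.

BASE_Q_TABLE = [16] * 64          # the single Q table (405')
S_FACTORS = [0.5, 1.0, 2.0, 4.0]  # four S factors -> a 2-bit index suffices

def scaled_q_table(s_index):
    """Multiply every step width of the base table by the selected S factor."""
    s = S_FACTORS[s_index]
    return [max(1, round(q * s)) for q in BASE_Q_TABLE]

def quantize(zigzag_coeffs, s_index):
    table = scaled_q_table(s_index)
    return [round(c / q) for c, q in zip(zigzag_coeffs, table)]

# The selected s_index (two bits) would be stored in the index memory (415).
print(quantize([120, 60, 30, 15], s_index=2))
```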
[Sixth Embodiment]
FIG. 11 is a block diagram showing the construction of the image memory unit 11 of FIG. 1 according to a sixth embodiment of the invention. Portions in FIG. 11 identical with those shown in FIG. 6 are designated by like reference characters. In this embodiment, a threshold changeover circuit 610 is connected to the discriminating circuit 412 described in the fourth embodiment. More specifically, the boundary value (threshold value) within a block, which is fixed in the fourth and fifth embodiments, is made variable in this embodiment. However, the threshold value is varied only in dependence upon the output characteristic of the printer used and the characteristics of human vision.
By way of example, if the present embodiment is applied to a color-image output apparatus such as a color printer or color copier, it is assumed that three color signals Y (yellow), M (magenta) and C (cyan) are inputted. In a case where the Y, M and C signals are compressed, it is preferred that these signals be handled individually with regard both to the output characteristic of the printer and to the characteristics of human vision. In other words, the output characteristic and visual characteristic must be obtained experimentally and based upon experience for Y, M and C independently, and the boundary frequency determined in advance. The Y, M and C boundary values, which are denoted by thY, thM and thC, are judged based upon threshold values conforming to the color components of the input signal set in the threshold changeover circuit 610. Here, in accordance with visual characteristics, the boundary value of Y (yellow), which is difficult for the human eye to sense, is set to be small, while the boundary values of M (magenta) and C (cyan), which are comparatively easy to sense, are set to be large. The judgement conditions are the same as the contents shown in the flowchart of FIG. 7. Though Y, M and C are taken as examples in this embodiment, it goes without saying that another color model can be used.
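The per-color threshold changeover may be sketched as a simple lookup. The concrete values below are illustrative assumptions; only their ordering (Y smallest, M and C larger) follows the description above.

```python
# Sketch of the sixth embodiment's per-color threshold changeover.

THRESHOLDS = {          # boundary value th per color component
    "Y": 20,            # yellow: hard for the eye to sense -> small th
    "M": 42,            # magenta: easier to sense -> larger th
    "C": 42,            # cyan: easier to sense -> larger th
}

def threshold_for(component):
    """Return the threshold (thY, thM or thC) used by the discriminating
    circuit (412) for the given input color component."""
    return THRESHOLDS[component]

print(threshold_for("Y"), threshold_for("M"), threshold_for("C"))
```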
[Seventh Embodiment]
FIG. 12 is a block diagram showing the construction of the image memory unit 11 of FIG. 1 according to a seventh embodiment of the invention.
As shown in FIG. 12, image pixel data which has entered from the input terminal 100 is split into blocks of, say, 8×8 pixels each by a block forming circuit 702 constituted by line memories for applying a delay equivalent to several lines. A cosine transformation is performed on every block by a DCT circuit 703, and a scan converting circuit 704 then converts the data into a one-dimensional data string. Thereafter, the coded data from a coding unit 705 is stored in a buffer memory 709 or 710 controlled by a table selector 712 and memory controller 708, which will be described below.
FIG. 14 is a block diagram illustrating the detailed construction of the coding unit 705. As shown in FIG. 14, the coding unit 705 includes a quantizer (Q) 722 and a VLC (variable-length coding unit) 723 of the kind indicated in the prior-art example of FIG. 32 from 1401 to 1411. Ordinarily, quantization is carried out while referring to the quantization (Q) table 706. FIG. 13 is a diagram showing an example of this quantization table (JPEG Y table).
However, in this embodiment, as illustrated in FIG. 12, a plurality of quantization tables 706a-706c are provided so as to be selectable by a switch 707. The arrangement is such that the coded data from the coding unit 705 is monitored and the table selector 712 changes over the quantization tables 706a-706c based upon the result of monitoring. The quantization tables 706a-706c have contents that differ from one another, and the tables are so arrayed that values at the same positions in the tables 706a-706c become successively smaller in the order 706a→706b→706c.
Accordingly, when the quantization tables are changed over in the manner 706a→706b→706c, the quantity of information generated by the coding unit 705 increases monotonically (i.e., the compression rate decreases). Next, the coding and storing operations of this embodiment will be described in accordance with the flowchart shown in FIG. 16.
First, at step S51, the quantization table 706a is selected by the table selector 712, and the index i is initialized. Next, at step S52, the coding unit 705 codes the image data, which has been put into block form, based upon the selected quantization table. This is followed by step S53, at which the coded data from the coding unit 705 is written in the buffers 709, 710 controlled by the memory controller 708. At the same time, the coded data enters a counter 711, where the quantity of information (compression rate) is monitored.
Next, at step S54, the output X of the counter 711 enters the table selector 712, where it is compared with a predetermined information quantity G determined from the capacity of the frame memory 713. If the result of the comparison is that the output X is less than the predetermined information quantity G, the program proceeds to step S57, where the next quantization table is selected. The index i is updated at step S58, after which the program returns to step S52. In other words, this operation is such that the table selector 712 changes over the switch 707 in the manner 1→2→3 to obtain the maximum output X which will not exceed the predetermined quantity G.
If the result of the decision at step S54 is that the output X is equal to or greater than the predetermined information quantity G, the program proceeds to step S55, at which the coded data prevailing at this time is stored in the frame memory 713 from the buffers 709, 710. Next, at step S56, the index data of the quantization tables 706a-706c is recorded in the index memory 714 and operation is ended.
By repeating the foregoing operation block by block, all of the image data is stored in the frame memory 713 as compressed data. The information quantity of each block is less than the predetermined information quantity G, and therefore the memory capacity per block in the frame memory 713 can be fixed.
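The table-selection loop of FIGS. 12 and 16 can be modelled by the following sketch. The toy encoder and the table contents are illustrative assumptions, and the double buffering by the buffers 709, 710 is collapsed into a simple variable here.

```python
# A minimal sketch, under assumptions, of trying the quantization tables in the
# order 706a -> 706b -> 706c and retaining the largest coding not exceeding G.

def toy_encode(zigzag_coeffs, q_table):
    """Stand-in for the coding unit (705): returns a bit string."""
    bits = ""
    for c, q in zip(zigzag_coeffs, q_table):
        v = abs(round(c / q))
        bits += "1" * v + "0"          # unary toy code, only so that the length varies
    return bits

def select_and_encode(zigzag_coeffs, q_tables, g):
    best_index, best_bits = 0, toy_encode(zigzag_coeffs, q_tables[0])
    for i in range(1, len(q_tables)):                  # tables 706b, 706c, ...
        bits = toy_encode(zigzag_coeffs, q_tables[i])
        if len(bits) >= g:                             # step S54: X >= G, stop
            break
        best_index, best_bits = i, bits                # keep the larger coding
    return best_index, best_bits                       # index goes to the index memory (714)

coeffs = [40, 20, 10, 5, 2, 1, 0, 0]
tables = [[8] * 8, [4] * 8, [2] * 8]                   # information increases 706a->706b->706c
idx, data = select_and_encode(coeffs, tables, g=64)
print(idx, len(data))
```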
Next, when the compressed data stored in the frame memory 713 is read out, an inverse quantization unit 725 of a decoder 715, the details of which are shown in FIG. 15, performs inverse quantization by referring to quantization tables 717a-717c, the contents whereof are the same as the contents of the quantization tables 706a-706c. In other words, the table selector 712 selects one of the quantization tables 717a-717c in accordance with the index data recorded in the index memory 714, and the inverse quantization unit 725 performs inverse quantization based upon the content of the selected quantization table. The image pixel data processed by an ordinary inverse scanning converter 718, inverse DCT circuit 719 and rasterization circuit 720 is outputted from the output terminal 25.
It is permissible to change the order of the quantization tables in such a manner that the generated information quantity decreases monotonically, which is the opposite of the operation described above.
In addition, the above-described variable-length coding is not limited to ADCT. For example, another coding method such as arithmetic coding may be used.
In accordance with the embodiment described above, a plurality of quantization tables are provided and output information from a coding unit is monitored, thereby making it possible to obtain the maximum output information without exceeding a designated information quantity.
Accordingly, the memory capacity within a block can be made constant, and an image editing function can be implemented in the form of an inexpensive system. In addition, a deterioration in picture quality due to compression can be minimized.
[Eighth Embodiment]
In this embodiment, a method (hereinafter referred to as a "block mapping method") is proposed in which the information quantity within one block constituted by a plurality of pixels as shown in FIG. 25 is made less than the predetermined value G, decoding is capable of being performed in block units, and the frame memory is accessed in G-bit units. The block mapping method will first be described with reference to FIG. 25.
As shown in FIG. 25, image pixel data which has entered from the input terminal 100 is cut into DCT blocks (e.g., of 8×8 pixels each) by a block forming circuit 892, and the resulting data is supplied to coding circuits 835a-835d. The block forming circuit 892 is constituted by line memories for applying a delay equivalent to several lines (eight lines in this embodiment). The coding circuits 835a-835d are coding circuits which include variable-length coding set in such a manner that the information quantities differ from one another. The coded words are supplied to serial/parallel converters (hereinafter referred to as "S/P"s) 895a-895d, and the code lengths are supplied to code-length counters 836a-836d. The code-length counters 836a-836d are for obtaining the code lengths (the sum totals of the numbers of bits of the respective coded words) of the present block in the respective coding circuits. Each counter is reset at the beginning of the block, after which the code lengths supplied by the coding circuits 835a-835d are accumulated for one block. The results are supplied to a coding selecting circuit 837. The S/Ps 895a-895d accumulate G bits (where G is the maximum information quantity of one block) of the serial data inputted from each coding circuit, and the outputs of the S/Ps 895a-895d are delivered to the terminals of a signal changeover switch 897 at a predetermined timing.
The coding selecting circuit 837 compares the code lengths of the present block in the coding circuits supplied by the code-length counters 836a-836d with the predetermined value G, determines which coding circuit provides a value closest to G without exceeding G and supplies the result of the determination, namely the index of the selected coding circuit, to the signal changeover switch 897 and an index memory 899. The signal changeover switch 897 connects, to a common terminal e, the terminal connected to the S/P storing the data from the coding circuit corresponding to the index, thereby writing the data of the present block at the corresponding address of a frame memory 898. At the same time, the index of the selected coding circuit is written in the index memory 899 at a portion corresponding to the address of the frame memory 898.
By repeating the foregoing operation, one frame (page) of data is accumulated in the frame memory 898.
Then, when a synchronizing signal enters the input terminal 26 from an external apparatus (e.g., a printer engine), not shown, the coded block data is sequentially read out, in G-bit units and in synchronization with the synchronizing signal, from the leading address of the frame memory by a memory control circuit 890. At the same time, the index of the corresponding block is read out of the index memory 899, and a delay circuit 892 applies a delay equivalent to the time period needed for decoding. The index signal delayed by the delay circuit 892 is then applied to the control terminal of a signal changeover switch 893. Decoding circuits 891a-891d correspond to the coding circuits 835a-835d, respectively, decode the data read out of the frame memory 898, and output the decoded data to respective terminals a-d of the signal changeover switch 893. Accordingly, a common terminal e of the switch 893 outputs the decoded data from the decoding circuit corresponding to the index, and the decoded data is delivered to a rasterization circuit 894. The latter makes a conversion from block scanning to raster scanning to obtain expanded image pixel data. This data is delivered from the output terminal 25.
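The selection made by the coding selecting circuit 837 can be modelled as follows. The code-length values in the example are illustrative assumptions.

```python
# Sketch of the block-mapping selection of FIG. 25: several coders with
# different information quantities code the same block in parallel, and the
# code length closest to G without exceeding it is selected.

def select_coder(code_lengths, g):
    """Return the index of the coder whose one-block code length is closest
    to g without exceeding it, or None if every coder exceeds g."""
    candidates = [(length, i) for i, length in enumerate(code_lengths) if length <= g]
    if not candidates:
        return None
    return max(candidates)[1]       # largest length <= g wins

# Example: code lengths reported by the code-length counters 836a-836d.
print(select_coder([900, 760, 512, 300], g=800))   # -> 1 (760 bits, closest to 800)
```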
With the method described above, however, a code-length counter is necessary for each coding circuit. In addition, the coding selecting circuit 837 compares the code-length count of each coding circuit with the predetermined value G and retrieves the value closest to G without exceeding G. Consequently, when the number of coding circuits increases, the number of comparators for making the aforementioned determination increases sharply. The result is not only a complicated hardware configuration but also a prolonged period of time necessary for making the determination.
Accordingly, the present embodiment, described below, has been designed in order to realize an image processing apparatus capable of performing the above-mentioned coding-circuit selection through simple circuitry and without complicated hardware even when the number of coding circuits is increased.
FIG. 17 is a block diagram illustrating an example of the construction of an image processing apparatus according to this embodiment.
As shown in FIG. 17, image pixel data enters from an input terminal 100. The image pixel data is cut into DCT blocks (e.g., of 8×8 pixels each), subjected to an orthogonal transformation such as DCT, by a block forming circuit 802, and the resulting data is supplied to coding circuits 803a-803d. The coding circuits 803a-803d are set in such a manner that the information quantities differ from one another.
The coded words are supplied to S/Ps 805a-805d continuously at a constant rate, and masking signals for preventing erroneous detection are supplied to respective EOB (end-of-block) detecting circuits 804a-804d. The EOB detecting circuits 804a-804d, which are for detecting the EOB signals outputted by the coding circuits 803a-803d, are constructed as shown in FIG. 18, described below. Other components in FIG. 17 are substantially the same as those shown in FIG. 25 and need not be described again.
FIG. 18 is a circuit diagram which illustrates an example of the construction of the EOB detecting circuits 804a-804d of this embodiment. The EOB detecting circuit 804 of FIG. 18 is one example of the EOB detecting circuits 804a-804d, all of which have the same construction. In this embodiment, the EOB code is "LLLL".
The four bits at the beginning of the coded word outputted by the coding circuit enter a shift register 817, each of whose output bits is supplied to an OR gate 818. This makes it possible to detect the EOB code. However, depending upon the state of the shift register, there are cases where the content of the register happens to become "LLLL" while a coded word is being stored in the register. Accordingly, in this embodiment, a signal which masks any "LLLL" pattern occurring at a position other than the first four bits of a coded word is inputted to the OR gate 818, thereby preventing the above-described erroneous detection. Though the above-mentioned masking signal is supplied by each of the coding circuits 803a-803d according to this embodiment, this does not impose a limitation. For example, it is permissible to adopt an arrangement in which a signal resulting from taking the OR of each bit of the shift register 817 is latched immediately after the first four bits are stored in the shift register 817.
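A software model of this detection is sketched below. The bit stream and the coded-word boundaries are illustrative assumptions; the mask is represented by the set of positions at which a coded word begins.

```python
# Sketch of the EOB detection of FIG. 18: serial coded data is shifted into a
# 4-bit register, the register contents are compared with the EOB code
# ("LLLL", all zeros here), and a masking signal suppresses matches that do
# not fall on the first four bits of a coded word.

EOB = [0, 0, 0, 0]                       # "LLLL"

def detect_eob(bit_stream, word_starts):
    """Return the bit position at which EOB is detected, or None.
    word_starts marks positions where a new coded word begins; the mask
    allows detection only when the register holds such a word's first
    four bits."""
    register = [1, 1, 1, 1]              # shift register (817), arbitrary initial state
    for pos, bit in enumerate(bit_stream):
        register = register[1:] + [bit]  # shift in the next serial bit
        word_first_four = (pos - 3) in word_starts
        if register == EOB and word_first_four:   # OR gate (818) plus mask
            return pos
    return None

bits        = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
word_starts = {0, 4, 10}                 # coded words begin at these bit positions
print(detect_eob(bits, word_starts))     # -> 7: the word starting at bit 4 is EOB
```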
The detected EOB signals are supplied to a coding selecting circuit 806.
FIG. 19 is a circuit diagram showing an example of the construction of the coding selecting circuit 806 according to this embodiment.
EOB signals 100a-100d detected by the EOB detecting circuits 804a-804d are supplied to logic circuits 819a-819d, respectively. The logic circuits 819a-819d are for preventing collision of signals when EOB signals overlap. As shown in FIG. 19, the logic circuits 819a-819d comprise, in combination, three-input NAND gates 831-834 and OR gates 835-838, respectively. The order of priority at the time of collision is set as follows, in ascending order: output signal 101a→output signal 101b→output signal 101c→output signal 101d.
The results of the operations performed by the logic circuits 819a-819d are supplied to a four-input NAND gate 820, respective index signal circuits 821a-821d and a logic circuit having a higher order of priority. The NAND gate 820 is a circuit for generating the clock of a D-type flip-flop (hereinafter referred to as a "DF/F") 822. More specifically, when, depending upon the logic circuits 819a-819d, a corresponding index signal gate opens, an index corresponding to the coding circuit which has outputted the EOB signal is outputted on a signal line 102, and the index is stored in the DF/F 822 by the NAND gate 820. This operation is carried out whenever an EOB signal is detected. Therefore, what is obtained from the output of the DF/F 822 after a certain predetermined time is the index of the coding circuit for which the EOB signal was detected last within the above-mentioned predetermined time.
FIG. 20 is a timing chart associated with the coding selecting circuit 806 of the eighth embodiment. In FIG. 20, numerals 100a-100d represent the outputted results, namely the EOB signals, of the respective EOB detecting circuits 804a-804d, numerals 101a-101d represent the outputted results, namely the output signals, of the respective logic circuits 819a-819d, and numerals 102′ and 103′ denote the input signal and output signal on the input/output signal lines 102, 103 of the DF/F 822.
In the first half of the chart (the half corresponding to block j in FIG. 20), the EOB signals 100a-100d do not overlap. Consequently, the output of the DF/F 822 is as indicated by output signal 103′ in FIG. 20. It will be understood that if latching is performed at a predetermined time t1, the index of the coding circuit for which the EOB signal was detected last will be obtained.
In the second half of the chart (the half corresponding to block k in FIG. 20), overlapping occurs at two locations. In other words, EOB signals 100a and 100c overlap, and so do EOB signals 100b and 100d.
As shown at (a) in FIG. 20, output signal 101a of logic circuit 819a assumes the "L" level at the trailing edge of the EOB signal 100a. Then, when EOB signal 100c decays, the output of the NAND gate 831 of logic circuit 819a attains the "H" level. As shown at (b) in FIG. 20, the output signal 101a of OR gate 835 is reset and, at the same time, the output signal 101c of logic circuit 819c assumes the "L" level, so that the index "11" is supplied from gate circuit 821c to the DF/F 822.
Next, the output signal 101b of logic circuit 819b temporarily decays in response to the EOB signal 100b. However, since the EOB signal 100d also decays at the same time, the output signal 101b is reset to the "H" level instantaneously (c) by the NAND gate 832 of logic circuit 819b.
Accordingly, if the output signal 103′ of the DF/F 822 is latched at a predetermined time t2, as shown in FIG. 20, the index of the coding circuit for which the EOB signal was detected last is obtained. In this example, the output signal 101d has a higher order of priority than the output signal 101b, and therefore the selected index is "10". The selected index is supplied to the index memory 809 and signal changeover switch 807.
Meanwhile, the S/Ps 805a-805d (FIG. 17) accumulate S bits of the serial data inputted from the coding circuits 803a-803d (where S is the maximum information quantity of one block) and output the data to the terminals a-d of the signal changeover switch 807 at a predetermined timing. The signal changeover switch 807 connects, to the common terminal e, the terminal connected to the S/P corresponding to the selected index, whereby the data is written in the corresponding address of the frame memory 808. The index memory 809 writes the selected index in the portion corresponding to the write address of the frame memory 808.
The components from the frame memory 808 onward are the same as in FIG. 25 and need not be described again.
In accordance with the foregoing description, the arrangement of the present embodiment is such that when coded data is outputted at a fixed rate, use is made of the fact that the quantity of information is proportional to the time required for output. Specifically, the information quantity is detected, and the coding circuit is selected, based upon the time from the data at the beginning of the block until the EOB signal is outputted. As a result, a counter and comparator need not be provided, and selection of coding can be realized through simple circuitry even if the number of coding circuits is increased.
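The timing-based selection can be modelled in software as follows. Because every coder outputs at the same fixed rate, the time at which a coder's EOB appears is proportional to its code length, and the index latched last before the deadline corresponding to G bits identifies the largest coding that fit within G. The rate, deadline, EOB times and tie-breaking priority below are illustrative assumptions; ties are broken in favour of the higher-numbered coder, following the collision example in the text.

```python
# Simplified model of selecting a coder by the arrival time of its EOB signal.

def latched_index(eob_times, deadline, priority_order):
    """Mimic the D-type flip-flop (822): latch the index of each EOB as it
    arrives; the value held at the deadline is the selection.  At a collision,
    the coder with the lower priority_order value wins."""
    latched = None
    for t in sorted(set(t for t in eob_times if t <= deadline)):
        simultaneous = [i for i, ti in enumerate(eob_times) if ti == t]
        latched = min(simultaneous, key=lambda i: priority_order[i])
    return latched

# Coders a-d (indices 0-3); EOB arrival times in bit periods; deadline = G bits.
eob_times = [300, 480, 650, 480]                       # coders 1 and 3 collide at t = 480
print(latched_index(eob_times, deadline=512,
                    priority_order=[3, 2, 1, 0]))      # -> 3 (coder d wins the collision)
```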
[Ninth Embodiment]
FIG. 21 is a block diagram showing the construction of the image memory unit 11 of FIG. 1 in accordance with a ninth embodiment of the invention. Blocks in FIG. 21 having functions identical with those of the eighth embodiment shown in FIG. 17 are designated by like reference characters and need not be described again. Only portions that differ from the eighth embodiment shown in FIG. 17 will be described.
Coding circuits 823a-823d are set so as to have information quantities that differ from one another. The coded words are supplied to the S/Ps 805a-805d, and the EOB signals are applied to the coding selecting circuit 806. The EOB signals are signals which indicate the positions at which EOB codes have been inserted. For example, an EOB signal is a signal which assumes the "L" level when the EOB code is outputted. The coding selecting circuit 806 is a circuit of the kind shown in FIG. 19. This arrangement results in a further simplification of the hardware.
[Tenth Embodiment]
FIG. 22 is a block diagram showing the construction of the image memory unit 11 of FIG. 1 in accordance with a tenth embodiment of the invention. Blocks in FIG. 22 having functions identical with those of the ninth embodiment shown in FIG. 21 are designated by like reference characters and need not be described again.
In this embodiment, portions not related to control of information quantity, such as the orthogonal transformation circuit, etc., are shared by the coding circuits, thereby making it possible to reduce hardware. Only portions that differ from the ninth embodiment shown in FIG. 21 will be described.
In a case where coding in which DCT and VLC are combined is applied to the present invention, as in the baseline method of the JPEG, the DCT circuit and the scan converting circuit which performs zigzag scanning are not related to control of information quantity and can be shared by the coding circuits. Accordingly, in this embodiment, the DCT circuit 824 and the scan converting circuit 825 are placed outside the coding circuitry and are shared by the coding circuits, thereby simplifying the hardware.
The image pixel data placed in block form by the block forming circuit 802 is subjected to a discrete cosine transformation by the DCT circuit 824. The transformation coefficients of the DCT are zigzag-scanned in order from coefficients of lower order by the scan converting circuit 825, and the output of the scan converting circuit 825 is supplied to coding circuits 826a-826d.
FIG. 23 is a block diagram which illustrates the construction of the coding circuits 826a-826d of the tenth embodiment. In a coding circuit 826, which illustrates one example of the coding circuits 826a-826d, the zigzag-scanned transformation coefficients are quantized by a quantization unit 831 and then supplied to a VLC circuit 832. The VLC circuit 832 codes the quantized transformation coefficients by well-known variable-length coding such as Huffman coding, delivers a coded word to a corresponding one of the S/Ps 805a-805d, and supplies an EOB signal to the coding selecting circuit 806. The EOB signal indicates the position at which an EOB code has been inserted, as mentioned above. The EOB signal assumes the "L" level when the EOB code is outputted, and attains the "H" level at all other times.
The coding selecting circuit 806, which is a circuit similar to that shown in FIG. 19, supplies the index memory 809 and signal changeover switch 807 with an index indicating the selected coding circuit. The selected coded data and the index are written in the frame memory 808 and in the corresponding address of the index memory 809, respectively.
By repeating the foregoing operation, one frame (page) of data is accumulated in the frame memory 808. Then, when a synchronizing signal enters the input terminal 26 from an external apparatus (e.g., a printer engine), not shown, the coded data is sequentially read out, in S-bit units and in synchronization with the synchronizing signal, from the leading address of the frame memory by the memory control circuit 810. At the same time, the index of the corresponding block is read out of the index memory 809.
Decoding circuits 827a-827d correspond to the coding circuits 826a-826d, respectively. These are constructed as shown in FIG. 24.
FIG. 24 is a block diagram showing the construction of the decoding circuits 827a-827d of the tenth embodiment. A decoding circuit 827, which illustrates one example of the decoding circuits 827a-827d, receives the read coded data, and a variable-length decoding circuit 833 obtains the quantized codes of the DCT transformation coefficients. An inverse quantization unit 834 effects a transformation into quantized representatives. The outputs of the decoding circuits 827a-827d are supplied to the terminals a-d of the signal changeover switch 813.
Meanwhile, the index read out of the index memory 809 is delayed by a delay circuit 828 for the period of time necessary for decoding, and then is supplied to the control terminal of the signal changeover switch 813. Accordingly, the common terminal e of the switch 813 outputs a quantized representative of the DCT transformation coefficients from the decoding circuit corresponding to the index, and a scan converting circuit 829 effects a scan conversion from zigzag scanning to an order suited to an inverse DCT circuit 830. A rasterization circuit 814 effects a conversion from scanning in block units to the original raster scanning, whereby expanded image pixel data is obtained from the output terminal 25.
In the eighth to tenth embodiments described above, the index outputted by the coding selecting circuit is written in the index memory. However, if the accessing of the index memory is fast enough, an arrangement may be adopted in which each coding circuit overwrites the same address of the index memory in the order in which the EOB signals are outputted. In other words, a corresponding index is written in the same address of the index memory in the order in which the EOB signals are outputted. Therefore, if the data at the corresponding address of the index memory is read out after a predetermined period of time, the index of the coding circuit for which the EOB signal was outputted last within that predetermined period of time will be obtained. Accordingly, if the coded data is written in the frame memory according to this index, or if the coded data is overwritten in the frame memory in the order in which the EOB signals are outputted, in the same manner as the index memory, then effects exactly the same as those of the foregoing embodiment can be obtained in the frame memory.
The coding method is not limited to ADCT. It is permissible to adopt another coding method, such as variable-length coding every block.
The eighth and ninth embodiments can be implemented by computer software as well as by a hardware configuration using the various gate circuits.
[11th Embodiment]
FIG. 26 is a block diagram showing the detailed construction of the image memory unit 11 according to an 11th embodiment of the invention.
In FIG. 26, image pixel data processed according to this embodiment enters from an input terminal 100. Described first will be coding processing of the image pixel data which enters from the input terminal 100, and processing for storing the data in a frame memory 908.
The image pixel data which has entered from the input terminal 100 is partitioned into blocks of, say, 8×8 pixels each by a block forming circuit 902 composed of line memories for applying a delay of several lines. The resulting data is supplied to a plurality of coding circuits 903a-903d.
The coding circuits 903a-903d are coding circuits which include variable-length coding set in such a manner that the information quantities differ from one another. The coded words are supplied to buffers 905a-905d, and the code lengths are supplied to code-length counters 904a-904d.
The code-length counters 904a-904d are for obtaining the sum totals of the code lengths of the coded words in one block. Each counter is reset at the beginning of the block, after which the code lengths supplied by the coding circuits 903a-903d are accumulated for one block. The results are supplied to a coding selecting circuit 906. The buffers 905a-905d each store one block of data.
The coding selecting circuit 906 compares the code lengths within one block in the coding circuits 903a-903d supplied by the code-length counters 904a-904d with a predetermined value (G), determines which coding circuit provides a value closest to (G) without exceeding (G) and supplies the result of the determination (the index) to the signal changeover switch 907 and an index memory 909.
The signal changeover switch 907 connects, to a common terminal e, the terminal (any one of a-d) connected to the buffer storing the coded word selected by the coding selecting circuit 906, thereby storing the one block of data held in the particular one of the buffers 905a-905d at the corresponding address of a frame memory 908. At this time the index is stored in the index memory 909 at a portion corresponding to the address of the frame memory 908.
In this embodiment, the coding circuits are of four types, namely 903a-903d, and therefore the index is composed of two bits per block (in the case of a fixed length).
By repeating the foregoing operation, one frame (page) of data is accumulated in the frame memory 908.
Control for decoding the coded data in this embodiment constructed as set forth above will now be described.
When a synchronizing signal such as an ITOP signal enters from the input terminal 26, to which an external unit such as the output control unit 13 of FIG. 1 is connected, a memory control circuit 910 controls the frame memory 908 in accordance with the synchronizing signal in such a manner that the data coded, compressed and written by the control described above is read out from the beginning of the frame memory 908 in G-bit units. At the same time, the memory control circuit 910 controls the index memory 909 in such a manner that the index corresponding to the block read out of the frame memory 908 is read out of the index memory 909.
The compressed image data read out of the frame memory 908 is decoded by decoding circuits 911a-911d corresponding to respective ones of the coding circuits 903a-903d, and the decoded data is supplied to terminals a-d of a signal changeover switch 913.
Meanwhile, the index read out of the index memory 909 is delayed by a delay circuit 912 for the period of time necessary for the decoding circuits 911a-911d to perform decoding, and the delayed index is supplied to the control terminal of the signal changeover switch 913. Accordingly, a common terminal e of the switch 913 outputs 8×8 pixels of image pixel data expanded by the one of the decoding circuits 911a-911d corresponding to whichever of the coding circuits 903a-903d was selected by the coding selecting circuit 906. The image pixel data in block form is then subjected to a scan conversion by a rasterization circuit 914 to be converted into the original raster scanning data. This data is delivered from the output terminal 25.
In accordance with the present embodiment as described above, operations between blocks in the coding circuits 903a-903d are eliminated, and it is possible to perform decoding in simple block units.
At the same time, the plurality of coding circuits 903a-903d having different output information quantities are provided, whichever coding circuit gives a code length in one block (namely the sum total of the code lengths in one block after variable-length coding) that is maximum without exceeding the predetermined value G is selected, and the data is stored in the frame memory 908 in units of the predetermined value G. As a result, addressing of overlaid blocks is facilitated and it is possible to combine images in the frame memory 908.
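Because every block occupies exactly G bits in the frame memory 908, the address of any block can be computed directly, which is what makes overlaying and combining images in the frame memory straightforward. A minimal sketch follows; the block counts and the value of G are illustrative assumptions.

```python
# Fixed-length block addressing: every coded block occupies exactly G bits.

G_BITS         = 1024        # fixed code quantity per block (illustrative)
BLOCKS_PER_ROW = 160         # e.g. a 1280-pixel-wide image cut into 8-pixel blocks

def block_address(block_x, block_y):
    """Bit address of the coded data for block (block_x, block_y)."""
    return (block_y * BLOCKS_PER_ROW + block_x) * G_BITS

# Overlaying a source block onto a destination position is then just a
# fixed-length copy between the two computed addresses.
print(block_address(0, 0), block_address(3, 2))
```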
[12th Embodiment]
FIG. 27 is a block diagram illustrating the construction of the image memory unit 11 according to a 12th embodiment of the present invention. Blocks having functions identical with those in FIG. 26 are designated by like reference characters and need not be described in detail again.
In the 12th embodiment, portions not related to control of information quantity, such as the orthogonal transformation circuit, etc., are shared by the coding circuits, thereby making it possible to reduce hardware. Only portions that differ from the 11th embodiment shown in FIG. 26 will be described.
In a case where coding processing in which DCT processing and VLC processing are combined as shown in FIG. 34 is applied to the present embodiment, the DCT circuit and the scan converting circuit which performs zigzag scanning are not related to control of information quantity and are capable of being shared by the coding circuits. Therefore, in the embodiment of FIG. 27, a DCT circuit 917 and a scan converting circuit 918 are placed outside the coding circuitry and are shared by the coding circuits, thereby simplifying the hardware.
Accordingly, coding circuits 919a-919d and corresponding decoding circuits 920a-920d are constructed as shown in FIGS. 28 and 29, respectively.
FIG. 28 is a block diagram illustrating the detailed construction of the coding circuits 919a-919d according to the 12th embodiment.
As shown in FIG. 28, the transformation coefficients zigzag-scanned by the scan converting circuit 918 as shown in FIG. 34 are quantized by a quantization unit 923 and then supplied to a VLC circuit 924. The VLC circuit 924 variable-length codes (e.g., Huffman-codes) the quantized transformation coefficients and supplies a coded word to a corresponding one of the buffers 905a-905d. At the same time, the code length is delivered to a corresponding one of the code-length counters 904a-904d.
FIG. 29 is a block diagram illustrating the detailed construction of the decoding circuits according to the 12th embodiment. The compressed image data of G bits read out of the frame memory 908 is decoded into quantized transformation coefficients by a variable-length decoding circuit 925, the coefficients are converted into quantized representatives by an inverse quantization unit (representative setting circuit) 926, and the result is supplied to one of the terminals a-d of the signal changeover switch 913 of FIG. 27.
The signal changeover switch 913 selects the transformation coefficients decoded by whichever of the decoding circuits 920a-920d corresponds to the one of the coding circuits 919a-919d selected at the time of coding, the zigzag-scanned transformation coefficients are converted into the original order by a scan converting circuit 921, and spatial image pixel data is obtained by an inverse DCT circuit 922. The data is restored to the original raster scanning data by the rasterization circuit 914, and the resulting data is delivered from the output terminal 25.
In accordance with the embodiment described above, the hardware configuration can be simplified compared with that of the 11th embodiment.
[13th Embodiment]
FIG. 30 is a block diagram illustrating the construction of the image memory unit 11 according to a 13th embodiment of the present invention. Blocks having functions identical with those of the 12th embodiment shown in FIG. 27 are designated by like reference characters and need not be described in detail again. Only portions that differ from those of the 12th embodiment in FIG. 27 will be described.
Transformation coefficients from the scan converting circuit 918, zigzag-scanned as shown in FIG. 34, are quantized by a quantization unit 927, the DC transformation coefficients are supplied to a multiplexing circuit 933, and the AC transformation coefficients are supplied to a hierarchical division circuit 928.
The hierarchical division circuit 928 divides the AC transformation coefficients into an n-layered hierarchy. The division into the n-layered hierarchy is performed by well-known means, such as dividing the transformation coefficients by spectrum (degree) or by bit slice. The transformation coefficients so divided are subjected to variable-length coding by variable-length coding (VLC) circuits 930-1 through 930-n, one block of data is accumulated by buffers 931-1 through 931-n, and the data is then supplied to the multiplexing circuit 933 at a predetermined timing.
Code-length counters 932-1 through 932-n, which are for obtaining the sum totals of the code lengths within one block of each layer, are reset at the beginning of the block, the code lengths supplied by the VLC circuits 930-1 through 930-n are accumulated for one block, and the results are supplied to a layer-count discriminating circuit 935.
The layer-count discriminating circuit 935 successively totals, from the most significant layer, the sum totals of the code lengths of the present block of each layer supplied by the code-length counters 932-1 through 932-n, and determines the number of layers up to the point at which the information quantity (the sum total of the code lengths) of the present block would exceed the predetermined value G.
More specifically, letting f0 represent the number of quantized bits of the DC coefficients after quantization, and letting f(i) represent the sum total of the code lengths of the i-th layer, the maximum value of k (0≦k≦n) which satisfies
f0+f(1)+f(2)+ . . . +f(k)≦G
is found, and the result of the determination is supplied to the multiplexer 933.
In accordance with the result k of the determination performed by the layer-count discriminating circuit 935, the codes of the DC coefficients and the codes of the AC coefficients of layers 1 through k are multiplexed, and the codes are written in the frame memory 908 in G-bit units.
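The determination performed by the layer-count discriminating circuit 935 can be sketched as follows. The DC code length, the per-layer code lengths and the value of G in the example are illustrative assumptions.

```python
# Sketch of the layer-count determination: given the DC code length f0 and the
# per-layer AC code lengths f(1)..f(n), find the largest k with
# f0 + f(1) + ... + f(k) <= G.

def max_layers(f0, layer_lengths, g):
    """Return the maximum k (0 <= k <= n) whose cumulative code length,
    including the DC code length f0, does not exceed g."""
    total, k = f0, 0
    for length in layer_lengths:        # layers 1..n, most significant first
        if total + length > g:
            break
        total += length
        k += 1
    return k

# DC code of 32 bits, four AC layers, block budget G = 256 bits.
print(max_layers(32, [100, 80, 60, 40], g=256))   # -> 2 (32+100+80 = 212 <= 256)
```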
FIG. 31 is a block diagram showing the detailed construction of a decoding circuit 934 according to the 13th embodiment shown in FIG. 30.
In the decoding circuit 934 of the 13th embodiment shown in FIG. 31, the G-bit compressed image data read out of the frame memory 908 is separated into the DC coefficients and the codes of the AC coefficients of each layer by a signal separation circuit 936. The DC coefficients are supplied to an inverse quantization unit 939 and the codes of the AC coefficients are supplied to variable-length decoding circuits 937-1 through 937-k.
The signal separation circuit 936 incorporates a hierarchical counter for the coded AC coefficients. The counter is reset by the data at the beginning of a block and is counted up whenever the data of one layer of the AC coefficients is outputted to the variable-length decoding circuits 937-1 through 937-n. The counted value is supplied to a hierarchical combining circuit 938.
The variable-length decoding circuits 937-1 through 937-n decode the codes, which are supplied by the signal separation circuit 936, into the data of each hierarchical layer, and the results are supplied to the hierarchical combining circuit 938. In accordance with the counted value in the hierarchical counter supplied by the signal separation circuit 936, the hierarchical combining circuit 938 successively combines the decoded hierarchical layer data supplied by the variable-length decoding circuits 937-1 through 937-n.
When the hierarchical layer data of the least significant layer k of the block stored in the frame memory has been decoded and the combining of the hierarchical layer data has been completed, the hierarchical combining circuit 938 supplies the decoded AC transformation coefficients to the inverse quantization unit 939. The inverse quantization unit 939 successively supplies the quantization representatives, which correspond to the decoded quantized DC and AC transformation coefficients, to the scan converting circuit 921 shown in FIG. 30, and the image data expanded via the inverse DCT circuit 922 and rasterization circuit 914 is outputted from the output terminal 25.
In order to assure the accuracy of the DC transformation coefficients in this embodiment, they are excluded from the progressive buildup. However, the invention is not limited to the foregoing example. It goes without saying that an arrangement in which the progressive buildup is carried out in a form that also includes the DC transformation coefficients is covered by the scope of the invention.
It is of course permissible to place the quantization unit after the hierarchical division circuit 928.
Furthermore, according to this embodiment, coding after hierarchical division is processed in parallel. However, serial processing in which the layers are processed successively from the most significant layer is also possible. In this case, the coding circuit 929 of the hierarchical portion and the variable-length decoding circuit of each layer of the decoding circuit 934 can each be constructed as one system, thereby achieving a further simplification of hardware.
Furthermore, in the 11th and 12th embodiments, the index memory 909 is provided separately from the frame memory 908. However, if an arrangement is adopted in which the data is stored in the frame memory after the indices are multiplexed with the compressed image data, it will be possible to dispense with the index memory. In this case, it will suffice if G−d (where d is the number of bits needed to store the index) is used instead of the predetermined value G employed as the criterion in the coding selecting circuit 906.
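This index-multiplexing variation can be sketched as follows; the bit strings and the value of G are illustrative assumptions.

```python
# Sketch of multiplexing the d-bit index with the compressed block data: the
# coding selection must then use G - d as its budget so that the index plus
# the code still fit in exactly G bits.

def multiplexed_block(index_bits, code_bits, g):
    """Concatenate index and code, padding with zeros up to exactly g bits."""
    d = len(index_bits)
    assert len(code_bits) <= g - d, "coding must be selected against G - d"
    return index_bits + code_bits + "0" * (g - d - len(code_bits))

block = multiplexed_block("10", "110101", g=16)   # d = 2, so the code budget is 14
print(block, len(block))
```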
In accordance with each of the 11th through 13th embodiments, as described above, operations between blocks in the coding unit are eliminated and decoding is possible in simple block units. At the same time, a plurality of coding circuits having different output information quantities are provided, the coding which gives a code length in one block (namely the sum total of the code lengths in one block after variable-length coding) that is maximum without exceeding the predetermined value G is selected, and the data is stored in memory in units of a predetermined value. As a result, addressing of overlaid blocks is facilitated and it is possible to combine images in the memory.
In addition, since the length is fixed in block units, the time needed for decoding can be rendered constant for every block. Consequently, there is no need for a buffer for rendering constant the transmission rate of the data after decoding, which would otherwise be necessary with variable-length coding. This makes possible a great simplification in hardware.
Further, the coding method is not limited to ADCT. For example, other forms of variable-length coding, such as arithmetic coding or predictive coding, can be employed.
Further, the plurality of coding circuits can change code length by making the parameters which constitute the quantization tables and the parameters which constitute the Huffman code tables different.
Further, rather than arranging the plurality of coding circuits in parallel, as described above, a desired coding method can be decided by performing operations in serial fashion by means of a computer, by way of example.
The present invention can be applied to a system constituted by a plurality of devices, or to an apparatus comprising a single device. Furthermore, it goes without saying that the invention is applicable also to a case where the object of the invention is attained by supplying a program to a system or apparatus.
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.