US8189932B2 - Image processing apparatus and image processing method - Google Patents

Image processing apparatus and image processing method

Info

Publication number
US8189932B2
Authority
US
United States
Prior art keywords: low, pass, vertical, subbands, horizontal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US11/742,813
Other versions
US20070286510A1 (en)
Inventor
Takahiro Fukuhara
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp
Assigned to Sony Corporation. Assignment of assignors interest (see document for details). Assignors: FUKUHARA, TAKAHIRO
Publication of US20070286510A1
Application granted
Publication of US8189932B2
Expired - Fee Related
Adjusted expiration


Abstract

An image processing apparatus includes a horizontal analysis filtering unit that receives image data in units of lines and generates a low-frequency component and a high-frequency component by performing horizontal low-pass analysis filtering and horizontal high-pass analysis filtering every time the number of samples in a horizontal direction reaches a predetermined value; and a vertical analysis filtering unit that generates coefficient data of a plurality of subbands by performing vertical low-pass analysis filtering and vertical high-pass analysis filtering every time the number of lines in a vertical direction of low-frequency and high-frequency components generated by the horizontal analysis filtering unit reaches a predetermined value.

Description

CROSS REFERENCES TO RELATED APPLICATIONS
The present invention contains subject matter related to Japanese Patent Application JP 2006-136875 filed in the Japanese Patent Office on May 16, 2006, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a band analysis apparatus and method for performing, using a filter bank, band analysis of an input image and dividing the image into a plurality of subbands, to a band synthesis apparatus and method for performing, using a filter bank, band synthesis of an image divided into a plurality of subbands, to an image encoding apparatus and method for performing, using a filter bank, band analysis of an input image and encoding the image that has been subjected to band analysis to generate an encoded code-stream, to an image decoding apparatus and method for decoding an encoded code-stream and performing, using a filter bank, band synthesis of the decoded code-stream, to a program, and to a recording medium.
2. Description of the Related Art
As a typical method for compressing images, a Joint Photographic Experts Group (JPEG) method, which is standardized by the International Organization for Standardization (ISO), is available. The JPEG method uses discrete cosine transform (DCT) and provides excellent encoded images and decoded images at a relatively high bit rate. However, when the encoding bit rate is reduced to a predetermined value or less, block noise, which is specific to DCT transform, is significantly increased. Thus, deterioration becomes conspicuous from a subjective point of view.
In recent years, research and development of methods for dividing an image into a plurality of subbands using a filter bank, which is a combination of a low-pass filter and a high-pass filter, and performing encoding of each of the plurality of subbands has been actively conducted. In such circumstances, wavelet-transform encoding has been regarded as a new promising technique that will take the place of DCT transform since wavelet-transform encoding does not have a disadvantage that block noise becomes conspicuous at high compression, unlike DCT transform.
The JPEG 2000, for which international standardization was completed in January 2001, adopts a method in which the above-mentioned wavelet transform and high-efficiency entropy coding (bit modeling and arithmetic coding for each bit-plane) are combined together. The JPEG 2000 achieves a significant improvement in encoding efficiency, compared with any other JPEG method.
For example, a technique described in C. Chrysafis and A. Ortega, “Line Based, Reduced Memory, Wavelet Image Compression”, IEEE Trans. Image Processing, Vol. 9, pp. 378-389, March 2000 is available.
SUMMARY OF THE INVENTION
In wavelet transform, basically, analysis filtering is performed for the entirety of an image. Thus, it is necessary to store and hold wavelet transform coefficients whose number corresponds to the number of pixels of the entire image. Consequently, a higher-capacity memory is necessary for an image having a higher resolution, resulting in severe constraints on hardware development and the like.
In order to solve this problem, some wavelet transform methods with reduced memory requirement have been suggested. Line-based wavelet transform is one of the most important methods from among such wavelet transform methods with reduced memory requirement (see, for example, C. Chrysafis and A. Ortega, “Line Based, Reduced Memory, Wavelet Image Compression”, IEEE Trans. Image Processing, Vol. 9, pp. 378-389, March 2000). In the technique described in C. Chrysafis and A. Ortega, “Line Based, Reduced Memory, Wavelet Image Compression”, IEEE Trans. Image Processing, Vol. 9, pp. 378-389, March 2000, wavelet transform is performed immediately after the number of input lines of an image reaches a predetermined value. Thus, a necessary memory capacity can be significantly reduced while wavelet transform coefficients that are the same as wavelet transform coefficients obtained when wavelet transform is performed for the entire image are obtained. In addition, a delay time necessary for starting wavelet transform can be reduced.
However, for example, in order to realize an apparatus that encodes and transmits an image in real time and that receives and decodes the image, it is necessary to further reduce a delay time for processing from encoding of the image to decoding of an encoded code-stream to reconstruct the image. In addition, in the field of hardware development, further reduced memory requirement has been desired.
Accordingly, it is desirable to provide a band analysis apparatus and method for performing band analysis of an image with a reduced memory requirement and with low delay, a band synthesis apparatus and method for performing band synthesis of an image with a reduced memory requirement and with low delay, an image encoding apparatus and method for encoding an image while performing such band analysis, an image decoding apparatus and method for decoding an image while performing such band synthesis, a program, and a recording medium.
An image processing apparatus according to an embodiment of the present invention includes horizontal analysis filtering means for receiving image data in units of lines and for generating a low-frequency component and a high-frequency component by performing horizontal low-pass analysis filtering and horizontal high-pass analysis filtering every time the number of samples in a horizontal direction reaches a predetermined value; and vertical analysis filtering means for generating coefficient data of a plurality of subbands by performing vertical low-pass analysis filtering and vertical high-pass analysis filtering every time the number of lines in a vertical direction of low-frequency and high-frequency components generated by the horizontal analysis filtering means reaches a predetermined value.
An image processing apparatus according to another embodiment of the present invention includes input means for inputting coefficient data of a plurality of subbands generated by performing horizontal low-pass and high-pass analysis filtering and vertical low-pass and high-pass analysis filtering of image data; vertical synthesis filtering means for generating a low-frequency component and a high-frequency component by performing, every time the number of lines in a vertical direction reaches a predetermined value, vertical low-pass synthesis filtering and vertical high-pass synthesis filtering of the coefficient data of the plurality of subbands input by the input means; and horizontal synthesis filtering means for synthesizing a predetermined number of subbands by performing horizontal low-pass synthesis filtering and horizontal high-pass synthesis filtering every time the number of samples in a horizontal direction of low-frequency and high-frequency components generated by the vertical synthesis filtering means reaches a predetermined value.
Accordingly, band analysis and band synthesis of image data with reduced memory requirement and low delay can be achieved. In addition, image data can be encoded and decoded while such band analysis and band synthesis is performed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 schematically shows a configuration of a band analysis apparatus according to a first embodiment;
FIGS. 2A and 2B schematically show elements of data of an HDTV video signal;
FIG. 3 shows a case where the band analysis apparatus performs vertical filtering after performing horizontal filtering;
FIG. 4 illustrates buffering performed for each M columns;
FIG. 5 illustrates horizontal filtering in analysis filtering at division level 1;
FIG. 6 illustrates vertical filtering in analysis filtering at division level 1;
FIG. 7 shows a result obtained by performing analysis filtering until division level 2;
FIG. 8 shows a result obtained by performing analysis filtering for an actual image until division level 3;
FIG. 9 shows a lifting structure of a 5×3-analysis filter;
FIG. 10 is an illustration for explaining a data stream of Y, Cb, and Cr that are multiplexed together and timing of analysis filtering;
FIG. 11 includes signal distribution diagrams showing an interlace signal from among signals based on the SMPTE 274M standard and shows a position where a vertical synchronizing signal is inserted;
FIG. 12 schematically shows a configuration of a band analysis apparatus of the related art;
FIG. 13 shows a case where the band analysis apparatus of the related art performs horizontal filtering after performing vertical filtering;
FIG. 14 schematically shows an image encoding apparatus according to a second embodiment;
FIG. 15 schematically shows a band synthesis apparatus according to a third embodiment;
FIG. 16 shows a lifting structure of a 5×3 synthesis filter; and
FIG. 17 schematically shows a configuration of an image decoding apparatus according to a fourth embodiment.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Embodiments of the present invention will be described with reference to the drawings.
First Embodiment
A band analysis apparatus according to a first embodiment that performs, using an analysis filter bank, band analysis of an input video signal to divide the video signal into a plurality of subbands will be described.
FIG. 1 schematically shows a configuration of a band analysis apparatus 10 according to the first embodiment. Referring to FIG. 1, the band analysis apparatus 10 includes an image line input unit 11, a column buffer unit 12, a horizontal analysis filter unit 13, and a vertical analysis filter unit 14.
The image line input unit 11 receives a video signal D10 for each line, and supplies a data stream D11 for the image line to the column buffer unit 12.
Video signals are normally defined by a standard. For example, currently, television broadcasting is performed in accordance with a National Television Standards Committee (NTSC) system. In addition, a high definition television (HDTV) system is standardized as a standard number “SMPTE 274M” by the Society of Motion Picture and Television Engineers (SMPTE), which is a standard-setting organization in the United States. In the description below, the HDTV system (a resolution of 1920×1080) will be described as an example.
FIG. 2A shows the configuration of elements of data of an HDTV video signal. A luminance signal Y has 1920 actual samples per line. Sample data of an end of active video (EAV) signal and sample data of a start of active video (SAV) signal, 280 samples in total, are placed before the actual sample data of the luminance signal Y. The color-difference signals Cb and Cr have a similar configuration. However, the color-difference signals Cb and Cr have a 4:2:2 format, and the number of actual samples of each of the color-difference signals Cb and Cr is half the number of actual samples of the luminance signal Y. Thus, the total number of actual samples of the color-difference signals Cb and Cr is equal to the number of actual samples of the luminance signal Y. By multiplexing the luminance signal Y and the color-difference signals Cb and Cr, data including 560 samples of the EAV and SAV signals and 3840 samples of the luminance signal Y and the color-difference signals Cb and Cr is generated, as shown in FIG. 2B.
Thus, when a signal based on the SMPTE 274M standard of the HDTV system, which is commonly called the "HD-SDI" standard, is input as the video signal D10, the data stream D11 for the image line is obtained as multiplexed sample data, as shown in FIG. 2B. The description below is given on this assumption.
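The per-line sample counts above can be checked with simple arithmetic. The following sketch assumes the 4:2:2 figures stated above, with 280 EAV/SAV samples preceding each of the two multiplexed streams (the Y stream and the Cb/Cr stream); the variable names are illustrative:

```python
# Per-line sample counts for an HD-SDI (SMPTE 274M) stream, as in FIG. 2.
Y_SAMPLES = 1920                 # actual luminance samples per line
CB_SAMPLES = Y_SAMPLES // 2      # 4:2:2 format: Cb at half the Y rate
CR_SAMPLES = Y_SAMPLES // 2      # 4:2:2 format: Cr at half the Y rate
EAV_SAV_PER_STREAM = 280         # EAV + SAV samples preceding each stream

# Multiplexing the Y stream with the Cb/Cr stream concatenates the actual
# samples and carries the EAV/SAV overhead of both streams.
muxed_actual = Y_SAMPLES + CB_SAMPLES + CR_SAMPLES
muxed_eav_sav = 2 * EAV_SAV_PER_STREAM
print(muxed_actual, muxed_eav_sav)  # -> 3840 560
```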
The column buffer unit 12 stores and holds data streams D11 for individual columns, and continues to store and hold data streams D11 until data streams D11 for M columns are stored, as shown in FIGS. 3 and 4. The value M corresponds to the number of taps of horizontal filtering, and increases as the number of taps increases.
The horizontal analysis filter unit 13 sequentially reads column data D12 for M columns, and performs horizontal low-pass analysis filtering and horizontal high-pass analysis filtering. Due to the horizontal filtering, a low-frequency component (L) and a high-frequency component (H) D13, which are obtained by horizontal division, are generated, as shown in FIG. 5.
Immediately after the number of lines of low-frequency and high-frequency components D13 reaches N, the vertical analysis filter unit 14 performs vertical low-pass analysis filtering and vertical high-pass analysis filtering, as shown in FIGS. 3 and 5. The value N corresponds to the number of taps of vertical filtering, and increases as the number of taps increases. Due to the vertical filtering, a low-frequency component (1LL) D14 and high-frequency components (1HL, 1LH, and 1HH) D15, which are obtained by vertical division, are generated, as shown in FIGS. 3 and 6. Concerning the order of the letters "L" and "H" in FIGS. 3 and 6, the first letter indicates the band obtained after horizontal filtering, and the last letter indicates the band obtained after vertical filtering. In addition, the number before the letters indicates the division level.
As a result of analysis filtering at division level 1, the vertical analysis filter unit 14 generates the low-frequency component (1LL) D14 and the high-frequency components (1HL, 1LH, and 1HH) D15, as described above.
In wavelet transform, normally, a high-frequency component generated in the course of analysis filtering is not further analyzed. Thus, in the first embodiment, the high-frequency components (1HL, 1LH, and 1HH) D15 are output without being further analyzed. In contrast, the low-frequency component (1LL) D14 is supplied to the column buffer unit 12 so as to be further analyzed by the analysis filter bank. Immediately after the number of columns necessary for horizontal analysis filtering is buffered in the column buffer unit 12, analysis filtering at division level 2 is performed. The low-frequency component is repeatedly divided in this manner because most of the energy of an image signal is concentrated in the low-frequency component.
In the analysis filtering at division level 2, the horizontal analysis filter unit 13 sequentially reads column data D12 for M columns, and performs horizontal low-pass analysis filtering and horizontal high-pass analysis filtering. Then, immediately after the number of lines of low-frequency and high-frequency components D13 reaches N/2, the vertical analysis filter unit 14 performs vertical low-pass analysis filtering and vertical high-pass analysis filtering, as shown in FIG. 6. Due to the vertical filtering, a low-frequency component (2LL) and high-frequency components (2HL, 2LH, and 2HH) are generated, as shown in FIG. 7. Referring to FIG. 7, the subband 1LL at division level 1 is divided into four subbands, 2LL, 2HL, 2LH, and 2HH.
In order to further increase the division level, analysis filtering can be repeatedly performed for a low-frequency component. FIG. 8 shows an example in which subband division by analysis filtering is performed for an actual image until division level 3.
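As a rough illustration of the repeated dyadic division described above, the subband dimensions at each division level can be computed as follows (a sketch assuming a 1920×1080 frame and halving at every level; the function name is hypothetical):

```python
# Subband dimensions produced by repeatedly splitting the LL band,
# for a 1920x1080 frame analyzed until division level 3 (as in FIG. 8).
def subband_sizes(width, height, levels):
    sizes = {}
    w, h = width, height
    for lvl in range(1, levels + 1):
        # Each division halves both dimensions (rounding up for odd sizes).
        w, h = (w + 1) // 2, (h + 1) // 2
        for name in ("HL", "LH", "HH"):
            sizes[f"{lvl}{name}"] = (w, h)
    # Only the LL band of the final level is kept; earlier LL bands are
    # consumed by the next division.
    sizes[f"{levels}LL"] = (w, h)
    return sizes

print(subband_sizes(1920, 1080, 3))
```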
As the most general arithmetic method for the above-mentioned analysis filtering, a method called convolutional operation is available. The convolutional operation is the most fundamental procedure for realizing a digital filter: each filter tap coefficient is multiplied by the actual input data and the products are accumulated. However, the convolutional operation has the problem that the calculation load increases as the tap length increases.
As a technique for solving this problem, the lifting technique for wavelet transform described in W. Sweldens, "The Lifting Scheme: A Custom-design Construction of Biorthogonal Wavelets", Appl. Comput. Harmon. Anal., vol. 3, no. 2, pp. 186-200, 1996 is known.
FIG. 9 shows the lifting structure of a 5×3 analysis filter, which is adopted in the JPEG 2000 standard. Analysis filtering in which the lifting technique is applied to the 5×3 analysis filter will be schematically explained with reference to FIG. 9.
Referring to FIG. 9, pixels of an input image are shown in the uppermost row, high-frequency component outputs in the intermediate row, and low-frequency component outputs in the lowest row. The uppermost row need not necessarily contain pixels of an input image; coefficients obtained by the above-mentioned analysis filtering may appear there instead. In this embodiment, pixels of an input image are shown in the uppermost row. Even-numbered pixels or lines are represented as squares, and odd-numbered pixels or lines are represented as circles.
As the first step, a high-frequency component coefficient d_i^1 is generated from the input pixels, using the following equation:
d_i^1 = d_i^0 − ½(s_i^0 + s_{i+1}^0)  (1).
Then, as the second step, a low-frequency component coefficient s_i^1 is generated in accordance with the generated high-frequency component coefficients and the even-numbered pixels from among the input pixels, using the following equation:
s_i^1 = s_i^0 + ¼(d_{i−1}^1 + d_i^1)  (2).
As described above, in analysis filtering, a high-frequency component is generated first, and then a low-frequency component is generated. The two filters used in such lifting-based analysis filtering have only two taps each, which can be represented in Z-transform notation as P(z) = (1 + z^(−1))/2 and U(z) = (1 + z^(−1))/4. That is, although five taps are originally necessary, only two taps are necessary in this embodiment. Thus, the amount of calculation can be significantly reduced. Therefore, it is desirable that such a lifting technique be used for the horizontal filtering and vertical filtering in the band analysis apparatus 10.
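A one-dimensional sketch of the two lifting steps of equations (1) and (2) follows. It uses the ½ and ¼ weights given above; the boundary handling shown (repeating the nearest coefficient at the edges) is a simplifying assumption, as the text does not specify an extension rule:

```python
# One level of 5x3 analysis lifting, following equations (1) and (2).
def lift_53_analysis(x):
    """Split a 1-D signal into low-pass s and high-pass d coefficients."""
    s = [float(v) for v in x[0::2]]   # even samples s_i^0
    d = [float(v) for v in x[1::2]]   # odd samples d_i^0
    # Predict step, equation (1): d_i^1 = d_i^0 - (s_i^0 + s_{i+1}^0)/2
    for i in range(len(d)):
        right = s[i + 1] if i + 1 < len(s) else s[i]  # edge: repeat neighbor
        d[i] -= 0.5 * (s[i] + right)
    # Update step, equation (2): s_i^1 = s_i^0 + (d_{i-1}^1 + d_i^1)/4
    for i in range(len(s)):
        left = d[i - 1] if i > 0 else d[0]            # edge: repeat neighbor
        cur = d[i] if i < len(d) else d[-1]
        s[i] += 0.25 * (left + cur)
    return s, d

s, d = lift_53_analysis([1, 1, 1, 1, 1, 1])
print(d)  # a constant signal yields all-zero high-frequency coefficients
```

Note that, as stated above, the high-frequency coefficients are computed first because the update step of equation (2) depends on them.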
A data stream D11 for an image line includes a luminance signal Y and color-difference signals Cb and Cr that are multiplexed together, as described above. In this case, using the sampling rate of each component (Y, Cb, Cr) effectively achieves analysis filtering. That is, as shown in FIG. 10, Y data is input every two cycles, whereas each of Cb data and Cr data is input every four cycles. Thus, by sequentially performing analysis filtering utilizing the time differences among the input Y, Cb, and Cr data, Y, Cb, Cr filtering can be performed using a single analysis filter bank without delay.
Since the above-mentioned analysis filtering is performed in units of pictures (fields/frames) forming a video signal, it is necessary to detect an end point of a picture and to stop and reset an operation of analysis filtering.
The image line input unit 11 may include a unit to detect a vertical synchronizing signal of a video signal, so that the end point of a picture can be detected. FIG. 11 includes signal distribution diagrams showing an interlace signal from among signals based on the SMPTE 274M standard. In FIG. 11, the upper diagram shows the first field and the lower diagram shows the second field. Referring to FIG. 11, a vertical synchronizing signal for 22 lines is disposed at the beginning of the first field, and a vertical synchronizing signal for 23 lines is disposed at the beginning of the second field. Thus, with such a vertical synchronizing signal, the end point of a picture can be easily detected. Then, immediately after detection, the operation of analysis filtering can be stopped.
As described above, as wavelet transform of the related art, line-based wavelet transform described in C. Chrysafis and A. Ortega, “Line Based, Reduced Memory, Wavelet Image Compression”, IEEE Trans. Image Processing, Vol. 9, pp. 378-389, March 2000 is known.
FIG. 12 schematically shows a configuration of a band analysis apparatus 100 that achieves such line-based wavelet transform. Referring to FIG. 12, the band analysis apparatus 100 includes an image line input unit 101, a line buffer unit 102, a vertical analysis filter unit 103, and a horizontal analysis filter unit 104.
The image line input unit 101 receives a video signal D100 for each line, and supplies a data stream D101 for the image line to the line buffer unit 102.
The line buffer unit 102 stores and holds data streams D101 for individual lines, and continues to store and hold data streams D101 until data streams D101 for N lines are stored, as shown in FIG. 13.
The vertical analysis filter unit 103 sequentially reads line data D102 for N lines, and performs vertical low-pass analysis filtering and vertical high-pass analysis filtering, as shown in FIG. 13. Due to the vertical filtering, a low-frequency component (L) and a high-frequency component (H) D103, which are obtained by vertical division, are generated.
Immediately after the number of columns of low-frequency and high-frequency components D103 reaches M, the horizontal analysis filter unit 104 performs horizontal low-pass analysis filtering and horizontal high-pass analysis filtering, as shown in FIG. 13. Due to the horizontal filtering, a low-frequency component (1LL) D104 and high-frequency components (1HL, 1LH, and 1HH) D105, which are obtained by horizontal division, are generated.
As described above, a subband that is generated by the line-based wavelet transform described in C. Chrysafis and A. Ortega, "Line Based, Reduced Memory, Wavelet Image Compression", IEEE Trans. Image Processing, Vol. 9, pp. 378-389, March 2000 is the same as a subband that is generated by wavelet transform performed by the band analysis apparatus 10.
However, for the line-based wavelet transform, buffering corresponding to the value obtained by multiplying the size of an image in the horizontal direction by N (lines) is necessary, as shown in FIG. 13. In addition, since vertical filtering is performed after such buffering is completed, a delay time, which is the time until vertical filtering is started, is generated.
In contrast, only 1 (line) × M (columns) column buffers are necessary for wavelet transform to be performed by the band analysis apparatus 10. Thus, the necessary memory capacity can be significantly reduced compared with a case where a line buffer is used. Moreover, since horizontal analysis filtering can be started immediately after data for the number of column buffers is input, the delay time until wavelet transform is started can be significantly reduced compared with line-based wavelet transform.
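The buffering difference can be made concrete with a small calculation. The tap counts N = M = 3 below are illustrative values, not figures taken from the text:

```python
# Rough buffer-size comparison for division level 1 on a 1920-sample-wide
# image: the line-based transform buffers N full lines before vertical
# filtering, while the column-based order of FIG. 1 buffers only M samples.
WIDTH = 1920
N = 3   # vertical taps (illustrative)
M = 3   # horizontal taps (illustrative)

line_buffer_samples = WIDTH * N   # line-based wavelet transform (FIG. 13)
column_buffer_samples = 1 * M     # band analysis apparatus 10 (1 line x M)
print(line_buffer_samples, column_buffer_samples)  # -> 5760 3
```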
Second Embodiment
The band analysis apparatus 10 that divides a video signal into a plurality of subbands by performing wavelet transform has been described in the first embodiment. Normally, wavelet transform is often used as preprocessing for image compression. An image encoding apparatus according to a second embodiment that compresses and encodes coefficient data generated by wavelet transform will be described.
FIG. 14 schematically shows a configuration of an image encoding apparatus 20 according to the second embodiment. The image encoding apparatus 20 includes an analysis filter bank 21, a quantization unit 22, an entropy-coding unit 23, and a rate controller 24.
The analysis filter bank 21 has a configuration similar to that of the band analysis apparatus 10 shown in FIG. 1. That is, the analysis filter bank 21 performs analysis filtering of an input video signal D20, and supplies coefficient data D21 obtained by analysis to the quantization unit 22. For example, in analysis filtering at division level 2, by performing wavelet transform of four lines of the subband 1LL generated by analysis filtering at division level 1, two lines of subbands 2LL, 2HL, 2LH, and 2HH are obtained. In analysis filtering at division level 3, by performing wavelet transform of two lines of the subband 2LL, a line of subbands 3LL, 3HL, 3LH, and 3HH is obtained. When analysis filtering at division level 3 is the final analysis filtering, the subband 3LL is the lowest-frequency subband.
The quantization unit 22 performs quantization by dividing the coefficient data D21 generated by the analysis filter bank 21 by, for example, a quantization step size, and generates quantized coefficient data D22.
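A minimal sketch of this step follows; truncation toward zero as the rounding rule is an assumption, since the text specifies only division by a quantization step size:

```python
# Quantization as described: divide each wavelet coefficient by a step size.
# The rounding rule (truncation toward zero) is an assumption.
def quantize(coeffs, step):
    return [int(c / step) for c in coeffs]

# Approximate inverse used on the decoding side.
def dequantize(qcoeffs, step):
    return [q * step for q in qcoeffs]

q = quantize([10.0, -7.5, 3.2, 0.4], 2.0)
print(q)  # -> [5, -3, 1, 0]
```

A larger step discards more precision and therefore lowers the bit rate, which is what the rate controller described below exploits.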
The quantization unit 22 may form line blocks each including a line of the generated lowest-frequency subband (3LL, in the above-mentioned case) and the plurality of lines of other subbands necessary for generating that line of the lowest-frequency subband, and may set a quantization step size for each line block. Since a line block includes coefficients of all the subbands of an image area (ten subbands from 3LL to 1HH in the example shown in FIG. 8), if quantization is performed for each line block, the advantage of multiple resolution analysis, which is a feature of wavelet transform, can be exploited. In addition, since only the number of line blocks for the entire screen needs to be determined, the load imposed on the image encoding apparatus 20 can be reduced.
In addition, since the energy of an image signal is generally concentrated in a low-frequency component and deterioration in the low-frequency component is conspicuous due to human visual characteristics, it is effective to perform weighting in quantization such that the quantization step size of the low-frequency component is smaller. Due to such weighting, a relatively larger amount of information can be allocated to the low-frequency component, thus improving the subjective quality of the entire image.
The entropy-coding unit 23 performs source encoding of the quantized coefficient data D22 generated by the quantization unit 22, and generates a compressed encoded code-stream D23. As the source encoding, for example, Huffman coding, adopted in JPEG and the Moving Picture Experts Group (MPEG) standards, or high-precision arithmetic coding, adopted in JPEG 2000, can be used.
Determination of a range of coefficients to be subjected to entropy coding is a very important factor that directly relates to compression efficiency. For example, in the JPEG and the MPEG, information is compressed by performing DCT transform for an 8×8 block and then performing Huffman coding for generated 64 DCT transform coefficients. That is, the 64 DCT transform coefficients form the range of entropy coding.
The analysis filter bank 21 performs wavelet transform in units of lines, unlike DCT transform, which is performed for an 8×8 block. Thus, the entropy-coding unit 23 performs source coding for individual subbands and for every P lines of a subband.
The value P is at least 1. The amount of necessary reference information decreases as the number of lines decreases, so the necessary memory capacity can be reduced. In contrast, the amount of available information increases as the number of lines increases, so the encoding efficiency can be increased. However, if the value P exceeds the number of lines forming a line block for a subband, a line forming the next line block is also needed, and a delay arises until the quantized coefficient data of that next line block is produced by wavelet transform and quantization. Thus, in order to achieve low delay, the value P must be equal to or smaller than the number of lines forming a line block. For example, in the example shown in FIG. 8, since the number of lines forming a line block for the subbands 3LL, 3HL, 3LH, and 3HH is 1, the value P is set to 1. In addition, since the number of lines forming a line block for the subbands 2HL, 2LH, and 2HH is 2, the value P is set to 1 or 2.
The rate controller 24 performs control so as to achieve a desired bit rate or compression rate. After performing rate control, the rate controller 24 outputs an encoded code-stream D24 whose rate has been controlled. For example, in order to achieve a higher bit rate, the rate controller 24 transmits to the quantization unit 22 a control signal D25 for decreasing the quantization step size. In contrast, in order to achieve a lower bit rate, the rate controller 24 transmits to the quantization unit 22 a control signal D25 for increasing the quantization step size.
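The feedback between the rate controller 24 and the quantization unit 22 can be sketched as a simple proportional adjustment. The control law, the gain, and the function name are assumptions; the text states only that the step size is raised or lowered:

```python
# Proportional rate-control sketch: raise the quantization step when the
# measured bit rate exceeds the target (coarser quantization, lower rate),
# and lower it when the rate is under the target.
def adjust_step(step, measured_rate, target_rate, gain=0.1):
    error = (measured_rate - target_rate) / target_rate
    # Keep the step strictly positive.
    return max(step * (1.0 + gain * error), 1e-6)

new_step = adjust_step(2.0, measured_rate=12.0, target_rate=10.0)
print(new_step > 2.0)  # over target -> larger step
```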
Third Embodiment
A band synthesis apparatus according to a third embodiment that corresponds to the band analysis apparatus 10 according to the first embodiment will be described. In the third embodiment, synthesis filtering of an image that has been subjected to subband division until division level 3, as shown in FIG. 8, is performed.
FIG. 15 schematically shows a configuration of a band synthesis apparatus 30 according to the third embodiment. Referring to FIG. 15, the band synthesis apparatus 30 includes a line buffer unit 31, a vertical synthesis filter unit 32, a column buffer unit 33, a horizontal synthesis filter unit 34, and a vertical synchronizing signal insertion unit 35.
The line buffer unit 31 stores and holds a low-frequency component (3LL) D30 and high-frequency components (3HL, 3LH, and 3HH) D31 for each line. The line buffer unit 31 continues to store and hold low-frequency components D30 and high-frequency components D31 until low-frequency components D30 and high-frequency components D31 for N lines are stored. A low-frequency component D30 is input to the line buffer unit 31 only for the lowest-frequency subband 3LL. Thereafter, low-frequency components D35 generated by synthesis filtering are supplied from the horizontal synthesis filter unit 34.
The vertical synthesis filter unit 32 sequentially reads line data D32 for N lines, and performs vertical low-pass synthesis filtering and vertical high-pass synthesis filtering. Due to the vertical filtering, low-frequency and high-frequency components D33, which are obtained by vertical synthesis, are generated.
The column buffer unit 33 stores and holds the low-frequency and high-frequency components D33, which are obtained by vertical synthesis, for individual columns, and continues to store and hold low-frequency and high-frequency components D33 until low-frequency and high-frequency components D33 for M columns are stored.
The horizontalsynthesis filter unit34 sequentially reads column data D34 for M columns, and performs horizontal low-pass synthesis filtering and horizontal high-pass synthesis filtering. Due to the horizontal filtering, a low-frequency component (2LL) D35, which is obtained by horizontal synthesis, is generated.
As a result of synthesis filtering at division level 3, the horizontal synthesis filter unit 34 generates the low-frequency component (2LL).
Similarly, in synthesis filtering at division level 2, a low-frequency component (1LL) D35 is generated from the low-frequency component (2LL) D35 and the high-frequency components (2HL, 2LH, and 2HH) D31. In addition, in synthesis filtering at division level 1, an image data stream is generated from the low-frequency component (1LL) D35 and the high-frequency components (1HL, 1LH, and 1HH). The generated image data stream is supplied to the vertical synchronizing signal insertion unit 35.
The vertical synchronizing signal insertion unit 35 inserts a vertical synchronizing signal into the image data stream at a predetermined timing, as shown in FIG. 11, and outputs a generated video signal D36.
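The accumulate-then-filter behavior described above, in which coefficient lines are buffered until N of them are available for vertical synthesis filtering, can be modeled with a toy accumulator. The class and method names are illustrative assumptions, not part of the apparatus; a real line buffer would hold coefficient lines per subband:

```python
class LineBuffer:
    """Toy model of the line buffer: accumulates coefficient lines and
    releases them as a batch as soon as n_lines lines are stored, the
    batch then being ready for vertical synthesis filtering."""

    def __init__(self, n_lines):
        self.n = n_lines
        self.lines = []

    def push(self, line):
        """Store one line; return a batch of n_lines lines when full,
        otherwise return None (still accumulating)."""
        self.lines.append(line)
        if len(self.lines) == self.n:
            batch, self.lines = self.lines, []
            return batch
        return None
```

Pushing lines one at a time returns None until the Nth line arrives, at which point the whole batch is handed to the next stage and the buffer starts over.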
A lifting technique can also be applied to the above-mentioned synthesis filtering.
FIG. 16 shows a lifting structure of a 5×3 synthesis filter, which is adopted in the JPEG 2000 standard. Synthesis filtering in which the lifting technique is applied to the 5×3 synthesis filter will be schematically explained with reference to FIG. 16.
Referring to FIG. 16, coefficients generated by wavelet transform are shown in the uppermost row. High-frequency component coefficients are represented as circles, and low-frequency component coefficients are represented as squares.
As the first step, an even-numbered coefficient s_i^0 (the first coefficient is regarded as the 0th coefficient) is generated in accordance with the input low-frequency and high-frequency component coefficients, using the following equation:
s_i^0 = s_i^1 − (1/4)(d_(i−1)^1 + d_i^1)  (3).
Then, as the second step, an odd-numbered coefficient d_i^0 is generated in accordance with the even-numbered coefficients generated in the first step and the input high-frequency component coefficient d_i^1, using the following equation:
d_i^0 = d_i^1 + (1/2)(s_i^0 + s_(i+1)^0)  (4).
As described above, in synthesis filtering, an even-numbered coefficient is generated first, and an odd-numbered coefficient is then generated. Each of the two lifting steps used for such synthesis filtering requires only two taps, although a direct implementation of the filter would originally require five taps. Thus, the amount of calculation can be significantly reduced.
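The two synthesis lifting steps described above, together with the corresponding analysis steps they invert, can be sketched for a one-dimensional signal as follows. Floating-point arithmetic and simple sample repetition at the signal edges are assumptions made for this sketch (the description does not specify edge handling); note that the odd-sample synthesis step must add back the prediction term so that it exactly undoes the analysis prediction step:

```python
def lifting_53_analysis(x):
    """Forward 5x3 lifting (analysis) on a 1-D signal of even length.
    The predict step turns odd samples into high-frequency coefficients d;
    the update step turns even samples into low-frequency coefficients s."""
    s = list(x[0::2])            # even-numbered samples
    d = list(x[1::2])            # odd-numbered samples
    for i in range(len(d)):      # predict: d_i^1 = d_i^0 - 1/2 (s_i^0 + s_(i+1)^0)
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= 0.5 * (s[i] + right)
    for i in range(len(s)):      # update: s_i^1 = s_i^0 + 1/4 (d_(i-1)^1 + d_i^1)
        left = d[i - 1] if i > 0 else d[0]
        s[i] += 0.25 * (left + d[i])
    return s, d                  # low-frequency, high-frequency coefficients

def lifting_53_synthesis(s, d):
    """Inverse 5x3 lifting (synthesis): first recover the even-numbered
    samples, then the odd-numbered ones; each step uses only two taps."""
    s, d = list(s), list(d)
    for i in range(len(s)):      # s_i^0 = s_i^1 - 1/4 (d_(i-1)^1 + d_i^1)
        left = d[i - 1] if i > 0 else d[0]
        s[i] -= 0.25 * (left + d[i])
    for i in range(len(d)):      # d_i^0 = d_i^1 + 1/2 (s_i^0 + s_(i+1)^0)
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += 0.5 * (s[i] + right)
    x = []
    for lo, hi in zip(s, d):     # re-interleave even and odd samples
        x += [lo, hi]
    return x
```

Because the synthesis steps apply the inverse operations in reverse order with identical edge handling, running analysis followed by synthesis reconstructs the input signal exactly.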
Fourth Embodiment
An image decoding apparatus according to a fourth embodiment that corresponds to the image encoding apparatus 20 according to the second embodiment will be described.
FIG. 17 schematically shows a configuration of an image decoding apparatus 40 according to the fourth embodiment. Referring to FIG. 17, the image decoding apparatus 40 includes an entropy-decoding unit 41, a dequantization unit 42, and a synthesis filter bank 43.
The entropy-decoding unit 41 performs source decoding of a received encoded code-stream D40, and generates quantized coefficient data D41. As source decoding, Huffman decoding or high-efficiency arithmetic decoding can be used, as described above. In addition, if the image encoding apparatus performs source coding in units of P lines, as described above, the entropy-decoding unit 41 also performs source decoding for individual subbands and for every P lines of a subband.
The dequantization unit 42 performs dequantization by multiplying the quantized coefficient data D41 by a quantization step size, and generates coefficient data D42. The quantization step size is normally described in the header of an encoded code-stream. If the image encoding apparatus sets a quantization step size for each line block, as described above, the dequantization unit 42 also performs dequantization by setting a dequantization step size for each line block.
The synthesis filter bank 43 has a configuration similar to that of the band synthesis apparatus 30 shown in FIG. 15. That is, the synthesis filter bank 43 performs synthesis filtering of the coefficient data D42 to generate an image data stream, inserts a vertical synchronizing signal into the generated image data stream, and outputs a generated video signal D43.
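The dequantization described above reduces to multiplying each quantized coefficient of a line block by the step size set for that line block; a minimal sketch (the function name is an assumption for this illustration):

```python
def dequantize_line_block(quantized_coeffs, step_size):
    """Dequantize one line block: multiply each quantized coefficient
    by the quantization step size recorded for that line block."""
    return [q * step_size for q in quantized_coeffs]
```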
The present invention is not limited to any of the first to fourth embodiments described above. Various changes and modifications can be made to the present invention without departing from the spirit and scope of the present invention.
For example, band analysis or band synthesis is performed for a video signal in each of the foregoing embodiments. However, the present invention is also applicable to band analysis or band synthesis of a still image.
Although hardware configurations have been described in the foregoing embodiments, a series of processing may be performed by software. In this case, a program constituting the software may be incorporated in advance in dedicated hardware of a computer, such as a read-only memory (ROM) or a hard disk, or installed from a network or a recording medium onto a general-purpose personal computer capable of performing various functions by installing various programs. As the recording medium, for example, a package medium including a magnetic disk (flexible disk), an optical disc, such as a compact disc-read only memory (CD-ROM) or a digital versatile disc (DVD), a magneto-optical disk, such as a mini-disc (MD) (trademark), or a semiconductor memory can be used.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (19)

1. An image processing apparatus comprising:
horizontal analysis filtering means for receiving image data in units of lines and for generating a low-frequency component and a high-frequency component by performing horizontal low-pass analysis filtering and horizontal high-pass analysis filtering every time a number of samples in a horizontal direction reaches a predetermined value;
vertical analysis filtering means for generating coefficient data of a plurality of subbands by performing vertical low-pass analysis filtering and vertical high-pass analysis filtering every time a number of lines in a vertical direction of low-frequency and high-frequency components generated by the horizontal analysis filtering means reaches a predetermined value, and
quantization means for determining an adaptive quantization step size with respect to each line block and for quantizing the coefficient data of the plurality of subbands generated by the vertical analysis filtering means by weights, the weights being chosen such that a weighted quantization step size is smaller for a low-frequency component and is higher for a high-frequency component for each line block, the quantizing generating quantized coefficient data,
wherein the line block includes a line of a lowest-frequency subband and a plurality of lines of other subbands necessary for generating the line of the lowest-frequency subband.
12. An image processing method comprising the steps of:
receiving image data in units of lines and generating a low-frequency component and a high-frequency component by performing horizontal low-pass analysis filtering and horizontal high-pass analysis filtering every time a number of samples in a horizontal direction reaches a predetermined value;
generating coefficient data of a plurality of subbands by performing vertical low-pass analysis filtering and vertical high-pass analysis filtering every time a number of lines in a vertical direction of generated low-frequency and high-frequency components reaches a predetermined value; and
determining an adaptive quantization step size with respect to each line block and quantizing the coefficient data of the plurality of subbands generated by said step of generating coefficient data by weights, the weights being chosen such that a weighted quantization step size is smaller for a low-frequency component and is higher for a high-frequency component for each line block, the quantizing generating quantized coefficient data,
wherein the line block includes a line of a lowest-frequency subband and a plurality of lines of other subbands necessary for generating the line of the lowest-frequency subband.
13. An image processing apparatus comprising:
input means for inputting coefficient data of a plurality of subbands generated by performing horizontal low-pass and high-pass analysis filtering and vertical low-pass and high-pass analysis filtering of image data;
decoding means for generating the coefficient data of the plurality of subbands by decoding an encoded stream that is generated by encoding the coefficient data of the plurality of subbands generated by performing the horizontal low-pass and high-pass analysis filtering and the vertical low-pass and high-pass analysis filtering of the input image data;
vertical synthesis filtering means for generating a low-frequency component and a high-frequency component by performing, every time a number of lines in a vertical direction reaches a predetermined value, vertical low-pass synthesis filtering and vertical high-pass synthesis filtering, of the coefficient data of the plurality of subbands input by the input means;
horizontal synthesis filtering means for synthesizing a predetermined number of subbands by performing horizontal low-pass synthesis filtering and horizontal high-pass synthesis filtering every time a number of samples in a horizontal direction of low-frequency and high-frequency components generated by the vertical synthesis filtering means reaches a predetermined value;
entropy-decoding means for performing entropy decoding of the encoded stream to generate quantized coefficient data of the plurality of subbands; and
dequantization means for dequantizing the quantized coefficient data generated by the entropy-decoding means with respect to each line block to generate the dequantized coefficient data of the plurality of subbands,
wherein the dequantization means forms line blocks each including a line of a lowest-frequency subband and a plurality of lines of other subbands necessary for generating the line of the lowest-frequency subband, and sets an adaptive quantization step size of each line block, a weighted quantization step size being set smaller for a low-frequency component and being set higher for a high-frequency component for each line block.
17. An image processing method comprising the steps of:
inputting coefficient data of a plurality of subbands generated by performing horizontal low-pass and high-pass analysis filtering and vertical low-pass and high-pass analysis filtering of image data;
generating the coefficient data of the plurality of subbands by decoding an encoded stream that is generated by encoding the coefficient data of the plurality of subbands generated by performing the horizontal low-pass and high-pass analysis filtering and the vertical low-pass and high-pass analysis filtering of the image data of said step of inputting;
generating a low-frequency component and a high-frequency component by performing, every time a number of lines in a vertical direction reaches a predetermined value, vertical low-pass synthesis filtering and vertical high-pass synthesis filtering of the input coefficient data of the plurality of subbands;
synthesizing a predetermined number of subbands by performing horizontal low-pass synthesis filtering and horizontal high-pass synthesis filtering every time a number of samples in a horizontal direction of generated low-frequency and high-frequency components reaches a predetermined value;
performing entropy decoding of the encoded stream to generate quantized coefficient data of the plurality of subbands; and
dequantizing the quantized coefficient data generated by said step of performing entropy decoding with respect to each line block to generate the dequantized coefficient data of the plurality of subbands,
wherein the step of dequantizing forms line blocks each including a line of a lowest-frequency subband and a plurality of lines of other subbands necessary for generating the line of the lowest-frequency subband, and sets an adaptive quantization step size of each line block, a weighted quantization step size being set smaller for a low-frequency component and being set higher for a high-frequency component for each line block.
18. An image processing apparatus including a hardware processor, the apparatus comprising:
a horizontal analysis filtering unit operating on the hardware processor that receives image data in units of lines and generates a low-frequency component and a high-frequency component by performing horizontal low-pass analysis filtering and horizontal high-pass analysis filtering every time a number of samples in a horizontal direction reaches a predetermined value;
a vertical analysis filtering unit operating on the hardware processor that generates coefficient data of a plurality of subbands by performing vertical low-pass analysis filtering and vertical high-pass analysis filtering every time a number of lines in a vertical direction of low-frequency and high-frequency components generated by the horizontal analysis filtering unit reaches a predetermined value, and
a quantization unit operating on the hardware processor that determines an adaptive quantization step size with respect to each line block and that quantizes the coefficient data of the plurality of subbands generated by the vertical analysis filtering unit by weights, the weights being chosen such that a weighted quantization step size is smaller for a low-frequency component and is higher for a high-frequency component for each line block, the quantization generating quantized coefficient data,
wherein the line block includes a line of a lowest-frequency subband and a plurality of lines of other subbands necessary for generating the line of the lowest-frequency subband.
19. An image processing apparatus including a hardware processor, the apparatus comprising:
an input unit that inputs coefficient data of a plurality of subbands generated by performing horizontal low-pass and high-pass analysis filtering and vertical low-pass and high-pass analysis filtering of image data;
a decoding unit that generates the coefficient data of the plurality of subbands by decoding an encoded stream that is generated by encoding the coefficient data of the plurality of subbands generated by performing the horizontal low-pass and high-pass analysis filtering and the vertical low-pass and high-pass analysis filtering of the image data from the input unit;
a vertical synthesis filtering unit operating on the hardware processor that generates a low-frequency component and a high-frequency component by performing, every time a number of lines in a vertical direction reaches a predetermined value, vertical low-pass synthesis filtering and vertical high-pass synthesis filtering of the coefficient data of the plurality of subbands input by the input unit;
a horizontal synthesis filtering unit operating on the hardware processor that synthesizes a predetermined number of subbands by performing horizontal low-pass synthesis filtering and horizontal high-pass synthesis filtering every time a number of samples in a horizontal direction of low-frequency and high-frequency components generated by the vertical synthesis filtering unit reaches a predetermined value;
entropy-decoding unit operating on the hardware processor for performing entropy decoding of the encoded stream to generate quantized coefficient data of the plurality of subbands; and
dequantization unit for dequantizing the quantized coefficient data generated by the entropy-decoding unit with respect to each line block to generate the dequantized coefficient data of the plurality of subbands,
wherein the dequantization unit forms line blocks each including a line of a lowest-frequency subband and a plurality of lines of other subbands necessary for generating the line of the lowest-frequency subband, and sets an adaptive quantization step size of each line block, a weighted quantization step size being set smaller for a low-frequency component and being set higher for a high-frequency component for each line block.
US11/742,813 | 2006-05-16 | 2007-05-01 | Image processing apparatus and image processing method | Expired - Fee Related | US8189932B2 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
JP2006-136875 | 2006-05-16
JP2006136875A | JP4360379B2 (en) | 2006-05-16 | 2006-05-16 | Image processing apparatus, image processing method, program, and recording medium

Publications (2)

Publication Number | Publication Date
US20070286510A1 (en) | 2007-12-13
US8189932B2 (en) | 2012-05-29

Family

ID=38822057

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
US11/742,813 | Expired - Fee Related | US8189932B2 (en) | 2006-05-16 | 2007-05-01 | Image processing apparatus and image processing method

Country Status (5)

Country | Link
US (1) | US8189932B2 (en)
JP (1) | JP4360379B2 (en)
KR (1) | KR20070111363A (en)
CN (1) | CN101076117B (en)
TW (1) | TWI379593B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110135208A1 (en)* | 2009-12-03 | 2011-06-09 | Qualcomm Incorporated | Digital image combining to produce optical effects

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP4356032B2 (en)* | 2007-05-17 | 2009-11-04 | Sony Corporation | Information processing apparatus and method
JP4356033B2 (en)* | 2007-05-17 | 2009-11-04 | Sony Corporation | Image data processing apparatus and method
JP4525704B2 (en)* | 2007-05-17 | 2010-08-18 | Sony Corporation | Encoding apparatus and method, recording medium, and program
JP4356028B2 (en)* | 2007-05-17 | 2009-11-04 | Sony Corporation | Information processing apparatus and method
JP4356030B2 (en)* | 2007-05-17 | 2009-11-04 | Sony Corporation | Information processing apparatus and method
JP4356029B2 (en)* | 2007-05-17 | 2009-11-04 | Sony Corporation | Information processing apparatus and method
JP4356031B2 (en)* | 2007-05-17 | 2009-11-04 | Sony Corporation | Information processing apparatus and method
JP4793320B2 (en)* | 2007-05-17 | 2011-10-12 | Sony Corporation | Information processing apparatus and method
CN101686389B (en)* | 2008-09-28 | 2012-10-03 | Fujitsu Limited | Image down sampling method and system
JP4670947B2 (en)* | 2008-12-05 | 2011-04-13 | Sony Corporation | Information processing apparatus and method
JP4626707B2 (en)* | 2008-12-08 | 2011-02-09 | Sony Corporation | Information processing apparatus and method
JP5640370B2 (en)* | 2009-12-18 | 2014-12-17 | Sony Corporation | Image processing apparatus, image processing method, and imaging apparatus
JP2011147050A (en)* | 2010-01-18 | 2011-07-28 | Sony Corp | Image processing apparatus and method
JP2011160075A (en)* | 2010-01-29 | 2011-08-18 | Sony Corp | Image processing device and method
US9210426B2 (en)* | 2011-06-30 | 2015-12-08 | Mitsubishi Electric Corporation | Image coding device, image decoding device, image coding method, and image decoding method
EP2575364B1 (en)* | 2011-09-30 | 2020-03-18 | BlackBerry Limited | Methods and devices for data compression using a non-uniform reconstruction space
CN102611907B (en)* | 2012-03-16 | 2014-04-09 | Tsinghua University | Multi-resolution video in-situ filtering method and multi-resolution video in-situ filtering device
CN108156462A (en)* | 2017-12-28 | 2018-06-12 | Shanghai Tongtu Semiconductor Technology Co., Ltd. | Image compression and decompression method and system, and ME architecture applying the same
CN111968031B (en)* | 2020-07-14 | 2024-07-16 | Zhejiang Dahua Technology Co., Ltd. | Image stitching method and device, storage medium and electronic device
CN112085094B (en)* | 2020-09-08 | 2024-04-05 | Ping An Property & Casualty Insurance Company of China, Ltd. | Document image reproduction detection method, device, computer equipment and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPH1028031A (en) | 1996-07-10 | 1998-01-27 | Sony Corp | Digital filter
JPH1063643A (en) | 1996-08-19 | 1998-03-06 | Fuji Photo Film Co Ltd | Method for transforming picture
JPH11127439A (en) | 1997-10-24 | 1999-05-11 | Seiko Epson Corp | Image encoding device, its method, image decoding device and its method
JP2001285643A (en) | 2000-03-30 | 2001-10-12 | Canon Inc | Image conversion apparatus and method
US6332043B1 (en)* | 1997-03-28 | 2001-12-18 | Sony Corporation | Data encoding method and apparatus, data decoding method and apparatus and recording medium
US20020048405A1 (en)* | 1994-09-20 | 2002-04-25 | Ahmad Zandi | Method for compression using reversible embedded wavelets
US20020168113A1 (en)* | 2001-03-13 | 2002-11-14 | Tadayoshi Nakayama | Filter processing apparatus
US6560369B1 (en)* | 1998-12-11 | 2003-05-06 | Canon Kabushiki Kaisha | Conversion of wavelet coded formats depending on input and output buffer capacities
US20030147463A1 (en)* | 2001-11-30 | 2003-08-07 | Sony Corporation | Method and apparatus for coding image information, method and apparatus for decoding image information, method and apparatus for coding and decoding image information, and system of coding and transmitting image information
US20030190082A1 (en)* | 1999-11-03 | 2003-10-09 | Egbert Ammicht | Methods and apparatus for wavelet-based image compression
US6658379B1 (en)* | 1997-02-10 | 2003-12-02 | Sony Corporation | Wavelet processing with leading and trailing edge extrapolation
US20040141652A1 (en)* | 2002-10-25 | 2004-07-22 | Sony Corporation | Picture encoding apparatus and method, program and recording medium
JP2005150846A (en) | 2003-11-11 | 2005-06-09 | Canon Inc | Image processing method and apparatus
US20050259880A1 (en)* | 2000-03-10 | 2005-11-24 | Takahiro Fukuhara | Block area wavelet transform picture encoding apparatus
US6973127B1 (en)* | 1999-12-23 | 2005-12-06 | Xvd Corporation | Apparatus and method for memory saving wavelet based video coding
US20060053004A1 (en)* | 2002-09-17 | 2006-03-09 | Vladimir Ceperkovic | Fast codec with high compression ratio and minimum required resources

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20020048405A1 (en)* | 1994-09-20 | 2002-04-25 | Ahmad Zandi | Method for compression using reversible embedded wavelets
JPH1028031A (en) | 1996-07-10 | 1998-01-27 | Sony Corp | Digital filter
JPH1063643A (en) | 1996-08-19 | 1998-03-06 | Fuji Photo Film Co Ltd | Method for transforming picture
US6658379B1 (en)* | 1997-02-10 | 2003-12-02 | Sony Corporation | Wavelet processing with leading and trailing edge extrapolation
US6332043B1 (en)* | 1997-03-28 | 2001-12-18 | Sony Corporation | Data encoding method and apparatus, data decoding method and apparatus and recording medium
JPH11127439A (en) | 1997-10-24 | 1999-05-11 | Seiko Epson Corp | Image encoding device, its method, image decoding device and its method
US6560369B1 (en)* | 1998-12-11 | 2003-05-06 | Canon Kabushiki Kaisha | Conversion of wavelet coded formats depending on input and output buffer capacities
US6788820B2 (en)* | 1999-11-03 | 2004-09-07 | Lucent Technologies Inc. | Methods and apparatus for wavelet-based image compression
US20030190082A1 (en)* | 1999-11-03 | 2003-10-09 | Egbert Ammicht | Methods and apparatus for wavelet-based image compression
US6973127B1 (en)* | 1999-12-23 | 2005-12-06 | Xvd Corporation | Apparatus and method for memory saving wavelet based video coding
US20050259880A1 (en)* | 2000-03-10 | 2005-11-24 | Takahiro Fukuhara | Block area wavelet transform picture encoding apparatus
US20050265617A1 (en)* | 2000-03-10 | 2005-12-01 | Takahiro Fukuhara | Block area wavelet transform picture encoding apparatus
US7016546B2 (en)* | 2000-03-10 | 2006-03-21 | Sony Corporation | Block area wavelet transform picture encoding apparatus
JP2001285643A (en) | 2000-03-30 | 2001-10-12 | Canon Inc | Image conversion apparatus and method
US20020168113A1 (en)* | 2001-03-13 | 2002-11-14 | Tadayoshi Nakayama | Filter processing apparatus
US20030147463A1 (en)* | 2001-11-30 | 2003-08-07 | Sony Corporation | Method and apparatus for coding image information, method and apparatus for decoding image information, method and apparatus for coding and decoding image information, and system of coding and transmitting image information
US20060053004A1 (en)* | 2002-09-17 | 2006-03-09 | Vladimir Ceperkovic | Fast codec with high compression ratio and minimum required resources
US20040141652A1 (en)* | 2002-10-25 | 2004-07-22 | Sony Corporation | Picture encoding apparatus and method, program and recording medium
JP2005150846A (en) | 2003-11-11 | 2005-06-09 | Canon Inc | Image processing method and apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Christos Chrysafis et al., "Line-Based, Reduced Memory, Wavelet Image Compression", IEEE Transactions on Image Processing, vol. 9, no. 3, Mar. 2000, pp. 378-389.
Chrysafis, C., "Line Based Reduced Memory, Wavelet Image Compression", IEEE, 1998, pp. 398-407.*
Wim Sweldens, "The Lifting Scheme: A Custom-Design Construction of Biorthogonal Wavelets", Applied and Computational Harmonic Analysis, vol. 3, Article No. 0015, 1996, pp. 186-200.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110135208A1 (en)* | 2009-12-03 | 2011-06-09 | Qualcomm Incorporated | Digital image combining to produce optical effects
US8798388B2 (en)* | 2009-12-03 | 2014-08-05 | Qualcomm Incorporated | Digital image combining to produce optical effects

Also Published As

Publication number | Publication date
US20070286510A1 (en) | 2007-12-13
CN101076117A (en) | 2007-11-21
CN101076117B (en) | 2010-06-16
TW200812394A (en) | 2008-03-01
JP4360379B2 (en) | 2009-11-11
KR20070111363A (en) | 2007-11-21
JP2007311923A (en) | 2007-11-29
TWI379593B (en) | 2012-12-11

Similar Documents

Publication | Publication Date | Title
US8189932B2 (en) | Image processing apparatus and image processing method
US7907785B2 (en) | Image processing apparatus and image processing method
JP4656190B2 (en) | Information processing apparatus and method
US7359561B2 (en) | Picture encoding with wavelet transform and block area weights
KR100664928B1 (en) | Video coding method and apparatus
US6519285B2 (en) | Video encoding and decoding apparatus
CN101106719B (en) | Wavelet transformation device, wavelet inverse transformation device and method, program, and recording medium
US8098947B2 (en) | Method and apparatus for processing image data by rearranging wavelet transform data
US8422806B2 (en) | Information processing apparatus and information processing method for reducing the processing load incurred when a reversibly encoded code stream is transformed into an irreversibly encoded code stream
US8605793B2 (en) | Information processing device and method, and program
JP2006115459A (en) | System and method for increasing SVC compression ratio
US8411984B2 (en) | Image processing device and method
US6507673B1 (en) | Method and apparatus for video encoding decision
JP2008514139A (en) | Compression rate control system and method for variable subband processing
US9241163B2 (en) | VC-2 decoding using parallel decoding paths
WO2001071650A1 (en) | Method and apparatus for run-length encoding video data
WO2006106356A1 (en) | Encoding and decoding a signal
JP2004135070A (en) | Encoding method and picture processor

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:SONY CORPORATION, JAPAN

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FUKUHARA, TAKAHIRO;REEL/FRAME:019232/0718

Effective date:20070413

STCF | Information on status: patent grant

Free format text:PATENTED CASE

FEPP | Fee payment procedure

Free format text:PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY | Fee payment

Year of fee payment: 4

FEPP | Fee payment procedure

Free format text:MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS | Lapse for failure to pay maintenance fees

Free format text:PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH | Information on status: patent discontinuation

Free format text:PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP | Lapsed due to failure to pay maintenance fee

Effective date:20200529

