
Image data processing method, and image data processing circuit
Info

Publication number: US7403183B2
Application number: US10/797,154
Authority: US (United States)
Prior art keywords: image data, frame image, preceding frame, amount, change
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: US20040189565A1 (en)
Inventor: Jun Someya
Current assignee: Trivale Technologies LLC (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Mitsubishi Electric Corp
Events: application filed by Mitsubishi Electric Corp; assigned to Mitsubishi Denki Kabushiki Kaisha (assignment of assignors interest; assignor: Someya, Jun); publication of US20040189565A1; application granted; publication of US7403183B2; assigned to Trivale Technologies (assignment of assignors interest; assignor: Mitsubishi Electric Corporation)

Abstract

Consecutive frames of image data are processed for display by, for example, a liquid crystal display. The image data are compressed, delayed, and decompressed to generate primary reconstructed data representing the preceding frame, and the amount of change from the preceding frame to the current frame is determined. Secondary reconstructed data are generated from the current frame image data according to the amount of change. Compensated image data are generated from the current frame image data and the primary and secondary reconstructed data; in this process, either the primary or the secondary reconstructed data may be selected according to the amount of change, or the primary and secondary reconstructed data may be combined according to the amount of change. The amount of memory needed to delay the image data can thereby be reduced without introducing compression artifacts when the amount of change is small.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates, in the driving of a liquid crystal display device, to a processing method and a processing circuit for compensating image data in order to improve the response speed of the liquid crystal; more particularly, the invention relates to a processing method and a processing circuit for compensating the voltage level of a signal for displaying an image in accordance with the response speed characteristic of the liquid crystal display device and the amount of change in the image data.
2. Description of the Related Art
Liquid crystal panels are thin and lightweight, and the application of a driving voltage alters their molecular orientation and thus their optical transmittance, enabling gray-scale display of images, so they are extensively used in television receivers, computer monitors, display units for portable information terminals, and so on. However, the liquid crystals used in liquid crystal panels have the disadvantage of being unable to keep up with rapidly changing images, because the transmittance varies through a cumulative response effect. One known solution to this problem is to improve the response speed of the liquid crystal by applying a driving voltage higher than the normal liquid crystal driving voltage when the gray level of the image data changes.
For example, a video signal input to a liquid crystal display device may be sampled by an analog-to-digital converter, using a clock having a certain frequency, and converted to image data in a digital format, the image data being input to a comparator as image data of the current frame, and also being delayed in an image memory by an interval corresponding to one frame, then input to the comparator as image data of the previous frame. The comparator compares the image data of the current frame with the image data of the previous frame, and outputs a brightness change signal representing the difference in brightness between the image data of the two frames, together with the image data of the current frame, to a driving circuit. If the brightness value of a pixel has increased in the brightness change signal, the driving circuit drives the picture element on the liquid crystal panel by supplying a driving voltage higher than the normal liquid crystal driving voltage; if the brightness value has decreased, the driving circuit supplies a driving voltage lower than the normal liquid crystal driving voltage. When there is a change in brightness between the image data of the current frame and the image data of the previous frame, the response speed of the liquid crystal display element can be improved by varying the liquid crystal driving voltage by more than the normal amount in this way (see, for example, document 1 below).
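As a rough, non-authoritative illustration of this related-art scheme, the sketch below compares the current frame with the one-frame-delayed frame pixel by pixel and pushes the drive level beyond the nominal value in the direction of the brightness change. The 8-bit range, the overdrive gain, and the function name are illustrative assumptions and are not taken from document 1.

```python
import numpy as np

def prior_art_overdrive(current: np.ndarray, previous: np.ndarray,
                        gain: float = 0.5) -> np.ndarray:
    """Per-pixel overdrive as in the related art: the brightness change signal
    raises or lowers the drive level relative to the normal value.
    'gain' is a hypothetical overdrive strength, not a value from document 1."""
    diff = current.astype(np.int16) - previous.astype(np.int16)   # brightness change signal
    driven = current.astype(np.float32) + gain * diff             # overdrive in the direction of the change
    return np.clip(driven, 0, 255).astype(np.uint8)
```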
Because the improvement of liquid crystal response speed described above involves delaying the image data in order to detect brightness changes by comparing the image data of the current frame with the image data of the previous frame, the image memory needs to be large enough to store one frame of image data. The number of pixels displayed on liquid crystal panels is increasing, due especially to increased screen size and higher definition in recent years, and the amount of image data per frame is increasing accordingly, so a need has arisen to increase the size of the image memory used for the delay; this increase in the size of the image memory raises the cost of the display device.
One known method of restraining the increase in the size of the image memory is to reduce the image memory size by allocating one address in the image memory to a plurality of pixels. For example, the size of the image memory can be reduced by decimating the image data, excluding every other pixel horizontally and vertically, so that one address in the image memory is allocated to four pixels; when pixel data are read from the image memory, the same image data as for the stored pixel are read repeatedly in place of the data of the excluded pixels (see, for example, document 2 below).
Document 1: Japanese Patent No. 2616652 (pages 3-5, FIG. 1)
Document 2: Japanese Patent No. 3041951 (pages 2-4, FIG. 2)
A problem is that when the image data stored in the frame memory are reduced by a simple rule such as removing every other pixel vertically and horizontally, as in document 2 above, the amount of temporal change in the image data, reconstructed by replacing the eliminated pixel data with adjacent pixel data, may not be calculated correctly. In that case, since the amount of change used in compensating the image data is erroneous, the compensation is not performed correctly, and the improvement in the response speed of the liquid crystal display device is reduced.
The present invention addresses this problem, with the object of enabling amounts of change in the image data to be detected accurately while requiring only a small amount of image memory to delay the image data, thereby enabling image data compensation to be performed accurately.
SUMMARY OF THE INVENTION
To attain the above object, the present invention provides an image data processing method for determining a voltage applied to a liquid crystal in a liquid crystal display device based on image data representing a plurality of frame images successively displayed on the liquid crystal display device, comprising:
calculating an amount of change between reconstructed current frame image data representing an image of a current frame and primary reconstructed preceding frame image data representing an image of a preceding frame which precedes the current frame by one frame interval, the reconstructed current frame image data being obtained by encoding and decoding original current frame image data representing the image of the current frame, the primary reconstructed preceding frame image data being obtained by encoding, delaying by one frame interval, and then decoding the original current frame image data;
generating secondary reconstructed preceding frame image data representing the image of the preceding frame, based on the original current frame image data and said amount of change;
generating reconstructed preceding frame image data representing an image of the preceding frame, based on an absolute value of said amount of change, the primary reconstructed preceding frame image data, and the secondary reconstructed preceding frame image data; and
generating compensated image data having compensated values representing the image of the current frame, based on the original current frame image data and the reconstructed preceding frame image data.
According to the present invention, the data are compressed before being delayed, so the size of the image memory forming the delay unit can be reduced, and changes in the image data can be detected accurately.
Moreover, optimal processing is carried out both when there is considerable change in the image data, and when there is little or practically no change, so accurate compensation can be carried out regardless of the degree of change in the image.
BRIEF DESCRIPTION OF THE DRAWINGS
In the attached drawings:
FIG. 1 is a block diagram showing the configuration of a liquid crystal display driving device according to a first embodiment of the present invention;
FIGS. 2A and 2B are block diagrams showing examples of the compensated image data generator in FIG. 1 in more detail;
FIGS. 3A to 3H are diagrams showing values of image data for explaining effects of encoding and decoding errors on the compensated image data, in particular the effects when the absolute value of the amount of change is small;
FIG. 4 is a diagram showing examples of the response characteristics of a liquid crystal;
FIG. 5A is a diagram showing variations in a current frame image data value;
FIG. 5B is a diagram showing variations in the compensated image data value obtained by compensation with compensation data;
FIG. 5C is a diagram showing the response characteristic of the liquid crystal responsive to an applied voltage corresponding to the compensated image data;
FIGS. 6A and 6B constitute a flowchart schematically showing an example of the image data processing method of the image data processing circuit shown in FIG. 1;
FIG. 7 is a flowchart schematically showing another example of the image data processing method of the image data processing circuit shown in FIG. 1;
FIG. 8 is a block diagram showing an example of a compensated image data generator used in a second embodiment of the present invention;
FIG. 9 is a diagram schematically illustrating the structure of the lookup table used in the second embodiment;
FIG. 10 is a diagram showing an example of response times of a liquid crystal, depending on changes in image brightness between the preceding frame and the current frame;
FIG. 11 is a diagram showing an example of amounts of compensation for the current frame image data obtained from the response times of the liquid crystal in FIG. 10;
FIG. 12 is a flowchart showing an example of the image data processing method of the second embodiment;
FIG. 13 is a block diagram showing another example of the compensated image data generator used in the second embodiment;
FIG. 14 is a diagram showing an example of compensated image data obtained from the amounts of compensation for the current frame image data shown in FIG. 11;
FIG. 15 is a flowchart schematically showing an example of the image data processing method of a third embodiment of the present invention;
FIG. 16 is a block diagram showing the internal structure of the compensated image data generator in a fourth embodiment of the present invention;
FIG. 17 is a diagram schematically showing an example of operations performed when a lookup table is used in the compensated image data generator;
FIG. 18 is a diagram illustrating a method of calculating compensated image data by interpolation;
FIG. 19 is a flowchart schematically showing an example of the image data processing method of the fourth embodiment;
FIG. 20 is a block diagram showing the configuration of a liquid crystal display driving device according to a fifth embodiment of the present invention; and
FIGS. 21A and 21B constitute a flowchart schematically showing an example of the image data processing method of the image data processing circuit shown in FIG. 20.
BEST MODE OF PRACTICING THE INVENTION
First Embodiment
FIG. 1 is a block diagram showing the configuration of a liquid crystal display driving device according to a first embodiment of the present invention.
The input terminal 1 is a terminal through which an image signal is input to display an image on a liquid crystal display device. A receiving unit 2 performs tuning, demodulation, and other processing of the image signal received at the input terminal 1 and thereby successively outputs image data representing a one-frame portion of the present image, that is, the image data Di1 of the present frame (the current frame). The image data Di1 of the current frame, which have not undergone processing such as encoding in the processing circuit, will also be referred to as the original current frame image data.
The image data processing circuit 3 comprises an encoding unit 4, a delay unit 5, decoding units 6 and 7, an amount-of-change calculation unit 8, a secondary preceding frame image data reconstructor 9, a reconstructed preceding frame image data generator 10, and a compensated image data generator 11. The image data processing circuit 3 generates compensated image data Dj1 for the current frame, corresponding to the original current frame image data Di1. The compensated current frame image data Dj1 will also be referred to simply as compensated image data.
The display unit 12, which comprises an ordinary liquid crystal display panel, performs display operations by applying a signal voltage corresponding to the image data, such as a brightness signal voltage, to the liquid crystal to display an image.
The encoding unit 4 encodes the original current frame image data Di1 and outputs encoded image data Da1. The encoding involves data compression, and can reduce the amount of data in the image data Di1. Block truncation coding methods such as FBTC (fixed block truncation coding) or GBTC (generalized block truncation coding) can be used to encode the image data Di1. Any still-picture encoding method can also be used, including orthogonal transform encoding methods such as JPEG, predictive encoding methods such as JPEG-LS, and wavelet transform methods such as JPEG2000. These sorts of still-image encoding methods can be used even though they are non-reversible (lossy) encoding methods in which the decoded image data do not perfectly match the image data before encoding.
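As a minimal sketch of the kind of block truncation coding mentioned here (simplified, not the exact FBTC or GBTC algorithms; the 4×4 block size is an assumption), each block is reduced to two 8-bit representative values La and Lb plus one bit per pixel, matching the representation that appears later in FIGS. 3B and 3E:

```python
import numpy as np

def btc_encode(block: np.ndarray):
    """Encode one 4x4 block into two 8-bit representative values (La, Lb)
    and a 1-bit-per-pixel map, in the spirit of block truncation coding."""
    mean = block.mean()
    bitmap = block >= mean                                        # 1 bit per pixel
    la = np.uint8(block[~bitmap].mean()) if (~bitmap).any() else np.uint8(mean)
    lb = np.uint8(block[bitmap].mean()) if bitmap.any() else np.uint8(mean)
    return la, lb, bitmap

def btc_decode(la, lb, bitmap):
    """Reconstruct the block: every pixel becomes La or Lb according to its bit."""
    return np.where(bitmap, lb, la).astype(np.uint8)

# 4x4 block of 8-bit data: 16 bytes in, roughly 4 bytes out (La, Lb, 16-bit map),
# i.e. the kind of reduction that shrinks the frame memory needed by the delay unit.
block = np.array([[120, 124, 130, 133],
                  [121, 125, 131, 134],
                  [119, 126, 129, 132],
                  [122, 127, 128, 135]], dtype=np.uint8)
la, lb, bits = btc_encode(block)
recon = btc_decode(la, lb, bits)              # contains a small coding error
```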
The delay unit 5 receives the encoded image data Da1, delays the received data for an interval equivalent to one frame, and outputs the delayed data. The output of the delay unit 5 is encoded image data Da0 representing the image one frame before the current frame image data Di1, that is, the encoded preceding frame image data.
The delay unit 5 comprises a memory that stores the encoded image data Da1 for one frame interval; the higher the encoding ratio (data compression ratio) of the image data is, the more the size of the memory can be reduced.
Decoding unit 6 decodes the encoded image data Da1 and outputs decoded image data Db1 corresponding to the current frame image. The decoded image data Db1 will also be referred to as reconstructed current frame image data.
Decoding unit 7 outputs decoded image data Db0 corresponding to the image of the preceding frame by decoding the encoded image data Da0 delayed by the delay unit 5. The decoded image data Db0 will also be referred to as primary reconstructed preceding frame image data, for a reason that will be explained later. The encoding unit 4, the delay unit 5, and the decoding unit 7 in combination form a primary preceding frame image data reconstructor.
The output of decoded image data Db1 by decoding unit 6 is substantially simultaneous with the output of decoded image data Db0 by decoding unit 7.
The amount-of-change calculation unit 8 subtracts the decoded image data Db1 corresponding to the image of the current frame from the decoded image data Db0 corresponding to the image of the preceding frame, obtaining an amount of change Av1 and its absolute value |Av1|. More specifically, it calculates and outputs amount-of-change data Dv1 and absolute amount-of-change data |Dv1| representing the amount of change and its absolute value. The amount of change Av1 will also be referred to as the first amount of change, to distinguish it from a second amount of change Dw1 that will be described later. For the same reason, the amount-of-change data Dv1 and absolute amount-of-change data |Dv1| will also be referred to as the first amount-of-change data and first absolute amount-of-change data.
The amount-of-change calculation unit 8, in combination with the decoding unit 6, forms an amount-of-change calculation circuit which calculates an amount of change between the image of the current frame and the image of the preceding frame.
The secondary preceding frame image data reconstructor 9 calculates secondary reconstructed preceding frame image data Dp0 corresponding to the image in the preceding frame by adding the amount-of-change data Dv1 to the current frame image data Di1 (in effect, adding the amount of change Av1 to the value of the original current frame image data Di1). The output of decoding unit 7 is referred to as the primary reconstructed preceding frame image data to distinguish it from the secondary reconstructed preceding frame image data output from the secondary preceding frame image data reconstructor 9.
The reconstructed preceding frame image data generator 10 generates reconstructed preceding frame image data Dq0 based on the absolute amount-of-change data |Dv1| output by the amount-of-change calculation unit 8, the primary reconstructed preceding frame image data Db0 from decoding unit 7, and the secondary reconstructed preceding frame image data Dp0 from the secondary preceding frame image data reconstructor 9, and outputs the reconstructed preceding frame image data Dq0 to the compensated image data generator 11.
For example, either the primary reconstructed preceding frame image data Db0 or the secondary reconstructed preceding frame image data Dp0 may be selected and output, based on the absolute amount-of-change data |Dv1|. More specifically, the primary reconstructed preceding frame image data Db0 is selected and output as the reconstructed preceding frame image data Dq0 when the absolute amount-of-change data |Dv1| is greater than a threshold SH0, which may be set arbitrarily, and the secondary reconstructed preceding frame image data Dp0 is selected and output as the reconstructed preceding frame image data Dq0 when the absolute amount-of-change data |Dv1| is less than the threshold SH0.
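A minimal sketch of the processing described above, assuming 8-bit image data held in numpy arrays and an arbitrarily chosen threshold SH0 (the value 4 below is purely illustrative): it forms the first amount of change Dv1, the secondary reconstruction Dp0, and then selects Dq0.

```python
import numpy as np

def reconstruct_preceding_frame(di1: np.ndarray, db0: np.ndarray,
                                db1: np.ndarray, sh0: int = 4) -> np.ndarray:
    """Select Dq0 from the primary (Db0) and secondary (Dp0) reconstructions.

    di1: original current frame data (Di1)
    db0: decoded, one-frame-delayed data (primary reconstruction of the preceding frame)
    db1: decoded current frame data
    sh0: threshold on |Dv1|; illustrative value only
    """
    dv1 = db0.astype(np.int16) - db1.astype(np.int16)      # first amount of change Dv1
    dp0 = np.clip(di1.astype(np.int16) + dv1, 0, 255)      # secondary reconstruction Dp0

    # |Dv1| > SH0: trust the delayed reconstruction Db0 (the image changed noticeably).
    # |Dv1| <= SH0: use Dp0, in which the coding errors of Db0 and Db1 largely cancel.
    dq0 = np.where(np.abs(dv1) > sh0, db0.astype(np.int16), dp0)
    return dq0.astype(np.uint8)
```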
The compensated image data generator 11 generates and outputs compensated image data Dj1 based on the original current frame image data Di1 and the reconstructed preceding frame image data Dq0.
The compensation is performed to compensate for the delay due to the response speed characteristic of the liquid crystal display device; when the brightness value of an image changes between the current frame and the preceding frame, for example, the voltage levels of the signal that determines the brightness values of the image corresponding to the current frame image data Di1 are compensated so that the liquid crystal will achieve the transmittance corresponding to the brightness values of the current frame image before the elapse of one frame interval from the display of the preceding frame image.
The compensated image data generator 11 compensates the voltage levels of the signal for displaying the image corresponding to the image data of the current frame in accordance with the response speed characteristic, which indicates the time from the input of image data to the display unit 12 of the liquid crystal display device until the corresponding image is displayed, and with the amount of change between the image data of the preceding frame and the image data of the current frame input to the liquid crystal display driving device.
FIGS. 2A and 2B are block diagrams showing examples of the compensated image data generator 11 in more detail. The compensated image data generator 11 in FIG. 2A has a subtractor 11a, a compensation value generator 11b, and a compensation unit 11c.
The subtractor 11a calculates the difference between the reconstructed preceding frame image data Dq0 and the original current frame image data Di1; that is, it calculates the second amount of change Dw1. The reconstructed preceding frame image data Dq0 is either the primary reconstructed preceding frame image data Db0 or the secondary reconstructed preceding frame image data Dp0, selected according to the value of the absolute amount-of-change data |Dv1|.
The compensation value generator 11b calculates a compensation value Dc1 from the response time of the liquid crystal corresponding to the second amount of change Dw1, and outputs the compensation value Dc1.
Dc1 = Dw1 × a can be used as an exemplary formula showing the operation of the compensation value generator 11b. The quantity a, which is determined from the characteristics of the liquid crystal used in the display unit 12, is a weighting coefficient for determining the compensation value Dc1.
The compensation value generator 11b determines the compensation value Dc1 by multiplying the amount of change Dw1 output from the subtractor 11a by the weighting coefficient a.
The compensation value Dc1 can also be calculated by use of the formula Dc1 = Dw1 × a(Di1) by changing the compensation value generator 11b to the compensation value generator 11b′ configured as shown in FIG. 2B. Here, a(Di1) is a weighting coefficient for determining the compensation value Dc1, generated as a function of the original current frame image data Di1. This function is determined according to the characteristics of the liquid crystal; it may, for example, strengthen the weights of high-brightness parts, or strengthen the weights of medium-brightness parts; a quadratic function or a function of higher degree may be used.
The compensation unit 11c uses the compensation data Dc1 to compensate the original current frame image data Di1, and outputs the compensated image data Dj1. The compensation unit 11c generates the compensated image data Dj1 by, for example, adding the compensation value Dc1 to the original current frame image data Di1.
Instead of this type of compensation unit, one that generates the compensated image data Dj1 by multiplying the original current frame image data Di1 by the compensation value Dc1 may be used.
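A minimal sketch of the compensated image data generator of FIGS. 2A and 2B, assuming additive compensation, a sign convention in which Dw1 is taken as Di1 minus Dq0 so that an increase in brightness yields a positive compensation value, and illustrative weighting coefficients (the constant 0.5 and the linear form of a(Di1) are assumptions, not values from the patent):

```python
import numpy as np

def compensate(di1: np.ndarray, dq0: np.ndarray, a: float = 0.5) -> np.ndarray:
    """FIG. 2A style: Dc1 = Dw1 * a, Dj1 = Di1 + Dc1 (a is an illustrative constant)."""
    dw1 = di1.astype(np.float32) - dq0.astype(np.float32)    # second amount of change Dw1
    dc1 = dw1 * a                                             # compensation value Dc1
    return np.clip(di1 + dc1, 0, 255).astype(np.uint8)       # compensated data Dj1

def compensate_weighted(di1: np.ndarray, dq0: np.ndarray) -> np.ndarray:
    """FIG. 2B style: Dc1 = Dw1 * a(Di1), with a hypothetical weighting function
    that strengthens compensation in high-brightness regions."""
    dw1 = di1.astype(np.float32) - dq0.astype(np.float32)
    a_of_di1 = 0.3 + 0.4 * (di1.astype(np.float32) / 255.0)  # illustrative a(Di1)
    return np.clip(di1 + dw1 * a_of_di1, 0, 255).astype(np.uint8)
```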
The display unit 12 uses a liquid crystal panel and applies a voltage corresponding to the compensated image data Dj1 to the liquid crystal to change its transmittance, thereby changing the displayed brightness of the pixels, whereby the image is displayed.
The difference between the effect when the primary reconstructed preceding frame image data Db0 output from decoding unit 7 are used as the reconstructed preceding frame image data Dq0 and the effect when the secondary reconstructed preceding frame image data Dp0 output from the secondary preceding frame image data reconstructor 9 are used as the reconstructed preceding frame image data Dq0 will now be described.
First, suppose that the reconstructed preceding frame image data generator 10 always outputs the primary reconstructed preceding frame image data Db0 as the reconstructed preceding frame image data Dq0, regardless of the amount of change Av1. In this case, the compensated image data generator 11 always generates the compensated image data Dj1 from the original current frame image data Di1 and the decoded image data Db0.
Among a series of images input successively from the input terminal 1, if there is a difference of a certain value or more between the images of preceding and following frames, that is, if there is a large temporal change, the compensated image data generator 11 performs compensation responsive to the temporal changes in the image data, but the decoded image data Db0 include encoding and decoding error due to the encoding unit 4 and the decoding unit 7, so this error will be included in the compensated image data Dj1 as compensation error. This encoding and decoding error can be tolerated when there are comparatively large changes in the image. That is, when there are large changes in the image, there is no great problem in using the decoded image data, i.e., the primary reconstructed preceding frame image data Db0, as the reconstructed preceding frame image data Dq0.
If there is no large difference between the images of preceding and following frames, that is, if there is little or no temporal change, it would be desirable for the compensated image data generator 11 to output the original current frame image data Di1 as the compensated image data Dj1 without compensating the image data. Since the decoded image data Db0 include encoding and decoding error as explained above, however, even when the image does not change, the decoded image data Db0 may not match the original current frame image data Di1. The result is that the compensated image data generator 11 adds unnecessary compensation to the original current frame image data Di1. If the image does not change, the error of this compensation is added as noise to the current frame image, so the error cannot be ignored. That is, when the image does not change, it is not appropriate to use the decoded image data, i.e., the primary reconstructed preceding frame image data Db0, as the reconstructed preceding frame image data Dq0.
Next, suppose that the reconstructed preceding frame image data generator 10 always outputs the secondary reconstructed preceding frame image data Dp0 as the reconstructed preceding frame image data Dq0, regardless of the amount of change Av1.
Since the secondary reconstructed preceding frame image data Dp0 are calculated from the original current frame image data Di1 and the amount-of-change data Dv1, the encoding and decoding error of the decoded image data Db1 corresponding to the current frame image, that is, the encoding and decoding error due to the encoding unit 4 and decoding unit 6, and the encoding and decoding error of the decoded image data Db0 corresponding to the preceding frame image, that is, the encoding and decoding error due to the encoding unit 4 and decoding unit 7, are included in a combined form (mutually reinforcing or canceling) in the secondary reconstructed preceding frame image data Dp0.
When there is a comparatively large temporal change in the image data input from the input terminal 1, the above combined error may be larger or smaller than the above-described encoding and decoding error of the decoded image data Db0 alone, i.e., the encoding and decoding error due to the encoding unit 4 and decoding unit 7, but in general the error tends to be larger. When there is thus a comparatively large temporal change in the image, encoding and decoding error of the decoded image data Db0 and decoded image data Db1 is included in the secondary reconstructed preceding frame image data Dp0, and accordingly in the compensated image data Dj1; this error tends to be larger than the encoding and decoding error of the decoded image data Db0 alone, so when there is a large change in the image, it is inappropriate to use the secondary reconstructed preceding frame image data Dp0 as the reconstructed preceding frame image data Dq0.
When the input image data do not change, both the decoded image data Db1 corresponding to the current frame image and the decoded image data Db0 corresponding to the preceding frame image contain encoding and decoding error, but the errors included in these two decoded image data are the same. If the image does not change at all, accordingly, the errors in the two decoded image data Db0 and Db1 completely cancel out; the amount-of-change data Dv1 are zero, as if encoding and decoding had not been performed, and the secondary reconstructed preceding frame image data Dp0 are identical to the original current frame image data Di1. The reconstructed preceding frame image data generator 10 then outputs the secondary reconstructed preceding frame image data Dp0 as the reconstructed preceding frame image data Dq0 to the compensated image data generator 11, and the compensated image data generator 11, as described above, performs no unnecessary compensation, as would be performed if the primary reconstructed preceding frame image data Db0 were always output. Accordingly, when the image does not change, it is appropriate to use the secondary reconstructed preceding frame image data Dp0 as the reconstructed preceding frame image data Dq0.
From the above, it can be seen that the encoding and decoding error included in the compensated image data Dj1 output from the compensated image data generator 11 can be reduced by having the reconstructed preceding frame image data generator 10 select the secondary reconstructed preceding frame image data Dp0, which are advantageous when the image does not change, if the absolute amount-of-change data |Dv1| are less than a threshold SH0, and select the primary reconstructed preceding frame image data Db0, which are advantageous when the image changes greatly, if the absolute amount-of-change data |Dv1| are greater than the threshold SH0.
The encoding unit 4 and decoding units 6 and 7 of the first embodiment are not configured for reversible encoding. If the encoding unit 4 and decoding units 6 and 7 were to be configured for reversible encoding, the above-described effects of encoding and decoding error would vanish, making the decoding unit 6, the amount-of-change calculation unit 8, the secondary preceding frame image data reconstructor 9, and the reconstructed preceding frame image data generator 10 unnecessary. In that case, decoding unit 7 could always input reconstructed preceding frame image data Db0 to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0, simplifying the circuit. The present embodiment applies to a non-reversible encoding unit 4 and decoding units 6 and 7, rather than to units of the reversible coding type.
Error due to encoding and decoding will be described below with reference to FIGS. 3A to 3H.
FIGS. 3A to 3H show an example of the effect of encoding and decoding error on the compensated image data Dj1, especially the effect when the absolute amount-of-change data |Dv1| is small (smaller than the threshold SH0). The letters A to D in FIGS. 3A, 3C, 3D, 3F, 3G, and 3H designate columns to which pixels belong; the letters a to d designate rows to which pixels belong.
FIG. 3A shows exemplary values of the original preceding frame image data Di0, that is, the image data representing the image one frame before the current frame. FIG. 3B shows exemplary encoded image data Da0 obtained by coding the preceding frame image data Di0 shown in FIG. 3A. FIG. 3C shows exemplary reconstructed preceding frame image data Db0 obtained by decoding the encoded image data Da0 shown in FIG. 3B.
FIG. 3D shows exemplary values of the original current frame image data Di1. FIG. 3E shows exemplary encoded image data Da1 obtained by coding the original current frame image data Di1 shown in FIG. 3D. FIG. 3F shows exemplary current frame decoded image data Db1 obtained by decoding the encoded image data Da1 shown in FIG. 3E.
FIG. 3G shows exemplary values of the amount-of-change data Dv1 obtained by taking the difference between the decoded image data Db0 shown in FIG. 3C and the decoded image data Db1 shown in FIG. 3F. FIG. 3H shows exemplary values of the reconstructed preceding frame image data Dq0 output from the reconstructed preceding frame image data generator 10 to the compensated image data generator 11.
The values of the current frame image data Di1 shown in FIG. 3D are unchanged from the values of the preceding frame image data Di0 shown in FIG. 3A. FIGS. 3B and 3E show encoded image data obtained by FBTC encoding, using eight-bit representative values La, Lb, with one bit being assigned to each pixel.
As can be seen from comparisons of the image data before encoding, shown in FIGS. 3A and 3D, with the image data that have been encoded and decoded, shown in FIGS. 3C and 3F, the values of the decoded image data shown in FIGS. 3C and 3F contain errors. As can be seen from FIGS. 3C and 3F, the data Db0 and Db1 that have been encoded and decoded are mutually equal. Thus even when encoding and decoding error arises in the decoded image data Db1 and Db0, since the decoded image data Db1 and the decoded image data Db0 are mutually equal, the values (FIG. 3G) of the differences between them are zero.
In the present embodiment, the secondary reconstructed preceding frame image data Dp0 are the sum of the values of the original current image data Di1 in FIG. 3D and the amount-of-change data Dv1 in FIG. 3G, but since the values of the amount-of-change data Dv1 in FIG. 3G are zero, the values of the secondary reconstructed preceding frame image data Dp0 are the same as the values of the original current frame image data Di1. Accordingly, the values of the preceding frame image data Dq0 shown in FIG. 3H, output from the reconstructed preceding frame image data generator 10, are the same as the values of the original current frame image data Di1 in FIG. 3D; these values are output to the compensated image data generator 11.
The original current frame image data Di1 input to the compensated image data generator 11 have not undergone an image encoding process in the encoding unit 4. The compensated image data generator 11, to which the unchanging data in FIGS. 3D and 3H are input, receives the original current frame image data Di1 and the reconstructed preceding frame image data Dq0, which have the same values, and can output the compensated image data Dj1 to the display unit 12 without compensating the original current frame image data Di1 (in other words, it outputs data obtained by compensation with compensating values of zero), as is desirable when the image does not change.
FIG. 4 shows an example of the response speed of a liquid crystal, showing changes in transmittance when voltages V50 and V75 are applied in the 0% transmittance state. FIG. 4 shows that there are cases in which an interval longer than one frame interval is needed for the liquid crystal to reach the proper transmittance value. When the brightness value of the image data changes, the response speed of the liquid crystal can be improved by applying a larger voltage, so that the transmittance reaches the desired value within one frame interval.
If voltage V75 is applied, for example, the transmittance of the liquid crystal reaches 50% when one frame interval has elapsed. Therefore, if the target value of the transmittance is 50%, the transmittance of the liquid crystal can reach the desired value within one frame interval if the voltage applied to the liquid crystal is V75. Thus when the image data Di1 changes from 0 to 127, the transmittance can be brought to the desired value within one frame interval by inputting 191 to the display unit 12 as the compensated image data Dj1.
FIGS. 5A to 5C illustrate the operation of the liquid crystal driving circuit of the present embodiment. FIG. 5A illustrates changes in the values of the current frame image data Di1. FIG. 5B illustrates changes in the values of the compensated image data Dj1 obtained by compensation with the compensation data Dc1. FIG. 5C shows the response characteristic (solid curve) of the liquid crystal when a voltage corresponding to the compensated image data Dj1 is applied. FIG. 5C also shows the response characteristic (dashed curve) of the liquid crystal when the uncompensated image data (the current frame image data) Di1 are applied. When the brightness value increases or decreases as shown in FIG. 5B, a compensation value V1 or V2 is added to or subtracted from the original current frame image data Di1 according to the compensation data Dc1 to generate the compensated image data Dj1. A voltage corresponding to the compensated image data Dj1 is applied to the liquid crystal in the display unit 12, thereby driving the liquid crystal to the predetermined transmittance value within substantially one frame interval (FIG. 5C).
FIGS. 6A and 6B are a flowchart schematically showing an example of the image data processing method of the image data processing circuit shown in FIG. 1.
First, when the current frame image data Di1 is input from the input terminal 1 through the receiving unit 2 to the image data processing circuit 3 (St1), the encoding unit 4 compressively encodes the current frame image data Di1 and outputs the encoded image data Da1, the data size of which has been reduced (St2). The encoded image data Da1 are input to the delay unit 5, which outputs the encoded image data Da1 with a delay of one frame. The output of the delay unit 5 is the encoded image data Da0 of the preceding frame (St3). The encoded image data Da0 are input to the decoding unit 7, which outputs the preceding frame decoded image data Db0 by decoding the input encoded image data Da0 (St4).
The encoded image data Da1 output from the encoding unit 4 are also input to the decoding unit 6, which outputs decoded image data of the current frame, that is, the reconstructed current frame image data Db1, by decoding the input encoded image data Da1 (St5). The preceding frame decoded image data Db0 and the current frame decoded image data Db1 are input to the amount-of-change calculation unit 8, and the difference obtained by, for instance, subtracting the current frame decoded image data Db1 from the preceding frame decoded image data Db0, and the absolute value of the difference, are output as amount-of-change data Dv1 and first absolute amount-of-change data |Dv1| expressing the amount of change Av1 of each pixel and its absolute value |Av1| (St6). The amount-of-change data Dv1 accordingly indicate the temporal change Av1 of the image data for each pixel in the frame, obtained from the decoded image data of two temporally differing frames, namely the preceding frame decoded image data Db0 and the current frame decoded image data Db1.
The first amount-of-change data Dv1 is input to the secondary preceding frame image data reconstructor 9, which reconstructs and outputs the secondary reconstructed preceding frame image data Dp0 by adding the amount-of-change data Dv1 to the original current frame image data Di1, which are input separately (St7).
The absolute amount-of-change data |Dv1| are input to the reconstructed preceding frame image data generator 10, which decides whether the first absolute amount-of-change data |Dv1| are greater than a first threshold (St8). If the absolute amount-of-change data |Dv1| are greater than the first threshold (St8: YES), the reconstructed preceding frame image data generator 10 selects the primary reconstructed preceding frame image data Db0, which are input separately, rather than the secondary reconstructed preceding frame image data Dp0 and outputs the primary reconstructed preceding frame image data Db0 to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0 (St9). When the absolute amount-of-change data |Dv1| are not greater than the first threshold (St8: NO), the reconstructed preceding frame image data generator 10 selects the secondary reconstructed preceding frame image data Dp0 rather than the primary reconstructed preceding frame image data Db0 and outputs the secondary reconstructed preceding frame image data Dp0 to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0 (St10).
When the primary reconstructed preceding frame image data Db0 are input to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0, the subtractor 11a generates the difference between the primary reconstructed preceding frame image data Db0 and the original current frame image data Di1, that is, the second amount of change Dw1(1) (St11), the compensation value generator 11b calculates compensation values Dc1 from the response time of the liquid crystal corresponding to the second amount of change Dw1(1), and the compensation unit 11c generates and outputs the compensated image data Dj1(1) by using the compensation values Dc1 to compensate the original current frame image data Di1 (St13).
When the secondary reconstructed preceding frame image data Dp0 are input to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0, the subtractor 11a generates the difference between the secondary reconstructed preceding frame image data Dp0 and the original current frame image data Di1, that is, the second amount of change Dw1(2) (St12), the compensation value generator 11b calculates compensation values Dc1 from the response time of the liquid crystal corresponding to the second amount of change Dw1(2), and the compensation unit 11c generates and outputs the compensated image data Dj1(2) by using the compensation values Dc1 to compensate the original current frame image data Di1 (St14).
The compensation in steps St13 and St14 compensates the voltage level of a brightness signal or other display signal corresponding to the image data of the current frame in accordance with the response speed characteristic representing the time from input of image data to the liquid crystal display device in the display unit 12 until display of the image, and the amount of change from the preceding frame to the current frame in the image data input to the liquid crystal display driving device.
When the first amount of change Av1 is zero, the second amount of change is also zero and the compensation value Dc1 is zero, so the original current frame image data Di1 are not compensated but are output without alteration as the compensated image data Dj1.
The display unit 12 displays the compensated image data Dj1 by, for example, applying a voltage corresponding to a brightness value expressed thereby to the liquid crystal.
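The per-frame flow of steps St1 to St14 can be wired together as follows; this sketch reuses the hypothetical helpers from the earlier sketches (an encode/decode pair assumed to operate on whole frames, reconstruct_preceding_frame, and compensate) and keeps the one-frame delay of the delay unit 5 as internal state.

```python
import numpy as np

class ImageDataProcessor:
    """Illustrative per-frame driver for steps St1-St14, built on the
    hypothetical helpers sketched earlier in this description."""
    def __init__(self, encode, decode, sh0: int = 4):
        self.encode, self.decode, self.sh0 = encode, decode, sh0
        self.delayed = None                              # delay unit 5: holds Da1 for one frame

    def step(self, di1: np.ndarray) -> np.ndarray:
        da1 = self.encode(di1)                           # St2: encode the current frame (Da1)
        da0 = self.delayed if self.delayed is not None else da1   # St3: one-frame delay (Da0)
        self.delayed = da1

        db0 = self.decode(da0)                           # St4: primary reconstruction Db0
        db1 = self.decode(da1)                           # St5: reconstructed current frame Db1
        dq0 = reconstruct_preceding_frame(di1, db0, db1, self.sh0)  # St6-St10: Dv1, Dp0, selection
        return compensate(di1, dq0)                      # St11-St14: compensated data Dj1
```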
FIG. 7 is a flowchart schematically showing another example of the image data processing method in the compensated image data generator 11 in FIG. 1. The process through steps St11 and St12 in FIG. 7 is the same as in the example shown in FIGS. 6A and 6B; steps St1 to St8 are omitted from the drawing.
Steps St9, St10, St11, and St12 in FIG. 7 are the same as in FIG. 6B. In steps St11 and St12, however, in addition to the second amount of change Dw1, its absolute value |Dw1| is also generated.
Upon receiving input of the second amount of change Dw1(1) and its absolute value from step St11 or the second amount of change Dw1(2) and its absolute value from step St12 in FIG. 7, the compensated image data generator 11 decides whether the absolute value of the second amount of change Dw1 is greater than a second threshold or not (St15); if the absolute value of the second amount of change Dw1 is greater than the second threshold (St15: YES), it generates and outputs compensated image data Dj1(1) by compensating the original current frame image data Di1 (St13).
If the absolute value of the second amount of change Dw1 is not greater than the second threshold (St15: NO), the compensated image data Dj1 (2) are generated and output by compensating the original current frame image data Di1 by a restricted amount, or the compensated image data Dj1 (2) are generated and output without performing any compensation, so that the amount of compensation is zero (St14).
The display unit 12 displays the compensated image data Dj1 by, for example, applying a voltage corresponding to a brightness value expressed thereby to the liquid crystal.
The above-described steps from St11 to St15 are carried out for each pixel and each frame.
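The gating on the second threshold in FIG. 7 can be sketched as a variant of the compensation step above; the threshold value and the choice to zero the compensation (rather than merely restrict it) below the threshold are illustrative assumptions.

```python
import numpy as np

def compensate_with_gate(di1: np.ndarray, dq0: np.ndarray,
                         a: float = 0.5, second_threshold: int = 2) -> np.ndarray:
    """Apply compensation only where |Dw1| exceeds the second threshold (St15);
    elsewhere output Di1 unchanged (compensation amount zero)."""
    dw1 = di1.astype(np.float32) - dq0.astype(np.float32)     # second amount of change Dw1
    dc1 = np.where(np.abs(dw1) > second_threshold, dw1 * a, 0.0)
    return np.clip(di1 + dc1, 0, 255).astype(np.uint8)
```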
In the description given above, the reconstructed preceding frame image data generator 10 selects either the secondary reconstructed preceding frame image data Dp0 or the primary reconstructed preceding frame image data Db0, in accordance with the threshold SH0, which can be specified as desired, but the processing in the reconstructed preceding frame image data generator 10 is not limited to this.
For example, two values SH0 and SH1 may be provided as thresholds, and the reconstructed preceding frame image data generator 10 may be configured to output the reconstructed preceding frame image data Dq0 as follows, according to the relationships among these thresholds SH0 and SH1 and the absolute amount-of-change data |Dv1|.
The relationship between SH0 and SH1 is given by the following expression (1):
SH1>SH0  (1)
When |Dv1|<SH0,
Dq0=Dp0  (2)
When SH0 ≤ |Dv1| ≤ SH1,
Dq0 = Db0 × (|Dv1| − SH0)/(SH1 − SH0) + Dp0 × {1 − (|Dv1| − SH0)/(SH1 − SH0)}  (3)
When SH1<|Dv1|,
Dq0=Db0  (4)
When the absolute amount-of-change data |Dv1| are between the thresholds SH0 and SH1, the reconstructed preceding frame image data Dq0 are calculated from the primary reconstructed preceding frame image data Db0 and the secondary reconstructed preceding frame image data Dp0 as in expressions (2) to (4). That is, the primary reconstructed preceding frame image data Db0 and the secondary reconstructed preceding frame image data Dp0 are combined in a ratio corresponding to the position of the absolute amount-of-change data |Dv1| in the range between threshold SH0 and threshold SH1 (their values are multiplied by coefficients corresponding to closeness to the thresholds and added) and output as the reconstructed preceding frame image data Dq0. Accordingly, a step-like transition in the reconstructed preceding frame image data Dq0 is avoided at the boundary between the range in which the amount of change is small, which is appropriately processed as if there were no change, and the range that is appropriately processed as if there were a large change in the image; near this boundary, processing is carried out as a compromise between the processing when there is no change and the processing when there is a large change.
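Expressions (1) to (4) amount to a piecewise-linear cross-fade between Dp0 and Db0 controlled by |Dv1|; a sketch follows, with the threshold values chosen purely for illustration.

```python
import numpy as np

def blend_reconstructions(db0: np.ndarray, dp0: np.ndarray, abs_dv1: np.ndarray,
                          sh0: float = 4.0, sh1: float = 12.0) -> np.ndarray:
    """Combine Db0 and Dp0 according to where |Dv1| falls between SH0 and SH1
    (expressions (2)-(4)); SH0 and SH1 are illustrative values with SH1 > SH0."""
    # weight w ramps from 0 at |Dv1| = SH0 to 1 at |Dv1| = SH1, so w = 0 gives Dp0
    # (expression (2)) and w = 1 gives Db0 (expression (4))
    w = np.clip((abs_dv1.astype(np.float32) - sh0) / (sh1 - sh0), 0.0, 1.0)
    dq0 = db0.astype(np.float32) * w + dp0.astype(np.float32) * (1.0 - w)
    return np.clip(dq0, 0, 255).astype(np.uint8)
```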
When generating the compensated image data Dj1, the image data processing circuit of the present embodiment is adapted to use the secondary reconstructed preceding frame image data Dp0 output by the secondary preceding frame image data reconstructor 9 as the reconstructed preceding frame image data Dq0 when the absolute value of the amount of change is small, and to use the primary reconstructed preceding frame image data Db0 output by decoding unit 7 as the reconstructed preceding frame image data Dq0 when the absolute value of the amount of change is large, so it is possible both to prevent the occurrence of error when the input image data do not change, and to reduce the error when the input image data change.
Since the original current frame image data Di1 are encoded by the encoding unit 4 so as to compress the amount of data, and the compressed data are delayed, the amount of memory needed for delaying the image data Di1 by one frame interval can be reduced.
Since the original current frame image data Di1 are encoded and decoded without decimating the pixel information, compensated image data Dj1 with appropriate values can be generated and the response speed of the liquid crystal can be precisely controlled.
Since the compensated image data generator 11 generates the compensated image data Dj1 on the basis of the original current frame image data Di1 and the reconstructed preceding frame image data Dq0, the compensated image data Dj1 are not affected by encoding and decoding errors.
Second Embodiment
In the first embodiment, the compensated image data generator 11 calculates a second amount of change between the primary reconstructed preceding frame image data Db0 or the secondary reconstructed preceding frame image data Dp0 and the original current frame image data Di1, and then compensates the voltage level of the brightness signal or other signal corresponding to the image data of the current frame in accordance with the response speed characteristic and the amount of change in the image data between the current frame and preceding frame, but calculating these data for each pixel places an increased computational load on the processing unit, which is a problem. The load may be tolerable if the formulas for calculating the compensation data are simple, but if the formulas are complex, the computational load may be too great to handle. In the second embodiment, described below, the compensation amounts to be applied to the image data of the current frame are pre-calculated from the response times of the liquid crystal corresponding to the image data values in the current frame and the preceding frame, and the compensation amounts thus obtained are stored in a lookup table; the amounts of compensation can then be found by use of this table, and the compensated image data are generated and output by use of these compensation amounts.
Aside from storing a table of compensation amounts in the compensated image data generator 11 and outputting compensation amounts obtained by use of the table, this embodiment is similar to the first embodiment described above, so redundant descriptions will be omitted.
FIG. 8 shows the details of an example of the compensated image data generator 11 used in the second embodiment. This compensated image data generator 11 has a compensation unit 11c and a lookup table (LUT) 11d.
As will be explained in more detail below, the lookup table 11d takes the reconstructed preceding frame image data Dq0 and current frame image data Di1 as inputs, and outputs data prestored at an address (memory location) specified thereby as a compensation value Dc1. The lookup table 11d is set up in advance so as to output an amount of compensation for the image data of the current frame, based on the response time of the liquid crystal display, corresponding to arbitrary preceding frame image data and arbitrary current frame image data.
The compensation unit 11c is similar to the one shown in FIG. 2; it uses the compensation values Dc1 to compensate the original current frame image data Di1 and outputs the compensated image data Dj1. The compensation unit 11c generates the compensated image data Dj1 by, for example, adding the compensation values Dc1 to the original current frame image data Di1.
Instead of this type of compensation unit, one that generates the compensated image data Dj1 by multiplying the original current frame image data Di1 by the compensation values Dc1 may be used.
FIG. 9 schematically shows the structure of the lookup table 11d.
The part shown as a matrix in FIG. 9 is the lookup table 11d; the original current frame image data Di1 and preceding frame image data Dq0, which are given as addresses, are 8-bit image data taking on values from 0 to 255. The lookup table shown in FIG. 9 has a two-dimensional array of 256×256 data items, and outputs a compensation amount Dc1 = dt(Di1, Dq0) corresponding to the combination of the original current frame image data Di1 and the reconstructed preceding frame image data Dq0.
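A sketch of this lookup follows. The table contents below are generated by a simple proportional rule merely so the example runs; in the embodiment the entries would instead be derived from measured liquid crystal response times as in FIGS. 10 and 11.

```python
import numpy as np

# Build an illustrative 256x256 table dt(Di1, Dq0). In practice the entries would
# come from the response-time measurements of FIGS. 10-11; here a proportional
# rule stands in, and the diagonal Di1 == Dq0 is zero as in FIG. 11.
levels = np.arange(256, dtype=np.float32)
lut = np.round(0.5 * (levels[:, None] - levels[None, :])).astype(np.int16)  # dt[Di1, Dq0]

def compensate_with_lut(di1: np.ndarray, dq0: np.ndarray,
                        table: np.ndarray = lut) -> np.ndarray:
    """Second embodiment: fetch Dc1 = dt(Di1, Dq0) from the table and add it to Di1."""
    dc1 = table[di1.astype(np.intp), dq0.astype(np.intp)]      # per-pixel table lookup
    return np.clip(di1.astype(np.int16) + dc1, 0, 255).astype(np.uint8)
```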
In this embodiment, as explained with reference to FIG. 4, there are cases in which an interval longer than one frame interval is needed for the liquid crystal to reach the proper transmittance value, so when a brightness value in the current frame image changes, the response speed of the liquid crystal is improved by applying an increased or reduced voltage, so as to bring the transmittance to the desired value within one frame interval.
FIG. 10 shows an example of the response times of a liquid crystal corresponding to changes in image brightness between the preceding frame and the current frame.
In FIG. 10, the x axis represents the value of the current frame image data Di1 (the brightness value in the image in the current frame), the y axis represents the value of the preceding frame image data Di0 (the brightness value in the image in the previous frame), and the z axis represents the response time required by the liquid crystal to reach the transmittance corresponding to the brightness value of the current frame image data Di1 from the transmittance corresponding to the brightness value of the preceding frame image data Di0.
Whereas the preceding frame image data Di0 shown in FIG. 10 indicate the image data actually input one frame before the current frame image data Di1, the reconstructed preceding frame image data Dq0 shown in FIG. 9 are generated from the primary reconstructed preceding frame image data Db0 and the secondary reconstructed preceding frame image data Dp0 (by selecting one or the other, for example), and are thus obtained by reconstruction. The reconstructed preceding frame image data Dq0 are input to the lookup table, but the reconstructed preceding frame image data Dq0 include encoding and decoding error; the values of the preceding frame image data Di0 used in FIG. 10, and in FIGS. 11 and 14 which will be described below, have not been encoded and decoded and accordingly do not include encoding and decoding error.
If the brightness values of the current frame image in FIG. 10 are 8-bit values, there are 256×256 combinations of brightness values in the current frame image and the preceding frame image, and consequently 256×256 response times, but FIG. 10 has been simplified to show only 9×9 response times corresponding to combinations of brightness values.
As shown in FIG. 10, the response time varies greatly with the combination of brightness values in the current frame image and the preceding frame image, but when the images in the current and preceding frames have the same brightness value, the response time is zero, as shown in the diagonal direction from front to back in the quadrilateral in the z=0 plane in FIG. 10.
FIG. 11 shows an example of amounts of compensation of the current frame image data Di1 determined from the liquid crystal response times in FIG. 10.
The compensation amount Dc1 shown in FIG. 11 is the compensation amount that should be added to the current frame image data Di1 in order for the liquid crystal to reach the transmittance corresponding to the value of the current frame image data Di1 when one frame interval has elapsed; the x and y axes are the same as in FIG. 10, but the z axis differs from FIG. 10 by representing the amount of compensation.
The amount of compensation may be positive (+) or negative (−), because the value of the current frame image data may be greater or less than the value of the preceding frame image data. The amount of compensation is positive on the left side in FIG. 11 and negative on the right side, and is zero when the images in the current and preceding frames have the same brightness value, as shown in the diagonal direction from front to back in the quadrilateral in the z=0 plane, as in FIG. 10. Also as in FIG. 10, if the brightness values of the current frame image are 8-bit values, there are 256×256 compensation amounts corresponding to combinations of brightness values in the current frame image and the preceding frame image, but FIG. 11 has been simplified to show only 9×9 compensation amounts corresponding to combinations of brightness values.
Because the response time of a liquid crystal depends on the brightness values of the images of the current frame and the preceding frame as shown inFIG. 10, and the compensation amount cannot always be obtained by a simple formula, it is sometimes advantageous to determine the compensation amount by use of a lookup table, rather than by computation; data for 256×256 compensation amounts corresponding to the brightness values of both the current frame image data Di1 and the preceding frame image data Di0 are stored in the lookup table in the compensatedimage data generator11, as shown inFIG. 11.
The compensation amounts shown in FIG. 11 are set so that the larger compensation amounts correspond to the combinations of brightness values for which the response speed of the liquid crystal is slow. The response speed of a liquid crystal is particularly slow (the response time is particularly long) in changing from an intermediate brightness (gray) to a high brightness (white). Accordingly, the response speed can be effectively improved by assigning strongly positive or negative values to compensation amounts corresponding to combinations of preceding frame image data Di0 representing intermediate brightness and current frame image data Di1 representing high brightness.
FIG. 12 is a flowchart schematically showing an example of the image data processing method in the compensated image data generator 11 in the present embodiment. The process up to steps St9 and St10 in FIG. 12 is the same as in the example shown in FIGS. 6A and 6B; steps St1 to St8 are omitted from the drawing.
Upon receiving input of the current frame image data Di1 and the reconstructed preceding frame image data Dq0 (the primary reconstructed preceding frame image data Db0 or the secondary reconstructed preceding frame image data Dp0 selected in steps St9 and St10), the compensated image data generator 11 detects the compensation amount from the lookup table 11d (St16) and decides whether the compensation amount data are zero or not (St17).
When the compensation amount data are not zero (St17: NO), the compensated image data Dj1 (1) are generated and output by compensating the original current frame image data Di1, which are input separately, with the compensation amount data (St18).
When the compensation amount data are zero (St17: YES), no compensation is applied to the current frame image data Di1 (a compensation value of zero is applied), and the current frame image data Di1 are output without alteration as the compensated image data Dj1 (2) (St19).
The display unit 12 displays the compensated image data Dj1 by, for example, applying a voltage corresponding to a brightness value expressed thereby to the liquid crystal.
The compensation in the second embodiment is thus carried out by using a lookup table 11d in which pre-calculated compensation amounts are stored, so that when the voltage level of a brightness signal or other signal in the image data of the current frame is compensated, the increase in the computational load placed on the processing unit in order to calculate the image data for each pixel is less than in the first embodiment.
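As an illustration only (not part of the specification), the table-based flow of steps St16 to St19 can be sketched in C as below for one pixel; the table name comp_amount, its dimensions, and the clamping to the 8-bit range are assumptions standing in for the 256×256 compensation amounts of FIG. 11.

    #include <stdint.h>

    /* Hypothetical 256x256 table of signed compensation amounts, filled in
     * advance from measured liquid crystal response times (the contents are
     * assumed here, standing in for lookup table 11d / FIG. 11). */
    extern const int16_t comp_amount[256][256];

    /* Steps St16-St19 for one pixel: look up the compensation amount for the
     * pair (Di1, Dq0) and, if it is nonzero, add it to Di1. */
    static uint8_t compensate_pixel(uint8_t di1, uint8_t dq0)
    {
        int16_t c = comp_amount[di1][dq0];   /* St16: read amount from table */
        if (c == 0)
            return di1;                      /* St17/St19: output Di1 as is  */
        int v = (int)di1 + c;                /* St18: apply the amount       */
        if (v < 0)   v = 0;                  /* keep within the 8-bit range  */
        if (v > 255) v = 255;
        return (uint8_t)v;
    }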
Third Embodiment
In the second embodiment it was shown that it is possible to reduce the computational load by using a lookup table 11d containing pre-calculated compensation values when compensating the voltage level of a brightness or other signal in the image data of the current frame, but the computational load can be further reduced by having the lookup table store compensated image data obtained by compensating the image data of the current frame with the compensation values. Accordingly, in the third embodiment described below, compensated image data obtained by compensating the image data of the current frame with the compensation values are stored in a lookup table, and the compensated image data of the current frame are output by use of the table.
Except for storing a table of compensated image data obtained by compensating the current frame image data in advance in the compensated image data generator 11 and using the compensated image data as the output of the compensated image data generator 11, the third embodiment is similar to the second embodiment, and redundant descriptions will be omitted.
FIG. 13 shows the details of an example of the compensated image data generator 11 used in the third embodiment. This compensated image data generator 11 has a lookup table 11e.
The lookup table 11e takes the reconstructed preceding frame image data Dq0 and current frame image data Di1 as inputs, and outputs data prestored at an address (memory location) specified thereby as compensated image data Dj1, as will be explained in more detail below.
The lookup table 11e is set up in advance so as to output the values of the compensated image data Dj1 corresponding to arbitrary preceding frame image data and arbitrary current frame image data, based on the response time of the liquid crystal display.
FIG. 14 shows an example of the compensated image data output obtained from the compensation amounts given in FIG. 11 for the original current frame image data Di1.
FIG. 14 shows compensated image data Dj1 in which the current frame image data Di1 have been compensated so that the liquid crystal will reach the transmittance corresponding to the value of the original current frame image data Di1 when one frame interval has elapsed; of the coordinate axes, only the vertical axis, which shows the values of the compensated image data Dj1, differs from FIG. 11.
Because the response time of a liquid crystal depends on the brightness values of the images of the current frame and the preceding frame as shown in FIG. 10, and the compensation amount cannot always be obtained by a simple formula, compensated image data Dj1 obtained by adding 256×256 compensation amounts, corresponding to the brightness values of both the current frame image data Di1 and the preceding frame image data Di0 as shown in FIG. 11, are stored in the lookup table 11e shown in FIG. 13. The compensated image data Dj1 are set so as not to exceed the displayable range of brightnesses of the display unit 12.
The values of the compensated image data Dj1 are set equal to the values of the current frame image data Di1 in the part of the lookup table 11e in which the current frame image data Di1 and the preceding frame image data Di0 are equal, that is, the part in which the image does not vary with time.
FIG. 15 is a flowchart schematically showing an example of the image data processing method in the compensated image data generator 11 in the present embodiment. The process up to steps St9 and St10 in FIG. 15 is the same as in the example shown in FIG. 6; steps St1 to St8 are omitted from the drawing.
Regardless of whether the primary reconstructed preceding frame image data Db0 (St9) or the secondary reconstructed preceding frame image data Dp0 (St10) are selected as the reconstructed preceding frame image data Dq0, the compensated image data generator 11 accesses the lookup table 11e with the original current frame image data Di1 and the reconstructed preceding frame image data Dq0 as addresses, reads (detects) the compensated image data Dj1 from the lookup table 11e, and outputs the compensated image data Dj1 to the display unit 12 (St20). The display unit 12 displays the compensated image data Dj1 by, for example, applying a voltage corresponding to the brightness value thereof to the liquid crystal.
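For illustration, and under the same assumptions as the sketch given for the second embodiment, step St20 reduces to a single table access per pixel; the table name comp_data and its contents are hypothetical, standing in for lookup table 11e.

    #include <stdint.h>

    /* Hypothetical 256x256 table storing the compensated output values Dj1
     * themselves; diagonal entries equal the input value, and all entries are
     * pre-clipped to the displayable range, as described for FIG. 14. */
    extern const uint8_t comp_data[256][256];

    /* Step St20: the pair (Di1, Dq0) forms the address, and the stored value
     * is output directly as Dj1, with no arithmetic at display time. */
    static uint8_t compensate_pixel_lut(uint8_t di1, uint8_t dq0)
    {
        return comp_data[di1][dq0];
    }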
In this type of embodiment, since a lookup table including pre-calculated compensated image data Dj1 is used, there is no need to compensate the original current frame image data with compensation values output from the lookup table, so the load on the processing device can be further reduced.
Fourth Embodiment
The second and third embodiments described above show examples of reducing the computational load by using a lookup table when compensating the current frame image data, but a lookup table is a type of memory device, and it is desirable to reduce the size of the memory device.
The present embodiment enables the size of the lookup table to be reduced; the present embodiment is similar to the third embodiment described above except for the internal processing of the compensated image data generator 11, so redundant descriptions will be omitted.
FIG. 16 is a block diagram showing the internal structure of the compensated image data generator 11 in the present embodiment. This compensated image data generator 11 has data converters 13 and 14, a lookup table 15, and an interpolator 16.
Data converter 13 linearly quantizes the current frame image data Di1 from the receiving unit 2, reducing the number of bits from eight to three, for example, outputs current frame image data De1 with the reduced number of bits, and outputs an interpolation coefficient k1 that it obtains when reducing the number of bits.
Similarly, data converter 14 linearly quantizes the reconstructed preceding frame image data Dq0 input from the reconstructed preceding frame image data generator 10, reducing the number of bits from eight to three, for example, outputs preceding frame image data De0 with the reduced number of bits, and outputs an interpolation coefficient k0 that it obtains when reducing the number of bits.
Bit reduction is carried out in the data converters 13 and 14 by discarding low-order bits. When 8-bit input data are converted to 3-bit data as noted above, the five low-order bits are discarded.
If the five low-order bits were to be filled with zeros when the 3-bit data were restored to 8 bits, the restored 8-bit data would have smaller values than the 8-bit data before the bit reduction. The interpolator 16 performs a correction on the output of the lookup table 15 according to the low-order bits discarded in the bit reduction, as described below.
The lookup table 15 inputs the 3-bit current frame image data De1 and 3-bit preceding frame image data De0 and outputs four intermediate compensated image data Df1 to Df4. The lookup table 15 differs from the lookup table 11e in the third embodiment in that its input data are data with a reduced number of bits, and besides outputting intermediate compensated image data Df1 corresponding to the input data, it outputs three additional intermediate compensated image data Df2, Df3, and Df4 corresponding to combinations of data (data specifying a memory location as an address) having values greater by one.
The interpolator 16 generates the compensated image data Dj1 from the intermediate compensated image data Df1 to Df4 and the interpolation coefficients k0 and k1.
FIG. 17 shows the structure of the lookup table 15. Image data De0 and De1 are 3-bit image data (with eight gray levels) taking on eight values from zero to seven. The lookup table 15 stores nine rows and nine columns of data arranged two-dimensionally. Of the nine rows and nine columns, eight rows and eight columns are specified by the input data; the ninth row and ninth column store output data (intermediate compensated image data) corresponding to data with a value greater by one.
The lookup table 15 outputs data dt(De1, De0) corresponding to the three-bit values of the image data De1 and De0 as intermediate compensated image data Df1, and also outputs three data dt(De1+1, De0), dt(De1, De0+1), and dt(De1+1, De0+1) from the positions adjacent to the intermediate compensated image data Df1 as intermediate compensated image data Df2, Df3, and Df4, respectively.
The interpolator 16 uses the intermediate compensated image data Df1 to Df4 and the interpolation coefficients k1 and k0 to calculate the compensated image data Dj1 by equation (5) below.
Dj1=(1−k0)×{(1−k1)×Df1+k1×Df2}+k0×{(1−k1)×Df3+k1×Df4}  (5)
FIG. 18 illustrates the method of calculation of the compensated image data Dj1 represented by equation (5) above. Values s1 and s2 are thresholds used when the number of bits of the original current frame image data Di1 is converted by data conversion unit 13. Values s3 and s4 are thresholds used when the number of bits of the preceding frame image data Dq0 is converted by data conversion unit 14. Threshold s1 corresponds to the current frame image data De1 with the converted number of bits, threshold s2 corresponds to the image data De1+1 that is one gray level (with the converted number of bits) greater than image data De1, threshold s3 corresponds to the preceding frame image data De0 with the converted number of bits, and threshold s4 corresponds to the image data De0+1 that is one gray level (with the converted number of bits) greater than image data De0.
The interpolation coefficients k1 and k0 are calculated from the relation of the value before bit reduction to the bit reduction thresholds s1, s2, s3, and s4, in other words, from the relation of the value expressed by the discarded low-order bits to the thresholds; the calculation is carried out by, for example, equations (6) and (7) below.
k1=(Di1−s1)/(s2−s1)  (6)
where, s1<Di1≦s2.
k0=(Dq0−s3)/(s4−s3)  (7)
where, s3<Dq0<s4.
The compensated image data Dj1 calculated by the interpolation operation shown in equation (5) above are output to the display unit 12. The rest of the operation is identical to that described in connection with the second or third embodiment.
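The bit reduction and interpolation of equations (5) to (7) can be sketched as follows; this is an illustrative reading only, assuming linear quantization that keeps the three high-order bits (so that s2−s1 = s4−s3 = 32), a 9×9 table named lut15 indexed as lut15[De1][De0], and floating-point arithmetic for clarity.

    #include <stdint.h>

    /* Hypothetical 9x9 table of intermediate compensated data; the ninth row
     * and column hold the entries for De1+1 and De0+1, as in FIG. 17. */
    extern const uint8_t lut15[9][9];

    static uint8_t compensate_pixel_interp(uint8_t di1, uint8_t dq0)
    {
        uint8_t de1 = di1 >> 5;                  /* data converter 13        */
        uint8_t de0 = dq0 >> 5;                  /* data converter 14        */
        double  k1  = (di1 & 0x1F) / 32.0;       /* equation (6), s2-s1 = 32 */
        double  k0  = (dq0 & 0x1F) / 32.0;       /* equation (7), s4-s3 = 32 */

        double df1 = lut15[de1][de0];            /* dt(De1,   De0)   */
        double df2 = lut15[de1 + 1][de0];        /* dt(De1+1, De0)   */
        double df3 = lut15[de1][de0 + 1];        /* dt(De1,   De0+1) */
        double df4 = lut15[de1 + 1][de0 + 1];    /* dt(De1+1, De0+1) */

        /* Equation (5): bilinear interpolation between the four entries. */
        double dj1 = (1.0 - k0) * ((1.0 - k1) * df1 + k1 * df2)
                   +        k0  * ((1.0 - k1) * df3 + k1 * df4);
        return (uint8_t)(dj1 + 0.5);
    }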
FIG. 19 is a flowchart schematically showing an example of the image data processing method in the compensated image data generator 11 in the present embodiment. The process up to steps St9 and St10 in FIG. 19 is the same as in the example shown in FIG. 6; steps St1 to St8 are omitted from the drawing.
Regardless of whether the primary reconstructed preceding frame image data Db0 (St9) or the secondary reconstructed preceding frame image data Dp0 (St10) are selected as the reconstructed preceding frame image data Dq0, the compensated image data generator 11, in data converter 14, outputs truncated preceding frame image data De0 obtained by reducing the number of bits of the reconstructed preceding frame image data Dq0, and outputs the interpolation coefficient k0 obtained in the bit reduction (St21). In data converter 13, it outputs truncated current frame image data De1 obtained by reducing the number of bits of the original current frame image data Di1, and outputs the interpolation coefficient k1 obtained in the bit reduction (St22).
Next, the compensated image data generator 11 detects and outputs from the lookup table 15 the intermediate compensated image data Df1 corresponding to the combination of the truncated preceding frame image data De0 and the truncated current frame image data De1, and the intermediate compensated image data Df2 to Df4 corresponding to the combination of data De0+1 (the value De0 plus one) and data De1, the combination of data De0 and data De1+1 (the value De1 plus one), and the combination of data De1+1 and data De0+1 (St23).
Interpolation is then performed in the interpolator 16, according to the compensated data Df1 to Df4, interpolation coefficient k0, and interpolation coefficient k1, as explained with reference to FIG. 18, to generate the interpolated compensated image data Dj1. The compensated image data Dj1 thus generated become the output of the compensated image data generator 11 (St24).
Calculating the compensated image data Dj1 by performing interpolation using the interpolation coefficients k0 and k1 and the four compensated data Df1, Df2, Df3, and Df4 corresponding to the data (De0, De1) obtained by converting the number of bits of the original current frame image data Di1 and the reconstructed preceding frame image data Dq0, and the adjacent data (De1+1, De0), (De1, De0+1), and (De1+1, De0+1), as explained above, can reduce the effect of quantization error in the data converters 13 and 14 on the compensated image data Dj1.
The number of bits after data conversion by the data conversion units 13 and 14 is not limited to three; any number of bits may be selected provided the number of bits enables compensated image data Dj1 to be obtained with an accuracy that is acceptable in practice (according to the purpose of use) by interpolation in the interpolator 16. The number of data items in the lookup table memory unit 15 naturally varies depending on the number of bits after quantization. The number of bits after data conversion by the data converters 13 and 14 may differ, and it is also possible not to implement one or the other of the data converters.
Furthermore, in the example above, the data converters 13 and 14 performed bit reduction by linear quantization, but nonlinear quantization may also be performed. In that case, the interpolator 16 is adapted to calculate the compensated image data Dj1 by use of an interpolation operation employing a higher-order function, instead of by linear interpolation.
When the number of bits is converted by nonlinear quantization, the error in the compensated image data Dj1 accompanying bit reduction can be reduced by raising the quantization density in areas in which the compensated image data change greatly (areas in which there are large differences between adjacent compensated image data).
In the present embodiment, compensated image data can be determined accurately even if the size of the lookup table used for determining the compensated image data is reduced.
In the fourth embodiment as described above, the lookup table is adapted to output intermediate compensated image data Df1, Df2, Df3, and Df4, and the compensated image data Dj1 are calculated by performing interpolation using these intermediate compensated image data. A lookup table that outputs intermediate compensation values instead of intermediate compensated image data may be used, however, and compensation values may be determined by performing interpolation using the intermediate compensation values, subsequent operations being carried out as in the second embodiment to calculate compensated image data Dj1 in which the original current frame image data Di1 are compensated by using these compensation values.
Fifth Embodiment
FIG. 20 is a block diagram showing the structure of a liquid crystal display driving device according to a fifth embodiment of the present invention.
The driving device in the fifth embodiment is generally the same as the driving device in the first embodiment. The differences are that the encoding unit 4 of the first embodiment is replaced by a quantizing unit 24, the amount-of-change calculation unit 8, secondary preceding frame image data reconstructor 9, and reconstructed preceding frame image data generator 10 are replaced by another amount-of-change calculation unit 26, secondary preceding frame image data reconstructor 27, and reconstructed preceding frame image data generator 28, the decoding units 6 and 7 of the first embodiment are omitted, and bit restoration units 29 and 30 are provided.
In the first embodiment, the encoding unit 4 was used to compress the data, the compressed image data were delayed in the delay unit 5, and the decoders 6 and 7 were used to decompress the data, whereby the size of the frame memory used in the delay unit 5 could be reduced; in the fifth embodiment, the image data are instead compressed by use of the quantizing unit 24 and decompressed by use of the bit restoration units 29 and 30.
The quantizing unit 24 reduces the number of bits in the original current frame image data Di1 by performing linear or nonlinear quantization, and outputs the quantized data, denoted data Dg1, which have a reduced number of bits. If the number of bits is reduced by quantization, the amount of data to be delayed in the delay unit 25 is reduced; accordingly, the size of the frame memory constituting the delay unit can be reduced.
An arbitrary number of bits can be selected as the number of bits after quantization, to produce a predetermined amount of image data after bit reduction. If 8-bit data for each of the colors red, green, and blue are output from the receiving unit 2, the amount of image data can be reduced by half by reducing each to four bits. The quantizing unit may also quantize the red, green, and blue data to different numbers of bits. The amount of image data can be reduced effectively by, for example, quantizing blue, to which human visual sensitivity is generally low, to fewer bits than the other colors.
In the description below, the original current frame image data Di1 are 8-bit data, linear quantization is carried out by extracting a certain number of high-order bits, such as the four upper bits, and 4-bit data are generated.
The quantized image data Dg1 output from the quantizing unit 24 are input to the delay unit 25 and the amount-of-change calculation unit 26.
The delay unit 25 receives the quantized data Dg1, and outputs image data preceding the original current frame image data Di1 by one frame; that is, it outputs quantized image data Dg0 in which the image data of the preceding frame are quantized.
The delay unit 25 comprises a memory that stores the quantized image data Dg1 of the preceding frame for one frame interval. Accordingly, the fewer bits of image data there are after quantization of the original current frame image data Di1, the smaller the size of the memory constituting the delay unit 25 can be.
The amount-of-change calculation unit 26 subtracts the quantized image data Dg1 expressing the image of the current frame from the quantized image data Dg0 expressing the image of the preceding frame to obtain an amount of change Bv1 therebetween and its absolute value |Bv1|. That is, it generates and outputs amount-of-change data Dt1 and absolute amount-of-change data |Dt1| representing, with a reduced number of bits, the amount of change and its absolute value. The amount of change Bv1 will also be referred to as the first amount of change, and the amount-of-change data Dt1 and absolute amount-of-change data |Dt1| will similarly be referred to as the first amount-of-change data and first absolute amount-of-change data.
Thus, the amount-of-change calculation unit 26 performs a function corresponding to the amount-of-change calculation circuit comprising the combination of the amount-of-change calculation unit 8 and the decoding unit 6 in the first embodiment.
Bit restoration unit 29 outputs amount-of-change data Du1 expressing the amount of change Bv1 in the same number of bits as the original image data Di1, based on the amount-of-change data Dt1 output from the amount-of-change calculation unit 26.
The amount-of-change data Du1 are obtained by bit restoration, as will be described below.
Bit restoration unit 30 outputs bit-restored original image data Dh0 by adjusting the number of bits of the quantized image data Dg0 output from the delay unit 25 to the number of bits of the original current frame image data Di1. The bit-restored original image data Dh0 correspond to the decoded image data Db0 in the first embodiment etc., and like the decoded image data Db0 in the first embodiment, will also be referred to as primary reconstructed preceding frame image data.
The secondary preceding frame image data reconstructor 27 receives the original current frame image data Di1 and the bit-restored amount-of-change data Du1, and generates and outputs secondary reconstructed preceding frame image data Dp0 corresponding to the image in the preceding frame by adding the amount-of-change data Du1 to the image data Di1.
Because the number of bits of the amount-of-change data Dt1 is, like the number of bits of the quantized image data Dg0 and Dg1, less than in the original current frame image data Di1, before being added to the original current frame image data Di1, the number of bits in the amount-of-change data Dt1 must be made equal to the number of bits in the original current frame image data Di1. Bit restoration unit 29 is provided for this purpose; it generates the bit-restored amount-of-change data Du1 by performing a process that adjusts the number of bits of the data Dt1 expressing the amount of change Bv1 according to the number of bits in the original current frame image data Di1.
If the quantizing unit 24 quantizes 8-bit data to 4-bit data, for example, the amount-of-change data Dt1 are obtained by a subtraction operation on the 4-bit quantized data Dg0 and Dg1, so the amount-of-change data Dt1 are represented by a sign bit s and four data bits b7, b6, b5, b4.
In the amount-of-change data Dt1, these bits are arranged in the order s, b7, b6, b5, b4, s being the most significant bit.
If 0's are inserted into the lower four bits to adjust the number of bits for the purpose of bit restoration in bit restoration unit 29, the data after bit restoration are s, b7, b6, b5, b4, 0, 0, 0, 0; if 1's are inserted, the data are s, b7, b6, b5, b4, 1, 1, 1, 1. If the same value as in the upper bits is inserted into the lower bits, s, b7, b6, b5, b4, b7, b6, b5, b4 can be used.
The amount-of-change data Du1 obtained in this way after bit restoration are added to the original current frame image data Di1 to obtain the secondary reconstructed preceding frame image data Dp0; if the original current frame image data Di1 are 8-bit data, then the secondary reconstructed preceding frame image data Dp0 must be restricted to the interval from 0 to 255.
If the data are quantized to a number of bits other than four in the quantizing unit 24, the number of bits can be adjusted in a way similar to the above, or by using a combination of the ways described above.
Based on the absolute amount-of-change data |Dt1| output by the amount-of-change calculation unit 26, the reconstructed preceding frame image data generator 28 outputs the bit-restored primary reconstructed preceding frame image data Dh0 output by bit restoration unit 30 as the reconstructed preceding frame image data Dq0 when the absolute amount-of-change data |Dt1| is greater than a threshold SH0, which may be set arbitrarily, and outputs the secondary reconstructed preceding frame image data Dp0 output by the secondary preceding frame image data reconstructor 27 as the reconstructed preceding frame image data Dq0 when the absolute amount-of-change data |Dt1| is less than SH0.
Bit restoration unit 30 adjusts the number of bits of the quantized image data Dg0 to the number of bits of the current frame image data Di1 and outputs the bit-restored primary reconstructed preceding frame image data Dh0 as noted above; it is provided because it is desirable to adjust the preceding frame quantized image data Dg0 to the number of bits of the current frame image data Di1 before input to the reconstructed preceding frame image data generator 28.
Available methods of adjusting the number of bits in bit restoration unit 30 include setting the lacking low-order bits to 0 or to 1, or inserting the same value as a plurality of upper bits into the lower bits.
The case in which the quantizing unit 24 quantizes 8-bit data to 4-bit data, for example, and the quantized 4-bit data are adjusted to 8 bits in bit restoration unit 30 will be described. If the 4-bit data after quantization are, from the most significant bit, b7, b6, b5, b4, then inserting 0's into the lower four bits produces b7, b6, b5, b4, 0, 0, 0, 0 and inserting 1's produces b7, b6, b5, b4, 1, 1, 1, 1. If the same value as in the upper bits is inserted into the lower bits, b7, b6, b5, b4, b7, b6, b5, b4 can be used.
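A short C sketch of this compression path is given below for one pixel; it is illustrative only, assuming 4-bit quantization, the copy-the-upper-bits restoration option, and function names that are not from the specification.

    #include <stdint.h>

    /* Quantizing unit 24: keep the four high-order bits of an 8-bit value. */
    static uint8_t quantize4(uint8_t d)  { return (uint8_t)(d >> 4); }

    /* Bit restoration unit 30: refill the discarded low-order bits by copying
     * the upper bits into them, giving b7..b4, b7..b4. */
    static uint8_t restore8(uint8_t q)   { return (uint8_t)((q << 4) | q); }

    /* Amount-of-change calculation unit 26, bit restoration unit 29, and the
     * secondary reconstructor 27 for one pixel: Dt1 = Dg0 - Dg1, Du1 is Dt1
     * restored to the 8-bit scale, and Dp0 = Di1 + Du1 limited to 0..255. */
    static uint8_t secondary_reconstruct(uint8_t di1, uint8_t dg0, uint8_t dg1)
    {
        int dt1 = (int)dg0 - (int)dg1;   /* first amount-of-change data Dt1    */
        int du1 = dt1 * 17;              /* copy-upper-bits restoration of the
                                            magnitude: |x| * 17 = (|x|<<4)|(|x|) */
        int dp0 = (int)di1 + du1;
        if (dp0 < 0)   dp0 = 0;
        if (dp0 > 255) dp0 = 255;
        return (uint8_t)dp0;
    }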
From the current frame image data Di1 and the reconstructed preceding frame image data Dq0, the compensated image data generator 11 outputs compensated image data Dj1 compensated so that when a brightness value in the current frame image changes from the image data of the preceding frame image, the liquid crystal will achieve the transmittance corresponding to the brightness value in the current frame image within one frame interval.
The voltage level of a signal for displaying the image in the original current frame image data Di1 is compensated here so as to compensate for the delay due to the response speed characteristic of the display unit 12 of the liquid crystal display device.
The compensated image data generator 11 compensates the voltage level of the signal for displaying the image corresponding to the image data of the current frame, in correspondence to the response speed characteristic indicating the time from the input of image data to the liquid crystal display unit 12 to the display thereof and the amount of change between the image data of the preceding frame and the image data of the current frame input to the liquid crystal display driving device.
Other operations are the same as in the first embodiment, so a detailed description will be omitted.
FIG. 21 is a flowchart schematically showing an example of the image data processing method of the image data processing circuit shown in FIG. 20.
First, when the original current frame image data Di1 are input from the input terminal 1 through the receiving unit 2 to the image data processing circuit 23 (St31), the quantizing unit 24 compressively quantizes the original current frame image data Di1 and outputs the quantized image data Dg1, the data size of which has been reduced (St32). The quantized image data Dg1 are input to the delay unit 25, which outputs the quantized image data with a delay of one frame. Accordingly, when the quantized image data Dg1 are input, the quantized image data Dg0 of the preceding frame are output from the delay unit 25 (St33).
By restoring bits to the quantized image data Dg0 output from the delay unit 25, bit restoration unit 30 generates bit-restored image data, more specifically, primary reconstructed preceding frame image data Dh0 (St34).
The quantized image data Dg1 output from the quantizing unit 24 and the quantized image data Dg0 output from the delay unit 25 are input to the amount-of-change calculation unit 26, and the difference obtained, for instance, by subtracting quantized image data Dg1 from quantized image data Dg0 is output as amount-of-change data Dt1 for each pixel, the absolute value of the difference also being output as absolute amount-of-change data |Dt1| (St35). The amount-of-change data Dt1 indicates the temporal change of each item of image data in the frame by using the quantized image data of two temporally differing frames, such as quantized image data Dg0 and quantized image data Dg1.
Bit restoration unit 29 generates and outputs bit-restored amount-of-change data Du1 by restoring bits to the amount-of-change data Dt1 (St36).
The bit-restored amount-of-change data Du1 are input to the secondary preceding frame image data reconstructor 27, which generates and outputs the secondary reconstructed preceding frame image data Dp0 by adding the bit-restored amount-of-change data Du1 and the original current frame image data Di1, which are input separately (St37).
The bit-reduced absolute amount-of-change data |Dt1| are input to the reconstructed preceding frame image data generator 28, which decides whether the first absolute amount-of-change data |Dt1| are greater than a first threshold (St38). If the absolute amount-of-change data |Dt1| are greater than the first threshold (St38: YES), the reconstructed preceding frame image data generator 28 selects, from the bit-restored image data, that is, the primary reconstructed preceding frame image data Dh0 and the secondary reconstructed preceding frame image data Dp0, the primary reconstructed preceding frame image data Dh0, and outputs the primary reconstructed preceding frame image data Dh0 to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0 (St39). When the absolute amount-of-change data |Dt1| are not greater than the first threshold (St38: NO), the reconstructed preceding frame image data generator 28 selects the secondary reconstructed preceding frame image data Dp0 rather than the primary reconstructed preceding frame image data Dh0 and outputs the secondary reconstructed preceding frame image data Dp0 to the compensated image data generator 11 as the reconstructed preceding frame image data Dq0 (St40).
When the primary reconstructed preceding frame image data Dh0 are input as the reconstructed preceding frame image data Dq0, the compensated image data generator 11 calculates the difference between the primary reconstructed preceding frame image data Dh0 and the original current frame image data Di1, that is, the second amount of change Dw1 (1) (St41), calculates a compensation value from the response time of the liquid crystal corresponding to the second amount of change Dw1 (1), and generates and outputs compensated image data Dj1 (1) by using that compensation value to compensate the original current frame image data Di1 (St43).
When the secondary reconstructed preceding frame image data Dp0 are input as the reconstructed preceding frame image data Dq0, the compensated image data generator 11 calculates the difference between the secondary reconstructed preceding frame image data Dp0 and the original current frame image data Di1, that is, the second amount of change Dw1 (2) (St42), calculates a compensation value from the response time of the liquid crystal corresponding to the second amount of change Dw1 (2), and generates and outputs the compensated image data Dj1 (2) by using the compensation value to compensate the original current frame image data Di1 (St44).
The compensation in steps St43 and St44 compensates the voltage level of a brightness signal or other display signal corresponding to the image data of the current frame in accordance with the response speed characteristic representing the time from input of image data to the liquid crystal display unit 12 until display of the image, and the amount of change from the preceding frame to the current frame in the image data input to the liquid crystal display driving device.
If the first amount-of-change data Dt1 are zero, the second amount of change Dw1 (2) is also zero and the compensation value is zero, so the original current frame image data Di1 are output without compensation as the compensated image data Dj1 (2).
The display unit 12 displays the compensated image data Dj1 by, for example, applying a voltage corresponding to a brightness value expressed thereby to the liquid crystal.
In the description given above, the reconstructed preceding frame image data generator 28 selects either the secondary reconstructed preceding frame image data Dp0 or the primary reconstructed preceding frame image data Dh0 in accordance with a threshold SH0 which can be set arbitrarily, but the processing in the reconstructed preceding frame image data generator 28 is not limited to this.
For instance, two thresholds SH0 and SH1 may be provided in the reconstructed preceding frame image data generator 28, which may be configured to output the reconstructed preceding frame image data Dq0 as follows, according to the relationships among these thresholds SH0 and SH1 and the absolute amount-of-change data |Dt1|.
The relationship between SH0 and SH1 is given by the following expression (8):
SH1>SH0  (8)
When |Dt1|<SH0,
Dq0=Dp0  (9)
When SH0≦|Dt1|≦SH1,
Dq0=Dh0×(|Dt1|−SH0)/(SH1−SH0)+Dp0×{1−(|Dt1|−SH0)/(SH1−SH0)}  (10)
When SH1<|Dt1|,
Dq0=Dh0  (11)
When the absolute amount-of-change data |Dt1| are between the thresholds SH0 and SH1, the reconstructed preceding frame image data Dq0 are calculated from the primary reconstructed preceding frame image data Dh0 and the secondary reconstructed preceding frame image data Dp0 as in equations (9) to (11). That is, the primary reconstructed preceding frame image data Dh0 and the secondary reconstructed preceding frame image data Dp0 are combined in a ratio corresponding to the position of the absolute amount-of-change data |Dt1| in the range between threshold SH0 and threshold SH1 (calculated by adding their values multiplied by coefficients corresponding to closeness to the thresholds) and output as the reconstructed preceding frame image data Dq0. Accordingly, a step-like transition in the reconstructed preceding frame image data Dq0 can be avoided at the boundary between the range in which the amount of change is small enough to be treated as no change and the range that is treated as a large change; near this boundary, the processing is carried out as a compromise between the processing for no change and the processing for a large change.
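A sketch of this blended selection is shown below; it is illustrative only, the threshold values are placeholders, and |Dt1| is assumed to be passed in already as an absolute value.

    #include <stdint.h>

    #define SH0 2   /* placeholder thresholds; SH1 > SH0 per expression (8) */
    #define SH1 6

    /* Reconstructed preceding frame image data generator 28, equations
     * (9)-(11): below SH0 the secondary data Dp0 are used, above SH1 the
     * primary data Dh0, and in between the two are mixed according to the
     * position of |Dt1| in the interval. */
    static uint8_t reconstruct_dq0(int dt1_abs, uint8_t dh0, uint8_t dp0)
    {
        if (dt1_abs < SH0)
            return dp0;                                    /* equation (9)  */
        if (dt1_abs > SH1)
            return dh0;                                    /* equation (11) */
        double w = (double)(dt1_abs - SH0) / (SH1 - SH0);  /* equation (10) */
        return (uint8_t)(dh0 * w + dp0 * (1.0 - w) + 0.5);
    }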
The quantizing unit used in the fifth embodiment can be realized with a simpler circuit than the encoding unit in the first embodiment, so the structure of the image data processing circuit in the fifth embodiment can be simplified.
Modifications can be made to the fifth embodiment similar to the modifications to the first embodiment that were described with reference to the second to fourth embodiments. In particular, lookup tables can be used as described in the second and third embodiments, and bit reduction and interpolation are possible as described in the fourth embodiment.
Data compression was carried out by encoding in the first to fourth embodiments and by quantization in the fifth embodiment, but data compression can also be carried out by other methods.
Those skilled in the art will recognize that further variations are possible within the scope of the invention, which is defined by the appended claims.

Claims (20)

1. An image data processing method for determining a voltage applied to a liquid crystal in a liquid crystal display device based on image data representing a plurality of frame images successively displayed on the liquid crystal display device, comprising:
generating primary reconstructed preceding frame image data representing an image of a preceding frame by compressing current frame image data representing an image of a current frame, delaying the compressed image data by one frame interval, and decompressing the delayed image data;
calculating an amount of change between the image of the current frame and the image of the preceding frame;
generating secondary reconstructed preceding frame image data representing the image of the preceding frame, based on the current frame image data and said amount of change;
generating reconstructed preceding frame image data representing the image of the preceding frame, based on an absolute value of said amount of change, the primary reconstructed preceding frame image data, and the secondary reconstructed preceding frame image data; and
generating compensated image data having compensated values representing the image of the current frame, based on the current frame image data and the reconstructed preceding frame image data.
5. The image data processing method according to claim 1, wherein generating the reconstructed preceding frame image data comprises:
selecting the primary reconstructed preceding frame image data as the reconstructed preceding frame image data when the absolute value of said amount of change is larger than a first predetermined threshold;
selecting the secondary reconstructed preceding frame image data as the reconstructed preceding frame image data when the absolute value of said amount of change is smaller than a second predetermined threshold which is smaller than the first threshold; and
combining the primary reconstructed preceding frame image data and the secondary reconstructed preceding frame image data in proportion to distances of said amount of change from the first threshold and the second threshold, when said amount of change is between the first threshold and the second threshold.
8. An image data processing circuit for determining a voltage applied to a liquid crystal in a liquid crystal display device based on image data representing a plurality of frame images successively displayed on the liquid crystal display device, comprising:
a primary preceding frame image data reconstructor for generating primary reconstructed preceding frame image data representing an image of a preceding frame by compressing current frame image data representing an image of a current frame, delaying the compressed image data by one frame interval, and decompressing the delayed image data;
an amount-of-change calculation circuit for calculating an amount of change between the image of the current frame and the image of the preceding frame;
a secondary preceding frame image data reconstructor for generating secondary reconstructed preceding frame image data representing an image of the preceding frame, based on the current frame image data and said amount of change;
a reconstructed preceding frame image data generator for generating reconstructed preceding frame image data representing an image of the preceding frame, based on an absolute value of said amount of change, the primary reconstructed preceding frame image data, and the secondary reconstructed preceding frame image data; and
a compensated image data generator for generating compensated image data having compensated values representing the image of the current frame, based on the current frame image data and the reconstructed preceding frame image data.
12. The image data processing circuit according to claim 8, wherein the reconstructed preceding frame image data generator
selects the primary reconstructed preceding frame image data as the reconstructed preceding frame image data when the absolute value of said amount of change is larger than a first predetermined threshold;
selects the secondary reconstructed preceding frame image data as the reconstructed preceding frame image data when the absolute value of said amount of change is smaller than a second predetermined threshold which is smaller than the first threshold; and
combines the primary reconstructed preceding frame image data and the secondary reconstructed preceding frame image data in proportion to distances of said amount of change from the first threshold and the second threshold, when said amount of change is between the first threshold and the second threshold.
US10/797,1542003-03-272004-03-11Image data processing method, and image data processing circuitActive2027-02-02US7403183B2 (en)

Applications Claiming Priority (4)

Application NumberPriority DateFiling DateTitle
JP20030876172003-03-27
JP2003-0876172003-03-27
JP2003319342AJP3594589B2 (en)2003-03-272003-09-11 Liquid crystal driving image processing circuit, liquid crystal display device, and liquid crystal driving image processing method
JP2003-3193422003-09-11

Publications (2)

Publication NumberPublication Date
US20040189565A1 US20040189565A1 (en)2004-09-30
US7403183B2true US7403183B2 (en)2008-07-22

Family

ID=32993043

Family Applications (1)

Application NumberTitlePriority DateFiling Date
US10/797,154Active2027-02-02US7403183B2 (en)2003-03-272004-03-11Image data processing method, and image data processing circuit

Country Status (5)

CountryLink
US (1)US7403183B2 (en)
JP (1)JP3594589B2 (en)
KR (1)KR100539857B1 (en)
CN (1)CN1265627C (en)
TW (1)TWI232680B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20050253876A1 (en)*2004-05-112005-11-17Au Optronics Corp.Method and apparatus of dynamic frame presentation improvement for liquid crystal display
US20060022928A1 (en)*2004-07-292006-02-02Jinoh KimCapacitive load charge-discharge device and liquid crystal display device having the same
US20070154065A1 (en)*2004-06-152007-07-05Ntt Docomo, Inc.Apparatus and method for generating a transmit frame
US20080019598A1 (en)*2006-07-182008-01-24Mitsubishi Electric CorporationImage processing apparatus and method, and image coding apparatus and method
US20080174612A1 (en)*2005-03-102008-07-24Mitsubishi Electric CorporationImage Processor, Image Processing Method, and Image Display Device
US20080260268A1 (en)*2004-06-102008-10-23Jun SomeyaLiquid-Crystal-Driving Image Processing Circuit, Liquid-Crystal-Driving Image Processing Method, and Liquid Crystal Display Apparatus
US20080294816A1 (en)*2007-05-222008-11-27Nec Electronics CorporationImage processing apparatus for reading compressed data from and writing to memory via data bus and image processing method
US20090021499A1 (en)*2007-07-162009-01-22Novatek Microelectronics Corp.Display driving apparatus and method thereof
US20090079769A1 (en)*2007-09-252009-03-26Seiko Epson CorporationDriving method, driving circuit, electro-optical device, and electronic apparatus
US20100214488A1 (en)*2007-08-062010-08-26Thine Electronics, Inc.Image signal processing device
US20100220938A1 (en)*2007-08-062010-09-02Thine Electonics, Inc.Image signal processing device

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
JP2006047767A (en)*2004-08-052006-02-16Toshiba Corp Information processing apparatus and video data luminance control method
KR101106439B1 (en)*2004-12-302012-01-18엘지디스플레이 주식회사 Image modulation device, modulation method thereof, liquid crystal display device having same and driving method thereof
JP4144598B2 (en)*2005-01-282008-09-03三菱電機株式会社 Image processing apparatus, image processing method, image encoding apparatus, image encoding method, and image display apparatus
JP4770290B2 (en)*2005-06-282011-09-14パナソニック株式会社 Liquid crystal display
JP2007292900A (en)*2006-04-242007-11-08Hitachi Displays Ltd Display device
JP5095181B2 (en)*2006-11-172012-12-12シャープ株式会社 Image processing apparatus, liquid crystal display apparatus, and control method of image processing apparatus
JP2008298926A (en)*2007-05-302008-12-11Nippon Seiki Co LtdDisplay device
JP5010391B2 (en)*2007-08-172012-08-29ザインエレクトロニクス株式会社 Image signal processing device
US20090153743A1 (en)*2007-12-182009-06-18Sony CorporationImage processing device, image display system, image processing method and program therefor
JP2009157169A (en)*2007-12-272009-07-16Casio Comput Co Ltd Display device
US7675805B2 (en)2008-01-042010-03-09Spansion LlcTable lookup voltage compensation for memory cells
KR20100025095A (en)*2008-08-272010-03-09삼성전자주식회사Method for compensating image data, compensating apparatus for performing the method and display device having the compensating apparatus
TWI395192B (en)*2009-03-182013-05-01Hannstar Display CorpPixel data preprocessing circuit and method
TWI493959B (en)*2009-05-072015-07-21Mstar Semiconductor IncImage processing system and image processing method
US20110063312A1 (en)*2009-09-112011-03-17Sunkwang HongEnhancing Picture Quality of a Display Using Response Time Compensation
KR101232086B1 (en)*2010-10-082013-02-08엘지디스플레이 주식회사Liquid crystal display and local dimming control method of thereof
JP5255045B2 (en)*2010-12-012013-08-07シャープ株式会社 Image processing apparatus and image processing method
JP2012137628A (en)*2010-12-272012-07-19Panasonic Liquid Crystal Display Co LtdDisplay device and image viewing system
KR101866389B1 (en)*2011-05-272018-06-12엘지디스플레이 주식회사Liquid crystal display device and method for driving the same
KR101910110B1 (en)2011-09-262018-12-31삼성디스플레이 주식회사Display device and driving method thereof
KR101920885B1 (en)2011-09-292018-11-22삼성디스플레이 주식회사Display device and driving method thereof
JP5998982B2 (en)*2013-02-252016-09-28株式会社Jvcケンウッド Video signal processing apparatus and method
KR102139693B1 (en)*2013-11-182020-07-31삼성디스플레이 주식회사Method of controlling luminance, luminance control unit, and organic light emitting display device having the same
KR102379182B1 (en)*2015-11-202022-03-24삼성전자주식회사Apparatus and method for continuous data compression
JP6702602B2 (en)*2016-08-252020-06-03Necディスプレイソリューションズ株式会社 Self image diagnostic method, self image diagnostic program, display device, and self image diagnostic system
JP2019184670A (en)*2018-04-032019-10-24シャープ株式会社Image processing apparatus and display
CN109036290B (en)*2018-09-042021-01-26京东方科技集团股份有限公司 Pixel driving circuit, driving method and display device
US12088223B2 (en)*2019-07-082024-09-10Tektronix, Inc.DQ0 and inverse DQ0 transformation for three-phase inverter, motor and drive design
KR102759787B1 (en)*2021-01-062025-02-03삼성전자주식회사Display apparatus and control method thereof
KR20230082730A (en)*2021-12-012023-06-09삼성디스플레이 주식회사Application processor and display device using the same
CN114245048B (en)*2021-12-272023-07-25上海集成电路装备材料产业创新中心有限公司Signal transmission circuit and image sensor
CN115731862B (en)*2022-11-212025-03-07京东方科技集团股份有限公司 Display method, display panel and display device

Citations (18)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US5345268A (en)*1991-11-051994-09-06Matsushita Electric Industrial Co., Ltd.Standard screen image and wide screen image selective receiving and encoding apparatus
JPH0981083A (en)1995-09-131997-03-28Toshiba Corp Display device
JP2616652B2 (en)1993-02-251997-06-04カシオ計算機株式会社 Liquid crystal driving method and liquid crystal display device
US5841475A (en)*1994-10-281998-11-24Kabushiki Kaisha ToshibaImage decoding with dedicated bidirectional picture storage and reduced memory requirements
US5909513A (en)*1995-11-091999-06-01Utah State UniversityBit allocation for sequence image compression
US5953488A (en)*1995-05-311999-09-14Sony CorporationMethod of and system for recording image information and method of and system for encoding image information
JP3041951B2 (en)1990-11-302000-05-15カシオ計算機株式会社 LCD drive system
US6091389A (en)*1992-07-312000-07-18Canon Kabushiki KaishaDisplay controlling apparatus
US20020024481A1 (en)*2000-07-062002-02-28Kazuyoshi KawabeDisplay device for displaying video data
US20020033813A1 (en)*2000-09-212002-03-21Advanced Display Inc.Display apparatus and driving method therefor
US20020050965A1 (en)*2000-10-272002-05-02Mitsubishi Denki Kabushiki KaishaDriving circuit and driving method for LCD
US20020126080A1 (en)*2001-03-092002-09-12Willis Donald HenryReducing sparkle artifacts with low brightness processing
US20020140652A1 (en)*2001-03-292002-10-03Fujitsu LimitedLiquid crystal display control circuit that performs drive compensation for high- speed response
US20030080983A1 (en)*2001-10-312003-05-01Jun SomeyaLiquid-crystal driving circuit and method
JP3470095B2 (en)2000-09-132003-11-25株式会社アドバンスト・ディスプレイ Liquid crystal display device and its driving circuit device
US20030231158A1 (en)*2002-06-142003-12-18Jun SomeyaImage data processing device used for improving response speed of liquid crystal display panel
US20040160617A1 (en)*2003-02-132004-08-19Noritaka OkudaCorrection data output device, frame data correction device, frame data display device, correction data correcting method, frame data correcting method, and frame data displaying method
US20080019598A1 (en)*2006-07-182008-01-24Mitsubishi Electric CorporationImage processing apparatus and method, and image coding apparatus and method

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
JP3041951B2 (en)1990-11-302000-05-15カシオ計算機株式会社 LCD drive system
US5345268A (en)*1991-11-051994-09-06Matsushita Electric Industrial Co., Ltd.Standard screen image and wide screen image selective receiving and encoding apparatus
US6091389A (en)*1992-07-312000-07-18Canon Kabushiki KaishaDisplay controlling apparatus
JP2616652B2 (en)1993-02-251997-06-04カシオ計算機株式会社 Liquid crystal driving method and liquid crystal display device
US5841475A (en)*1994-10-281998-11-24Kabushiki Kaisha ToshibaImage decoding with dedicated bidirectional picture storage and reduced memory requirements
US5953488A (en)*1995-05-311999-09-14Sony CorporationMethod of and system for recording image information and method of and system for encoding image information
JPH0981083A (en)1995-09-131997-03-28Toshiba Corp Display device
US5909513A (en)*1995-11-091999-06-01Utah State UniversityBit allocation for sequence image compression
US20020024481A1 (en)*2000-07-062002-02-28Kazuyoshi KawabeDisplay device for displaying video data
JP3470095B2 (en)2000-09-132003-11-25株式会社アドバンスト・ディスプレイ Liquid crystal display device and its driving circuit device
US6943763B2 (en)*2000-09-132005-09-13Advanced Display Inc.Liquid crystal display device and drive circuit device for
US20020033813A1 (en)*2000-09-212002-03-21Advanced Display Inc.Display apparatus and driving method therefor
US20020050965A1 (en)*2000-10-272002-05-02Mitsubishi Denki Kabushiki KaishaDriving circuit and driving method for LCD
JP2002202763A (en)2000-10-272002-07-19Mitsubishi Electric Corp Driving circuit and driving method for liquid crystal display device
US20020126080A1 (en)*2001-03-092002-09-12Willis Donald HenryReducing sparkle artifacts with low brightness processing
US20020140652A1 (en)*2001-03-292002-10-03Fujitsu LimitedLiquid crystal display control circuit that performs drive compensation for high- speed response
US20030080983A1 (en)*2001-10-312003-05-01Jun SomeyaLiquid-crystal driving circuit and method
US6756955B2 (en)*2001-10-312004-06-29Mitsubishi Denki Kabushiki KaishaLiquid-crystal driving circuit and method
US20040217930A1 (en)*2001-10-312004-11-04Mitsubishi Denki Kabushiki KaishaLiquid-crystal driving circuit and method
US7327340B2 (en)*2001-10-312008-02-05Mitsubishi Denki Kabushiki KaishaLiquid-crystal driving circuit and method
US20030231158A1 (en)*2002-06-142003-12-18Jun SomeyaImage data processing device used for improving response speed of liquid crystal display panel
US7034788B2 (en)*2002-06-142006-04-25Mitsubishi Denki Kabushiki KaishaImage data processing device used for improving response speed of liquid crystal display panel
US20040160617A1 (en)*2003-02-132004-08-19Noritaka OkudaCorrection data output device, frame data correction device, frame data display device, correction data correcting method, frame data correcting method, and frame data displaying method
US20080019598A1 (en)*2006-07-182008-01-24Mitsubishi Electric CorporationImage processing apparatus and method, and image coding apparatus and method

Cited By (23)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20050253876A1 (en)*2004-05-112005-11-17Au Optronics Corp.Method and apparatus of dynamic frame presentation improvement for liquid crystal display
US7548249B2 (en)*2004-05-112009-06-16Au Optronics Corp.Method and apparatus of dynamic frame presentation improvement for liquid crystal display
US20080260268A1 (en)*2004-06-102008-10-23Jun SomeyaLiquid-Crystal-Driving Image Processing Circuit, Liquid-Crystal-Driving Image Processing Method, and Liquid Crystal Display Apparatus
US8150203B2 (en)*2004-06-102012-04-03Mitsubishi Electric CorporationLiquid-crystal-driving image processing circuit, liquid-crystal-driving image processing method, and liquid crystal display apparatus
US7961974B2 (en)*2004-06-102011-06-14Mitsubishi Electric CorporationLiquid-crystal-driving image processing circuit, liquid-crystal-driving image processing method, and liquid crystal display apparatus
US20100177128A1 (en)*2004-06-102010-07-15Jun SomeyaLiquid-crystal-driving image processing circuit, liquid-crystal-driving image processing method, and liquid crystal display apparatus
US20070154065A1 (en)*2004-06-152007-07-05Ntt Docomo, Inc.Apparatus and method for generating a transmit frame
US7760661B2 (en)*2004-06-152010-07-20Ntt Docomo, Inc.Apparatus and method for generating a transmit frame
US20060022928A1 (en)*2004-07-292006-02-02Jinoh KimCapacitive load charge-discharge device and liquid crystal display device having the same
US7486286B2 (en)*2004-07-292009-02-03Sharp Kabushiki KaishaCapacitive load charge-discharge device and liquid crystal display device having the same
US8139090B2 (en)*2005-03-102012-03-20Mitsubishi Electric CorporationImage processor, image processing method, and image display device
US20080174612A1 (en)*2005-03-102008-07-24Mitsubishi Electric CorporationImage Processor, Image Processing Method, and Image Display Device
US7925111B2 (en)*2006-07-182011-04-12Mitsubishi Electric CorporationImage processing apparatus and method, and image coding apparatus and method
US20080019598A1 (en)*2006-07-182008-01-24Mitsubishi Electric CorporationImage processing apparatus and method, and image coding apparatus and method
US20080294816A1 (en)*2007-05-222008-11-27Nec Electronics CorporationImage processing apparatus for reading compressed data from and writing to memory via data bus and image processing method
US8078778B2 (en)*2007-05-222011-12-13Renesas Electronics CorporationImage processing apparatus for reading compressed data from and writing to memory via data bus and image processing method
US20090021499A1 (en)*2007-07-162009-01-22Novatek Microelectronics Corp.Display driving apparatus and method thereof
US8294695B2 (en)*2007-07-162012-10-23Novatek Microelectronics Corp.Display driving apparatus and method thereof
US20100214488A1 (en)*2007-08-062010-08-26Thine Electronics, Inc.Image signal processing device
US20100220938A1 (en)*2007-08-062010-09-02Thine Electonics, Inc.Image signal processing device
US8379997B2 (en)*2007-08-062013-02-19Thine Electronics, Inc.Image signal processing device
US20090079769A1 (en)*2007-09-252009-03-26Seiko Epson CorporationDriving method, driving circuit, electro-optical device, and electronic apparatus
US8179348B2 (en)*2007-09-252012-05-15Seiko Epson CorporationDriving method, driving circuit, electro-optical device, and electronic apparatus

Also Published As

Publication numberPublication date
TW200425734A (en)2004-11-16
TWI232680B (en)2005-05-11
KR100539857B1 (en)2005-12-28
JP3594589B2 (en)2004-12-02
CN1543205A (en)2004-11-03
JP2004310012A (en)2004-11-04
CN1265627C (en)2006-07-19
US20040189565A1 (en)2004-09-30
KR20040085007A (en)2004-10-07

Similar Documents

PublicationPublication DateTitle
US7403183B2 (en)Image data processing method, and image data processing circuit
US7327340B2 (en)Liquid-crystal driving circuit and method
US8150203B2 (en)Liquid-crystal-driving image processing circuit, liquid-crystal-driving image processing method, and liquid crystal display apparatus
US7034788B2 (en)Image data processing device used for improving response speed of liquid crystal display panel
US8285037B2 (en)Compression format and apparatus using the new compression format for temporarily storing image data in a frame memory
US7683908B2 (en)Methods and systems for adaptive image data compression
JP4169768B2 (en) Image coding apparatus, image processing apparatus, image coding method, and image processing method
US7289161B2 (en)Frame data compensation amount output device, frame data compensation device, frame data display device, and frame data compensation amount output method, frame data compensation method
US8139090B2 (en)Image processor, image processing method, and image display device
KR100917530B1 (en)Image processing device, image processing method, image coding device, image coding method and image display device
KR100896387B1 (en)Image processing apparatus and method, and image coding apparatus and method
JP3580312B2 (en) Image processing circuit for driving liquid crystal, liquid crystal display device using the same, and image processing method
Lim et al.Image Data Compression and Decompression Unit Considering Brightness Sensitivity
JP2003345318A (en) Liquid crystal driving circuit, liquid crystal driving method, and liquid crystal display device
KR20220100793A (en)Method of performing rate-distortion optimization
JP2000115692A (en) Image processing device
JPH09319730A (en)Product sum arithmetic circuit and its method

Legal Events

DateCodeTitleDescription
ASAssignment

Owner name:MITSUBISHI DENKI KABUSHIKI KAISHA, JAPAN

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SOMEYA, JUN;REEL/FRAME:015077/0706

Effective date:20040219

FEPPFee payment procedure

Free format text:PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCFInformation on status: patent grant

Free format text:PATENTED CASE

FPAYFee payment

Year of fee payment:4

FPAYFee payment

Year of fee payment:8

MAFPMaintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:12

ASAssignment

Owner name:TRIVALE TECHNOLOGIES, CALIFORNIA

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MITSUBISHI ELECTRIC CORPORATION;REEL/FRAME:057651/0234

Effective date:20210205

