CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of application Ser. No. 08/516,038 filed Aug. 17, 1995. This application claims priority from British Application No. 9417138.6 filed Aug. 23, 1994.[0001]
TECHNICAL FIELD

This invention relates to data rate conversion and particularly, though not exclusively, to video data frame/field rate conversion.[0002]
BACKGROUND OF THE INVENTION

There are various ways of compressing video information. In particular, there are three standards under which compression may be carried out: JPEG, MPEG and H.261. These are discussed, for example, in U.S. Pat. No. 5,212,742.[0003]
Video information is commonly formatted as a series of fields. The original information which is to be converted into, and displayed in, a video format may not be immediately compatible with the field rate at which the information is to be displayed. For example, a celluloid film is shot at a rate of 24 frames/sec (24 Hz) while, for example, the NTSC television system has a field rate of almost 60 Hz. The technique of increasing the frame rate of the film images to match the field rate of the television system is known as pulldown conversion.[0004]
Continuing with the above example of displaying a film in a NTSC standard format, a ‘⅔ pulldown’ conversion could be used in which each film frame is repeated for either two or three consecutive field periods at the video field repetition rate. The number of repetitions alternates so that the first frame is displayed twice in two consecutive field periods, the second frame is displayed three times in three consecutive field periods and so on. Thus, in one second twelve film frames at 24 Hz will each have been generated twice (i.e. for 24 field periods) while the other twelve film frames will each have been generated three times (i.e. for 36 field periods). The total (24+36) equals the 60 fields in one second at a 60 Hz field rate.[0005]
Pulldown instructions may be generated remotely and signalled to the video decoder associated with the displaying device, or be generated locally at the video decoder. In signalled pulldown, the encoder performs the pulldown calculations and signals specifically which frames are to be repeated, for example using the ‘repeat-first-field’ flag in the MPEG-2 video syntax. The decoder simply obeys the remotely generated instructions received.[0006]
In local pulldown, the encoder encodes the film information and transmits it to the receiving device. There is no information in the transmitted signal to tell the decoder at the receiving device how to perform the appropriate pulldown conversion (e.g. the ‘⅔ pulldown’ referred to above). The decoder must, therefore, calculate how to perform the appropriate conversion from the transmitted film frame rate to the displayed field rate.[0007]
If only pulldown conversion from the 24 Hz frame rate to a 60 Hz field rate were required, the single ⅔ pulldown conversion would be relatively easy to implement. However, other pulldown schemes are required. For example, the 24 Hz film frame rate may need to be converted to a 50 Hz field rate for the PAL television format.[0008]
Furthermore, an additional complexity in the NTSC television system is that the actual field rate is not 60 Hz but 60000/1001 Hz. Thus, the regular alternating ⅔ pulldown yields a field rate that is actually too high.[0009]
BRIEF DESCRIPTION OF DRAWINGS

The invention may be put into practice in a number of ways, one of which will now be described with reference to the accompanying drawings in which:[0010]
FIG. 1 illustrates Bresenham's line drawing algorithm;[0011]
FIG. 2 is a block diagram illustrating the data flow through a video decoder; and[0012]
FIGS. 3 to 6 are flow charts of various aspects of the invention.[0013]
DESCRIPTION OF INVENTION

According to the invention there is provided a method of converting frames of data received at a slower rate into fields of data generated at a faster rate, the method comprising:[0014]
determining a basic integer number of repetitions of fields in a frame period;[0015]
calculating a differential of the field repetition rate from the difference between the ratio of the faster to the slower rates and the ratio of the basic repetition number of fields in the frame period to the slower frame rate;[0016]
additionally repeating or deleting selected ones of the repeated fields, when the differential of the rate of repeating fields is substantially at variance with the calculated differential of the field repetition rate, to maintain the repetition of the fields at the faster rate.[0017]
The terms ‘frame’ and ‘field’ are used for convenience. Both are intended to refer to any frame, field, packet or other discrete quantity of data sent, received and/or constructed as a set. The invention allows the selected repetition rate to be modified by the inclusion or extraction of frames of repeated and, therefore, redundant data to fulfill the faster field data rates. Preferably, the selected basic integer repetition rate is less than the faster field rate. In this case, the method will add additionally repeated frames at the repetition rate. The repetition rate may be less than half the field rate.[0018]
The method does not have to select a slower basic integer repetition rate. In the alternative, it could equally well select a faster rate and then the method would be arranged to delete repeated frames where necessary.[0019]
The invention also extends to apparatus for converting frames of data received at a slower rate into fields of data generated at a faster rate, the apparatus comprising:[0020]
means for determining a basic integer number of repetitions of fields in a frame period;[0021]
means for calculating a differential of the field repetition rate from the difference between the ratio of the faster to the slower rates and the ratio of the basic repetition number of fields in the frame period to the slower frame rate;[0022]
means for additionally repeating or deleting selected ones of the repeated fields, when the differential of the rate of repeating fields is substantially at variance with the calculated differential of the field repetition rate, to maintain the repetition of the fields at the faster rate.[0023]
Preferably, the apparatus includes means for generating a repeat or delete frame signal for actuating the means for repeating or deleting selected ones of the repeated frames.[0024]
The present invention provides a generalized solution to the pulldown calculations that allow data at 23.98 Hz, 24 Hz and 25 Hz frame rates to be displayed at a 50 Hz field rate and 23.98 Hz, 24 Hz, 25 Hz and 29.97 Hz to be displayed at a 59.94 Hz field rate.[0025]
Bresenham's line drawing algorithm is a method of drawing lines of arbitrary slope on display devices which are divided into a series of rectangular picture elements (pels). A description of Bresenham's algorithm can be found between pages 433 and 436 of the book ‘Fundamentals of Interactive Computer Graphics’ by Foley et al., published by Addison-Wesley.[0026]
In the line-drawing case illustrated in FIG. 1 (for lines that have a slope between 0 and 1) the algorithm approximates the desired line by deciding, for each co-ordinate in the X axis, which pel in the Y axis is closest to the line. This pel is illuminated or colored in as appropriate for the application.[0027]
As the algorithm moves from left to right in the diagram from say (n−1) to (n) it decides whether to select the pel in the same Y coordinate as for (n−1) or whether to increment the Y coordinate. In the diagram the Y coordinate is incremented at (n) and (n+2) but not at (n+1).[0028]
In its application to the current invention, the decision of whether or not to increment the Y coordinate becomes the decision of whether to display the current frame for 3 field periods rather than the normal 2 field periods when deriving a faster field rate from an incoming frame rate in a video decoder.[0029]
In the simple case of conversion from a 24 Hz frame rate to a 60 Hz field rate, the desired speed-up ratio is 60/24. However, the important decision is made in determining whether or not a frame is displayed for three field periods (rather than two field periods) in a frame period. If there were no three-field-period frames then the 24 Hz frame rate would yield 48 fields. Thus, the ratio of the number of twice-repeated fields to the number of frames can be subtracted from the speed-up ratio:

60/24 − 48/24 = 12/24 = ½ (equation 1)[0030]

Plotting a line with slope ½ will then allow us to calculate the pulldown pattern. Clearly, for a line of slope ½ the Y coordinate is incremented once for every other step of the X coordinate. This is the expected result since we know that we display alternate film frames for 3 field times in order to perform ⅔ pulldown.[0031]
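As an illustrative sketch of this slope-½ case (not part of the original disclosure, and assuming the conventional Bresenham starting value d = 2·dy − dx), the following C fragment reports the Y coordinate chosen at each X step:

    #include <stdio.h>

    int main(void)
    {
        int dx = 2, dy = 1;            /* slope dy/dx = 1/2 for 24 Hz -> 60 Hz */
        int incr1 = 2 * dy;            /* step applied when Y is not incremented */
        int incr2 = 2 * (dy - dx);     /* step applied when Y is incremented */
        int d = 2 * dy - dx;           /* conventional starting decision value */
        int x, y = 0;

        for (x = 1; x <= 8; x++)
        {
            if (d < 0)
                d += incr1;            /* keep the same Y pel */
            else
            {
                y++;                   /* increment Y: in pulldown terms, a 3-field frame */
                d += incr2;
            }
            printf("x = %d, y = %d\n", x, y);
        }
        return 0;
    }

Running it shows y advancing on every other step, i.e. the alternating ⅔ pulldown cadence described above.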
Our U.S. patent application No. 9,415,413.5 filed on Jul. 29, 1994 entitled ‘Method and Apparatus for Video Decompression’ describes a multi-standard video decoder and is incorporated herein by reference. The present invention can be implemented in relation to this decoder receiving the MPEG-2 standard.[0032]
Referring to FIG. 2, in a preferred embodiment of the decoder described in the above patent application coded MPEG data (MPEG-1 or MPEG-2) is transferred into the device via a coded data input circuit 200. This data is then transferred via signals 202 to the Start Code Detector (SCD) 204. The SCD 204 recognizes a number of start codes, which are unique patterns of bits; these are replaced by corresponding Tokens that may easily be recognized by subsequent circuitry. The remainder of the data (other than the start codes) is carried by a DATA Token. This stream of “start code” and DATA Tokens is transferred via signals 206 to formatting circuitry 208 that arranges the data into a suitable format for storage in external memory. This data is transferred via signals 210 to the Synchronous Dynamic Random Access Memory (SDRAM) interface circuitry 212.[0033]
The SDRAM interface circuitry 212 deals with a number of streams of data which are multiplexed over a single set of interface signals 230 in order that they may be written to or read from the external SDRAM device (or devices) 228. In each case data is temporarily stored in a swing buffer (214, 216, 218, 220, 222 and 224), each comprising two separate RAM arrays. Addresses for the SDRAM are generated by the address generator 330 and transferred via signals 332 to the SDRAM interface circuitry 212 where they are further processed by the DRAM interface controller 226 before being applied via the SDRAM interface 230 to the external SDRAM 228. The address generation is such that a coded data buffer 234 and a number of framestores 232 are maintained in the external SDRAM.[0034]
The formatted stream of “start code” tokens and DATA tokens mentioned previously is transferred to the SDRAM interface circuitry 212 via the signals 210, where it is stored temporarily in the swing buffer 214. This data is written into the area of the external SDRAM 228 that comprises a Coded Data Buffer (CDB) 234. This buffer has the function of a FIFO (First In, First Out) in that the order of the data is maintained. Data returning from the CDB 234 is stored temporarily in the swing buffer 216 before leaving the SDRAM interface circuitry via signals 236. The data on the signals 236 is the same as that on the signals 210, except that it has been delayed by a (variable) time in the CDB 234.[0035]
The data returning from the CDB is unformatted in the circuitry 238, which undoes the formatting, suitable for storage in the external SDRAM, previously performed by the formatter 208. It should however be noted that there is no restriction that the bus width of the signals 206 be the same as the signals 240. In the preferred embodiment a wider bus width is used by the signals 240 in order that a higher instantaneous data bandwidth may be supported at this point than by the signals 206.[0036]
The data (still comprising the “start code” tokens and the remainder of the data carried as DATA tokens) is passed via the signals 240 to the video parser circuitry 242. This circuitry as a whole has the task of further processing the coded video data. In particular, the structure of the video data is “parsed” in order that its component parts are identified. The video parser comprises a Microprogrammed State Machine (MSM) 244 which has a stored program. Instructions are passed via signals 250 to a Huffman decoder 246. Some parts of the instruction are interpreted by the Huffman decoder 246. The remainder of the instruction, together with the data produced by the Huffman decoder, is transferred via signals 255 to an Arithmetic and Logic Unit (ALU) 248. Here again some parts of the instruction are interpreted by the ALU itself whilst the remainder of the instruction and the data produced by the ALU are transferred via signals 256 to a Token Formatter 258. The Huffman decoder 246 can signal error conditions to the MSM 244 via the signals 252. The ALU 248 may feed back condition codes to the MSM 244 via signals 254. This enables the MSM to perform a “JUMP” instruction that is conditional on data being processed in the ALU 248. The ALU includes within it a register file in order that selected information may also be stored. The “start code” tokens effectively announce the type of data (contained in the DATA tokens) that follows. This allows the MSM to decide which instruction sequence to follow to decode the data. In addition to this gross decision based on the Tokens derived from the start codes, the finer structure of the video data is followed by the mechanism, previously described, of storing the information that defines the structure of the video data in the register file within the ALU and using this to perform conditional “JUMP” instructions, depending on the value of the decoded data, to choose alternative sequences of instructions to decode the precise sequence of symbols in the coded data.[0037]
The decoded data, together with the remaining instruction bits (that are not used by the Huffman Decoder), is passed via signals 256 to the Token Formatter 258. The data is formatted, in response to the instruction bits, into Tokens which can be recognized by subsequent processing stages. The resulting tokens are transferred to three separate destinations via the signals 260, 262 and 264. One stream of Tokens 262 passes to the Address Generator 330 where it is interpreted to generate suitable addresses to maintain the Coded Data Buffer and the framestores, as previously described. The second stream of Tokens 264 is interpreted by Video Timing Generation circuitry 326 in order to control certain aspects of the final display of decoded video information. A third stream of tokens 260 is passed to the Inverse Modeller 266 and on to subsequent processing circuitry. It should be understood that, whilst each of the three streams of tokens (260, 262 and 264) is identical, the information that is extracted is different in each case. Those Tokens that are irrelevant to the functioning of the specific circuitry are discarded. The tokens that are usefully interpreted in the cases of streams 262 and 264 are essentially control information while those usefully interpreted in the circuitry connected to the stream 260 may more usefully be considered as data.[0038]
The Inverse Modeller 266 has the task of expanding runs of zero coefficients present in the data so that the resulting data consists of blocks of data with precisely 64 coefficients; this is transferred via signals 268 to the Inverse Zig-Zag circuit 270. This circuit re-orders the stream of data according to one of two predefined patterns and results in data that might be considered two-dimensional. The Inverse Zig-Zag circuit includes a small Random Access Memory (RAM) 272 in which data is temporarily stored whilst being reordered. The resulting data is transferred via signals 274 to the Inverse Quantiser 276. Here the coefficients are unquantized and returned to their proper numerical value in preparation for an Inverse Discrete Cosine Transform (DCT) function. The Inverse DCT is a separable transform so that it must be applied twice, once in a vertical direction and once in a horizontal direction. In this embodiment a single one-dimensional Inverse DCT function is used twice to perform the full two-dimensional transform. The data first enters an Inverse DCT circuit 280 via signals 278. The resulting data is transferred via signals 284 and stored in a Transpose RAM 282. The data is read out of the transpose RAM, but in a different order to that in which it was written, in order that the data is transposed (i.e. rows and columns are swapped). This transposed data is transferred via signals 286 to the Inverse DCT 280 where it is processed a second time, the data resulting from this second transform being transferred via signals 288 to Field/Frame circuitry 290.[0039]
The Field/Frame circuitry again reorders data in certain cases such that the data that is transferred via signals 294 is in the same organization (Field or Frame) as that read as prediction data from the framestores in the external SDRAM. The Field/Frame circuitry 290 stores data temporarily in a RAM 292 for the purpose of this reordering.[0040]
Prediction data is read from the framestores that are maintained, as previously described, in the external SDRAM. Predictions are read via two paths (one nominally for “forward predictions” and the other nominally for “backwards predictions”, although this is not strictly adhered to). One path comprises the swing buffer 222 and signals 296 whilst the other comprises the swing buffer 224 and signals 298. The data is filtered by the Prediction Filters 300 where the two predictions (“forward” and “backward”) may be averaged if required by the particular prediction mode indicated for that data. The resulting prediction is transferred via signals 302 to a prediction adder 304 where it is added to the data transferred from the Field/Frame circuitry via the signals 294. The resulting decoded picture information is written back into a third framestore via signals 306 and the swing buffer 220.[0041]
In order to produce a video signal the decoded information is read from the SDRAM via the swing buffer 218 and is then transferred via one of two signal paths. The chrominance data is transferred via signals 308 to a vertical upsampler 312 which up-samples the data so that there are the same number of scan lines as used for the luminance signal. The vertical upsampler 312 stores one scan line of each of the two chrominance signals in the line store 314. The two resulting chrominance signals (the blue color difference signal and the red color difference signal) are transferred via signals 316 and 318 to a Horizontal upsampler 320. The luminance signal (which did not require vertical upsampling) is also transferred via signals 310 to the horizontal upsampler. The horizontal upsampler 320 has the task of resampling the data by one of a number of preset scale factors to produce a suitable number of pels for the final scan line. The scale factor is selected via signals 324 which are provided by the Video Timing Generation (VTG) circuitry 326. This information is simply extracted from one of the Tokens supplied to the VTG via signals 264.[0042]
The data produced by the horizontal upsampler is transferred via signals 322 to an Output Multiplex 327. This multiplexes the actual video data signal arriving via the signals 322 with synchronization, blanking and border information produced internally by the output multiplex in response to timing signals 328 generated by the Video Timing Generator (VTG) circuitry 326. In order for the correct timing signals to be generated, particularly in the aspect of generating the correct amount of border information, the VTG uses information transferred in Tokens via the signals 264.[0043]
The final resulting video signal, together with a number of strobe, synchronization and blanking signals, is transferred via signals 334 to a video output interface 336. The video signal may then be transferred to some suitable video display device.[0044]
A number of other interfaces are provided. A microprocessor interface 340 enables an external microprocessor to be connected to the signals 338. Signals 342 connect to many of the blocks of circuitry, allowing the current status of the video decoding device to be read by the external microprocessor. In addition, certain features may be controlled by the external microprocessor writing to various control registers via this interface.[0045]
A JTAG (Joint Test Action Group) interface 346 allows various aspects of the device to be controlled via an external device connected to signals 344. The JTAG interface 346 is often used only for printed circuit board testing (after assembly), in which it is only necessary to control the external signals of the video decoding device. In this embodiment additional test capability is provided and for this reason the JTAG interface 346 is connected via signals 348 to all blocks of circuitry.[0046]
Circuitry 352 is provided for the generation and distribution of clock signals 354 from the external clock signals 350. This includes various Phase Locked Loops (PLLs) that enable higher speed internal clocks to be generated from external lower speed clocks.[0047]
In the context of the video decoder described in the above patent application the display rate is known because of a configuration pin (NTSC/PAL) which indicates whether a 59.94 Hz or 50 Hz display raster is being produced. The film frame rate is transmitted in the MPEG-2 video stream as the frame-rate parameter.[0048]
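As a sketch only (the table and function names below are illustrative, not the decoder's actual registers), the frame-rate parameter can be thought of as the standard MPEG-2 frame_rate_code indexing a table of rates, using the same convention as the appendix program that a value greater than 1000 stands for that value divided by 1001, while the NTSC/PAL pin fixes the display field rate:

    #include <stdio.h>

    /* illustrative mapping of the MPEG-2 frame_rate_code to a rate value;
       values above 1000 stand for that number divided by 1001             */
    static const int frame_rate_table[9] = {
        -1,      /* 0: forbidden       */
        24000,   /* 1: 24000/1001 Hz   */
        24,      /* 2: 24 Hz           */
        25,      /* 3: 25 Hz           */
        30000,   /* 4: 30000/1001 Hz   */
        30,      /* 5: 30 Hz           */
        50,      /* 6: 50 Hz           */
        60000,   /* 7: 60000/1001 Hz   */
        60       /* 8: 60 Hz           */
    };

    /* the NTSC/PAL configuration pin selects the display field rate */
    static int display_field_rate(int ntsc_pin)
    {
        return ntsc_pin ? 60000 : 50;   /* 60000/1001 Hz or 50 Hz */
    }

    int main(void)
    {
        int frame_rate_code = 2;        /* e.g. 24 Hz film */
        printf("frame rate %d, field rate %d\n",
               frame_rate_table[frame_rate_code], display_field_rate(1));
        return 0;
    }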
In the normal course of events any progressive frame will be displayed for two field times. A bit in the PICTURE-TYPE token controls the repeating of the first field to make the frame display for three field times.
[0049]
| TABLE 1 |
| E | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| 0 | d | x | f | p | s | s | t | t |
The ‘f’ bit is set to ‘1’ to repeat the first field. In the case that the sequence is interlaced, the ‘f’ bit in this token directly matches the ‘repeat-first-field’ bit in the MPEG-2 sequence. (This is the signalled pulldown case). However, in the case that ‘progressive-sequence’ (as the term is defined in the MPEG standard) is ‘one’, indicating that the sequence is coded as a progressive sequence, local pulldown is enabled and the ‘f’ bit is calculated according to the algorithm described herein.[0050]
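A minimal sketch of this selection between signalled and local pulldown (the function name choose_f_bit and the parameter local_pulldown_decision are illustrative only; progressive_sequence and repeat_first_field are the MPEG-2 syntax elements referred to above):

    #include <stdbool.h>

    /* illustrative only: choose the 'f' (repeat-first-field) bit for one frame */
    bool choose_f_bit(bool progressive_sequence,
                      bool repeat_first_field,        /* value signalled in the bitstream */
                      bool local_pulldown_decision)   /* result of the algorithm described here */
    {
        if (progressive_sequence)
            return local_pulldown_decision;  /* local pulldown: the decoder computes the repeat */
        else
            return repeat_first_field;       /* signalled pulldown: obey the encoder */
    }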
The algorithm is executed on the Microprogrammed State Machine (MSM) and is therefore specified in microcode (rather than the more familiar ‘C’ program that illustrates the algorithm in the appendix hereto). The MSM is a 16-bit machine and this causes some minor complications because of the limited number range that can be represented in 16 bits.[0051]
This is dealt with by reducing the size of the denominator and numerator of the slope by common factors. In the example program given at the end of this document this is done by cancelling any common factor of 1001 and then dividing by 2 until either the numerator or the denominator is odd. Even this simple case yields numbers, dx and dy, which will not exceed the 16-bit number range, as indicated by the ‘min’ and ‘max’ values shown in the results. In the said video decoder, the numbers dx and dy are precalculated and stored in tables that are indexed to determine the correct dx and dy values. As a result, the ratios can be further reduced to the smallest possible numerator and denominator, as shown below:
[0052]
| TABLE 2 |
| Frame Rate | Display Field Rate = 50 Hz | Display Field Rate = 60000/1001 Hz |
| (Hz) | Full Form | Reduced | Full Form | Reduced |
| 24000/1001 | | | | |
| 24 | | | | |
| 25 | | 0 | | |
| 30000/1001 | Not Supported | | | 0 |
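As a rough illustration of the reduction just described (a sketch, not the decoder microcode; it follows the appendix convention that a rate larger than 1000 stands for that value divided by 1001, and it assumes the frame rate is no more than half the field rate):

    #include <stdio.h>

    /* set up the slope dy/dx for a given field and frame rate, then reduce it */
    static void slope_for_rates(int field_rate, int frame_rate, int *dx, int *dy)
    {
        *dy = field_rate;
        *dx = frame_rate;

        /* put both rates over the same 1001 denominator unless both already
           carry it, in which case the factor simply cancels                  */
        if ((field_rate <= 1000) || (frame_rate <= 1000))
        {
            if (field_rate > 1000)
                *dx *= 1001;
            if (frame_rate > 1000)
                *dy *= 1001;
        }

        *dy -= 2 * (*dx);   /* subtract the basic two fields per frame period */

        /* halve until either term is odd so both stay within 16-bit working range */
        while (((*dx & 1) == 0) && ((*dy & 1) == 0))
        {
            *dx /= 2;
            *dy /= 2;
        }
    }

    int main(void)
    {
        int dx, dy;
        slope_for_rates(50, 24, &dx, &dy);
        printf("dx = %d, dy = %d\n", dx, dy);   /* prints dx = 12, dy = 1 */
        return 0;
    }

For example, 24 Hz frames at a 50 Hz field rate reduce to dy = 1 and dx = 12, in agreement with the program output reproduced at the end of the appendix.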
The variable “d” is a decision variable which is updated at each x coordinate (or each film frame). At each x coordinate the ideal value of y (represented by the line in FIG. 1) lies between two pels (one black and one white). d is proportional to the difference between the distance to the upper pel and the distance to the lower pel.[0053]
If d is negative then the ideal line lies closer to the lower pel.[0054]
If d is positive then the ideal line lies closer to the upper pel.[0055]
At each x coordinate the algorithm must choose either the lower or upper pel and then update the value of d in readiness for the next x coordinate (next frame).[0056]
If d is negative, the lower pel is chosen. d is updated by adding on incr1. Since incr1 is positive, d will become less negative, reflecting the fact that the line will now be farther from the lower pel (at the next x coordinate).[0057]
If d is positive, the upper pel is chosen. d is updated by adding on incr2. Since incr2 is negative, d will become less positive, reflecting the fact that the line will now be farther from the upper pel (at the next x coordinate).[0058]
incr1 and incr2 therefore represent the change in d (i.e., the change in the difference between the distance from the ideal notional line to the upper pel and the distance from the line to the lower pel) for the two possible decisions that the algorithm may take. Thus, having chosen a basic integer value of repetitions of the field in a frame period, the notional slope of the notional line is determined in accordance with equation 1, from which the algorithm is used to decide whether or not to add a field repeat in a frame period so as to maintain the running average rate of the repetition of fields at the faster field rate.[0059]
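For example, for 24 Hz frames displayed at a 60 Hz field rate the slope of equation 1 is ½, so with dy = 1 and dx = 2 we have incr1 = 2·dy = 2 and incr2 = 2·(dy − dx) = −2. Taking the conventional Bresenham starting value d = 2·dy − dx = 0, the decision sequence runs d = 0 (three fields), −2 (two fields), 0 (three fields), −2 (two fields) and so on, which is exactly the alternating ⅔ pulldown pattern described earlier.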
In the examples in Table 2 the basic integer value of repetitions of the field in a frame period is conveniently chosen as 2. Because of this, repetitions of fields have to be added according to the pulldown pattern to maintain the running average. However, a basic integer value resulting in an overall excess of repetitions of the fields could be chosen, such that selected field repetitions are deleted according to the pulldown pattern established.[0060]
It will also be noted that, as an alternative to storing dx and dy in the table and calculating incr1 and incr2, it would be equally valid to store precalculated values of incr1 and incr2 in the table directly.[0061]
FIG. 3 illustrates the procedure for decoding and displaying a field the appropriate number of times.[0062]
FIG. 4 shows an example algorithm to determine dx and dy from field-rate and frame-rate. In this example, values of field-rate and frame-rate that are larger than 1000 are interpreted as representing a multiple of 1001, e.g. a frame-rate of 24000 actually represents a frame rate of 24000/1001 Hz.[0063]
FIG. 5 shows an algorithm to initialize incr1, incr2 and d. The algorithm is used before the first frame. The values of dx and dy are integers such that the fraction dy/dx represents the “slope of the line”.[0064]
FIG. 6 shows an algorithm to determine whether to display a frame for two or three field times. The algorithm is used once for each frame. The values of incr1 and incr2 are those determined by the initialization algorithm. The value of d is that produced by this algorithm for the previous frame, or by the initialization algorithm in the case of the first frame.[0065]
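The following self-contained C sketch follows the structure of FIGS. 5 and 6 (the function names, and the choice of d = 2·dy − dx as the starting value, are assumptions made for illustration rather than a transcription of the microcode):

    #include <stdio.h>

    static int incr1, incr2, d;

    /* FIG. 5 style initialization: run once before the first frame */
    static void pulldown_init(int dx, int dy)
    {
        incr1 = 2 * dy;           /* applied when the frame gets two fields   */
        incr2 = 2 * (dy - dx);    /* applied when the frame gets three fields */
        d     = 2 * dy - dx;      /* conventional Bresenham starting value    */
    }

    /* FIG. 6 style decision: returns 1 if this frame is shown for three fields */
    static int pulldown_three_fields(void)
    {
        if (d < 0)
        {
            d += incr1;
            return 0;             /* two field periods */
        }
        d += incr2;
        return 1;                 /* three field periods */
    }

    int main(void)
    {
        int frame, fields = 0;

        pulldown_init(12, 1);     /* dx = 12, dy = 1: 24 Hz frames to a 50 Hz field rate */
        for (frame = 0; frame < 24; frame++)
            fields += pulldown_three_fields() ? 3 : 2;
        printf("fields generated from 24 frames: %d\n", fields);   /* expect 50 */
        return 0;
    }

With dx = 12 and dy = 1 (the values the appendix program prints for 24 Hz frames at a 50 Hz field rate) the loop produces exactly 50 fields from 24 frames.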
The principal advantages of this method are:[0066]
1. All of the required pulldown conversions are performed using the same arithmetic.[0067]
2. Once the parameters dx and dy are known (and these can be stored in the table) no multiplications or divisions are required.[0068]
3. The algorithm works for arbitrarily long sequences of frames—none of the numbers grow indefinitely (which would eventually lead to number representation problems irrespective of the word width).[0069]
4. The frames which have the repeated field are distributed evenly throughout the sequence of frames.[0070]
5. Very little state needs to be maintained in order for the algorithm to operate: just a current value of ‘d’ and probably incr1 and incr2 (although these could be recalculated or looked up in a table each frame period).[0071]
The following procedure is Bresenham's line drawing algorithm.
[0072]
    #include <stdio.h>
    #include <stdlib.h>

    /* Bresenham's line drawing algorithm for slopes between 0 and 1: at each
       X step the decision variable d selects whether Y is incremented        */
    void bresenham(int x1, int y1, int x2, int y2)
    {
        int dx, dy, incr1, incr2, d, x, y, xend;
        dx = abs(x2 - x1);
        dy = abs(y2 - y1);
        d = 2 * dy - dx;          /* initial decision value          */
        incr1 = 2 * dy;           /* used when Y is not incremented  */
        incr2 = 2 * (dy - dx);    /* used when Y is incremented      */
        if (x1 > x2)
        {
            x = x2; y = y2; xend = x1;
        }
        else
        {
            x = x1; y = y1; xend = x2;
        }
        printf("(%d, %d)\n", x, y);
        while (x < xend)
        {
            x++;
            if (d < 0)
                d += incr1;
            else
            {
                y++;
                d += incr2;
            }
            printf("(%d, %d)\n", x, y);
        }
    }
The following program shows the modified algorithm (as the procedure three_fields()) to calculate which frames to display for three field-times. Each possible conversion is checked out by testing over one million film frames to ensure that the field rate does indeed approach the required value.
[0073] |
|
| #include <compiler.h> |
| #include <pddtypes.h> |
| #include <stdlib.h> |
| #include <stdio.h> |
| Boolean three_fields(int dx, int dy, int *d, Boolean initialise) |
| { |
| int incr1, incr2, x, y, xend; |
| int r = False; |
| incr1 = 2 * dy; |
| incr2 = 2 * (dy - dx); |
| if (initialise) |
| { |
| } |
| double check_ratio(int dx, int dy, int limits[2]) |
| { |
| int d; |
| int frame, field = 0, num_frames =10000000; |
| double ratio, field_rate; |
| int three = 0, two = 0; |
| (void) three_fields(dx, dy, &d, True ); |
| limits[0] = limits[1] = 0; |
| if (d < limits[0]) limits[0] = d; |
| if (d > limits[1]) limits[1] = d; |
| for (frame = 0; frame < num_frames; frame ++) |
| { |
| if ( three_fields (dx, dy, &d, False ) ) |
| { |
| } |
| if (d < limits[0]) limits[0] =d; |
| if (d > limits[1]) limits[1] =d; |
| } |
| ratio = ( (double)field) / ((double) frame); |
| return ratio; |
| } |
| static int frame_rates[ ] = /* input frame rates */ |
| { |
| −1, |
| 24000, /* numbers > 1000 express a numerator; the denominator = 1001 */ |
| 24, |
| 25, |
| 30000, |
| 30, |
| 50, |
| 60000, |
| 60, |
| −1, −1, −1, −1, −1, −1, −1}; |
| static int field_rates[ ] = /* output display rates */ |
| { |
| 50, |
| 60000, /* i.e. 60000/1001, as exercised by main() below */ |
| }; |
| double real_rate (int rate) |
| { |
| return ((double) rate) / 1001.0; |
| } |
| void main (int argc, char **argv) |
| { |
| int dx, dy, field_index, frame_index; |
| int limits[2]; |
| double ratio, field_rate; |
| for (field_index = 0; field_index < 2; field_index ++) |
| { |
| for (frame_index = 1; frame_rates[frame_index] > 0; frame_index ++) |
| if ( (real_rate(frame_rates[frame_index]) * 2.0) <= |
| real_rate(field_rates[field_index]) ) |
| dy = field_rates[field_index]; |
| dx = frame_rates[frame_index]; |
| if ( (field_rates[field_index] <= 1000) || |
| (frame_rates[frame_index] <= 1000) ) |
| /* NB if both have the 1001 then don't bother! */ |
| { |
| if (field_rates[field_index] > 1000) |
| if (frame_rates[frame_index] > 1000) |
| } |
| dy -= (2 * dx); |
| /* limit ratio by dividing by two */ |
| while ( ( (dx & 1) == 0) && ( (dy & 1) == 0) ) |
| { |
| } |
| ratio = check_ratio (dx, dy, limits); |
| printf("output field rate = %d%s, input frame rate = %d%s\n", |
| field_rates[field_index], |
| ( (field_rates[field_index]> 1000) ? “/1001”:“”), |
| frame_rates[frame_index], |
| ( (frame_rates[frame_index] > 1000) ? "/1001" : "") ); |
| printf(“dx = %d, dy = %d\n”, dx, dy); |
| field_rate = frame_rates[frame_index] * ratio; |
| if (frame_rates[frame_index]> 1000) |
| printf (“ratio = %4.12g, field_rate = %4.12g\n”, ratio, |
| printf(“(field_rate = %4.12g/1001)\n”, field_rate * 1001); |
| printf("min = %d, max = %d\n\n", limits[0], limits[1]); |
| } |
| The program of the preceding pages yields the following output: |
| output field rate = 50, input frame rate = 24000/1001 |
| dx = 12000, dy = 1025 |
| ratio = 2.0854166, field_rate = 49.9999984016 |
| (field_rate = 50049.9984/1001) |
| min = −21950, max = 2000 |
| output field rate = 50, input frame rate = 24 |
| dx = 12, dy = 1 |
| ratio = 2.0833333, field_rate = 49.9999992 |
| (field_rate = 50049.9991992/1001) |
| min = −22, max = 0 |
| output field rate = 50, input frame rate = 25 |
| dx = 25, dy = 0 |
| ratio = 2, field_rate = 50 |
| (field_rate = 50050/1001) |
| min = −50, max = 0 |
| output field rate = 60000/1001, input frame rate = 24000/1001 |
| dx = 750, dy = 375 |
| ratio = 2.5, field_rate = 59.9400599401 |
| (field_rate = 60000/1001) |
| min = −750, max = 0 |
| output field rate = 60000/1001, input frame rate = 24 |
| dx = 3003, dy = 1494 |
| ratio = 2.4975024, field_rate = 59.9400576 |
| (field_rate = 59999.9976576/1001) |
| min = −3018, max = 2982 |
| output field rate = 60000/1001, input frame rate = 25 |
| dx = 25025, dy = 9950 |
| ratio = 2.3976023, field rate = 59.9400575 |
| (field_rate = 59999.9975575/1001) |
| min = −30150, max = 19850 |
| output field rate = 60000/1001, input frame rate = 30000/1001 |
| dx = 1875, dy = 0 |
| ratio = 2, field_rate = 59.9400599401 |
| (field_rate = 60000/1001) |
| min = −3750, max = 0 |
|