CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 61/808,549, entitled “IMAGING PIPELINE FOR SPECTRO-COLORIMETERS,” by Ye YIN et al., filed on Apr. 4, 2013, the contents of which are hereby incorporated herein by reference in their entirety, for all purposes.
The present disclosure is related to U.S. patent application Ser. No. 13/736,058, entitled “PARALLEL SENSING CONFIGURATION COVERS SPECTRUM AND COLORIMETRIC QUANTITIES WITH SPATIAL RESOLUTION,” by Ye Yin et al., filed on Jan. 7, 2013, the contents of which are hereby incorporated by reference in their entirety, for all purposes.
FIELD OF THE DESCRIBED EMBODIMENTS

The described embodiments relate generally to methods, devices, and systems for an imaging pipeline, and more particularly to optical test equipment and methods for display testing that feature a calibration configuration including spectral and colorimetric measurements with spatial resolution.
BACKGROUND

In the field of spectro-colorimeters, calibration procedures of an imaging system for image correction and of a spectroscopic system for color correction are performed regularly. Imaging system calibration typically addresses display artifacts such as black and yellow mura, Moiré patterns, and display non-uniformity, as well as linearization and dark current correction. Conventionally, spectrometers are the typical instruments for color measurement. However, spectrometers can only measure one spot of flat uniform color, while typical imaging systems measure extended images in at least two dimensions to detect display artifacts. Using a digital camera as a color measurement device overcomes this limitation, but the performance of digital cameras in terms of accuracy, resolution, and precise color rendition is lower than that of spectrometers. A compromise is therefore made between a fast but inaccurate system using a digital camera, and a slow but highly precise system that alternates between a camera and a spectrometer.
Therefore, what is desired is a method and a system for calibration of a spectro-colorimeter that is fast and provides high color accuracy and resolution together with detailed image correction capabilities.
SUMMARY OF THE DESCRIBED EMBODIMENTS

In a first embodiment, a spectro-colorimeter system for an imaging pipeline is provided, the system including a camera system including a separating component and a camera. The separating component directs a first portion of an incident light to the camera system. The system may also include a spectrometer system having an optical channel, a slit, and a spectroscopic resolving element, the separating component directing a second portion of the incident light to the spectrometer system through the optical channel; and a controller coupling the camera system and the spectrometer system. In some embodiments the camera system is configured to provide a color image with the first portion of the incident light. Also, in some embodiments the spectrometer system is configured to provide a tristimulus signal from the second portion of the incident light. Furthermore, in some embodiments the controller is configured to correct the color image from the camera system using the tristimulus signal from the spectrometer.
In a second embodiment, an imaging pipeline method is provided, the method including providing a calibration target and receiving Red, Green, and Blue (RGB) data from a camera system. Also, the method may include receiving tristimulus (XYZ) data from a spectrometer system; providing a color correction matrix; and providing an error correction to the camera system.
In yet another embodiment, a method for color selection in an imaging pipeline calibration is provided. The method may include selecting a training sample and including the training sample in a predictor set when the training sample is not already included. The method may also include obtaining a color correction matrix using the predictor set; obtaining an error value using the color correction matrix and a plurality of test samples; and forming a set of error values from a plurality of predictor sets when no more training samples are selected. Furthermore, the method may include selecting a training sample and a predictor set from the set of error values, and providing the color correction matrix and the selected predictor set when the error value is less than a tolerance.
Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the described embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS

The described embodiments and their advantages may be better understood by reference to the following description and the accompanying drawings. These drawings do not limit any changes in form and detail that may be made to the described embodiments. Any such changes do not depart from the spirit and scope of the described embodiments.
FIG. 1 illustrates a spectro-colorimeter system for handling an imaging pipeline, according to some embodiments.
FIG. 2 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
FIG. 3 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
FIG. 4 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
FIG. 5 illustrates a flow chart including steps in an imaging pipeline calibration method, according to some embodiments.
FIG. 6A illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
FIG. 6B illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
FIG. 7A illustrates a color distribution chart for a plurality of test samples in an imaging pipeline calibration method, according to some embodiments.
FIG. 7B illustrates a color distribution chart for a plurality of test samples in an imaging pipeline calibration method, according to some embodiments.
FIG. 8 illustrates a flow chart including steps in a color selection algorithm used for an imaging pipeline calibration method, according to some embodiments.
FIG. 9A illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
FIG. 9B illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
FIG. 9C illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
FIG. 10 illustrates a color distribution chart for a plurality of test samples measured and predicted in an imaging pipeline calibration method, according to some embodiments.
FIG. 11 illustrates a camera display for a uniformity correction step of a camera system in an imaging pipeline calibration method, according to some embodiments.
FIG. 12 illustrates an error average chart in an imaging pipeline calibration method, according to some embodiments.
FIG. 13A illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
FIG. 13B illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
FIG. 14 illustrates a block diagram of a spectro-colorimeter system for handling an imaging pipeline, according to some embodiments.
In the figures, elements referred to with the same or similar reference numerals include the same or similar structure, use, or procedure, as described in the first instance of occurrence of the reference numeral.
DETAILED DESCRIPTION OF SELECTED EMBODIMENTS

Representative applications of methods and apparatus according to the present application are described in this section. These examples are being provided solely to add context and aid in the understanding of the described embodiments. It will thus be apparent to one skilled in the art that the described embodiments may be practiced without some or all of these specific details. In other instances, well known process steps have not been described in detail in order to avoid unnecessarily obscuring the described embodiments. Other applications are possible, such that the following examples should not be taken as limiting.
In the following detailed description, references are made to the accompanying drawings, which form a part of the description and in which are shown, by way of illustration, specific embodiments in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable one skilled in the art to practice the described embodiments, it is understood that these examples are not limiting; such that other embodiments may be used, and changes may be made without departing from the spirit and scope of the described embodiments.
Color measurement instruments fall into two general categories: broadband and narrowband. A broadband measurement instrument reports up to 3 color signals obtained by optically processing the input light through broadband filters. Photometers are the simplest example, providing a measurement only of the luminance of a stimulus. Photometers may be used to determine the nonlinear calibration function of displays. Densitometers are an example of broadband instruments that measure the optical density of light filtered through red, green, and blue filters. Colorimeters are another example of broadband instruments; they directly report tristimulus (XYZ) values and their derivatives, such as CIELAB (i.e., the CIE 1976 (L*, a*, b*) color space, where CIE denotes the International Commission on Illumination, from the French Commission Internationale de l'Éclairage). Under the narrowband category fall instruments that report spectral data of dimensionality significantly larger than three.
Spectrophotometers and spectroradiometers are examples of narrowband instruments. These instruments typically record spectral reflectance and radiance, respectively, within the visible spectrum in increments ranging from 1 to 10 nm, resulting in 30-200 channels. They also have the ability to internally calculate and report tristimulus coordinates from the narrowband spectral data. Spectroradiometers can measure both emissive and reflective stimuli, while spectrophotometers measure reflective stimuli. Imaging colorimeters or imaging photometers are imaging devices that behave like a camera. In some embodiments, imaging colorimeters include a time-sequential configuration or a Bayer-filter configuration. In some embodiments the time-sequential configuration separates the measurement objective color in a time sequential manner by spinning a color wheel. At any particular moment, the measurement objective photons with a selected color transmit through the filter and hit the embedded CCD or CMOS imager inside the colorimeter. Accordingly, the overall display color information and imaging is reconstructed after at least one cycle of the color wheel spinning. In some embodiments, the imaging colorimeter separates color channels using a Bayer filter configuration. A Bayer filter configuration includes a color filter array composed of periodically aligned 2×2 filter elements. A 2×2 filter element may include two green filters, one red filter, and one blue filter. The time-sequential configuration may be more precise than the Bayer filter configuration. On the other hand, the Bayer filter configuration may be faster than the time-sequential configuration. Further, the Bayer filter configuration has a ‘one-shot’ capability for extracting color information, albeit with limited resolution. In some embodiments, an imaging colorimeter may include a spatial Foveon filter separating colors using vertically stacked photodiode layers.
In embodiments disclosed herein, a spectro-colorimeter including a camera-based display color measurement system has a master-slave structure. More specifically, in some embodiments a spectrometer is the master device, driving a camera as the slave device. The spectro-colorimeter includes a controller that adjusts the camera accuracy to match the spectrometer accuracy, maintaining an imaging pipeline. Adjusting the camera accuracy includes building a characterization model using a color correction matrix. The color correction matrix transforms the camera color space to the spectrometer color space. Accordingly, the color correction matrix is a transformation between RGB values (a first 3-dimensional vector) and XYZ values (a second 3-dimensional vector). Since the spectrometer and the camera are integrated in a spectro-colorimeter system, the color correction matrix can be generated in real time. Thus, a continuous and fluid imaging pipeline is established for display testing.
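As a minimal sketch of this transformation, assuming a simple 3 by 3 CCM in Python (the numeric values below are placeholders of the same magnitude as the CCMs tabulated later in Table 2, not prescribed values):

```python
import numpy as np

# Placeholder 3x3 color correction matrix (CCM); in the system described
# here it would be derived from simultaneous spectrometer measurements.
ccm = np.array([[ 3.64,  2.31,  0.40],
                [ 1.05,  5.97, -0.72],
                [-0.03, -1.28,  5.93]])

def rgb_to_xyz(rgb, ccm):
    """Transform one camera RGB vector into tristimulus XYZ via the CCM."""
    return ccm @ np.asarray(rgb, dtype=float)

xyz = rgb_to_xyz([0.42, 0.37, 0.29], ccm)  # hypothetical camera response
```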
FIG. 1 illustrates a spectro-colorimeter system 100 for handling an image pipeline, according to some embodiments. Spectro-colorimeter system 100 includes a camera system 150, a spectrometer system 160, and a controller system 170. Controller system 170 provides data exchange and control commands between spectrometer system 160 and camera system 150. Also shown in FIG. 1 is characterization target 120. Characterization target 120 provides an optical target so that camera system 150 may form a 2-dimensional (2D) image on a sensor array in an image plane of a camera 155. In some embodiments the sensor array is a 2D charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor array. In some embodiments, characterization target 120 may be an emissive target or a reflective target. Examples of characterization target 120 may include a liquid crystal display (LCD), a light emitting diode (LED) display, or any other type of TV or display, such as used in a TV, a computer, a cellular phone, a laptop, a tablet computer, or any other portable or handheld device.
Spectro-colorimeter system 100, as in embodiments disclosed herein, is able to acquire a high resolution spectrum and form an imaging pipeline simultaneously. Accordingly, the spectral measurement and the imaging may share the measurement lighting area at approximately the same time. Light 110 from characterization target 120 is incident on a separating component 130, which splits a portion of incident light 110 towards camera system 150 and a portion of incident light 110 towards spectrometer system 160. Accordingly, in some embodiments separating component 130 is a beam splitter. Further, according to some embodiments, separating component 130 may be a mirror having an aperture 131 on its surface.
A portion of incident light 110 separated by separating component 130 is directed by optical channel 140 into spectrometer system 160. Optical channel 140 may include an optical fiber, a transparent conduit, lenses and mirrors, and free space optics. Lens 167 focuses the incident light through a slit 161 into spectrometer system 160. Spectrometer system 160 may include a collimating mirror 162, a spectroscopic resolving element 164, a focusing mirror 163, and a detector array 165. Accordingly, in some embodiments slit 161, mirrors 162 and 163, spectroscopic resolving element 164, and sensor array 165 are arranged in a crossed Czerny-Turner configuration. In some embodiments, spectroscopic resolving element 164 may be a diffraction grating or a prism. One of ordinary skill in the art will recognize that the peculiarities of the spectrometer system configuration are not limiting to embodiments consistent with the present disclosure. Spectrometer system 160 may include a processor circuit 168 and a memory circuit 169. Memory circuit 169 may store commands that, when executed by processor 168, cause spectrometer system 160 to perform the many different operations consistent with embodiments in the present disclosure. For example, processor circuit 168 may establish communication with controller circuit 170, and provide data and commands to camera system 150. Processor circuit 168 may also be configured to execute commands provided by controller 170. In some embodiments, processor circuit 168 may provide a tristimulus vector XYZ to controller 170. Accordingly, the tristimulus vector XYZ may include highly resolved spectral information from characterization target 120 provided by spectrometer system 160 to controller 170.
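As a hedged sketch of how a tristimulus vector may be derived from the spectrometer output, the following Python fragment integrates a measured spectrum against the CIE 1931 color matching functions; the function name and the assumption that radiance and the color matching functions are sampled on a shared wavelength grid are illustrative, not taken from the source:

```python
import numpy as np

def spectrum_to_xyz(wavelengths_nm, radiance, cmf_x, cmf_y, cmf_z):
    """Integrate a measured spectrum against CIE 1931 color matching
    functions to obtain a tristimulus vector (X, Y, Z).

    wavelengths_nm : common sampling grid, e.g., 380-780 nm
    radiance       : spectral radiance reported by the spectrometer
    cmf_x, cmf_y, cmf_z : color matching functions on the same grid
    """
    X = np.trapz(radiance * cmf_x, wavelengths_nm)
    Y = np.trapz(radiance * cmf_y, wavelengths_nm)
    Z = np.trapz(radiance * cmf_z, wavelengths_nm)
    return np.array([X, Y, Z])
```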
A portion of incident light 110 reflected from separating component 130 is directed by optical component 135 towards imaging camera 155. Optical component 135 may include a mirror, a lens, a prism, or any combination of the above. Camera system 150 may include a processor circuit 158 and a memory circuit 159. Memory circuit 159 may store commands that, when executed by processor 158, cause camera system 150 to perform the many different operations consistent with embodiments in the present disclosure. For example, processor circuit 158 may establish communication with controller circuit 170, and provide data and commands to spectrometer system 160. Processor circuit 158 may also be configured to execute commands provided by controller 170. Also, in some embodiments processor circuit 158 provides RGB values measured by camera system 150 to controller 170.
Thus, embodiments consistent with the present disclosure substantially reduce the test time of characterization target 120 by using simultaneous capture of a large number of measurements in a single image. Embodiments as disclosed herein also provide camera system 150 (e.g., a CCD device) coupled to spectrometer system 160 in an imaging pipeline. Thus, the highly resolved 2-dimensional information of camera system 150 may be calibrated in real time with the highly resolved spectral information provided by spectrometer system 160.
FIG. 2 illustrates a flow chart including steps in an imaging pipeline method 200, according to some embodiments. Some steps in imaging pipeline method 200 may be applied in a production environment for display devices (e.g., a factory), using a ‘golden’ sample, for example once a month. In some embodiments, steps in imaging pipeline method 200 may be performed more frequently, such as for every display being tested. Some steps in imaging pipeline method 200 may be performed for each one of a plurality of images tested on each display. Steps in method 200 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and by a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1).
Step 210 includes providing a calibration target. In some embodiments, step 210 may include selecting a plurality of screen displays having standardized characteristics. For example, the plurality of screen displays may include a set of screens, each having a single, pre-determined color. In some embodiments, selecting a plurality of screen displays may include selecting screen displays having spatial uniformity. For example, step 210 may include selecting a plurality of screen displays having a uniform intensity. Step 220 includes receiving RGB data from camera system 150. Step 230 includes receiving XYZ data from spectrometer system 160. The XYZ data received in step 230 may include a tristimulus vector determined by a highly resolved spectral analysis of incident light 110. Step 240 may include providing a color correction matrix (CCM). The CCM transforms RGB values provided by camera system 150 into a device independent color space, such as the CIE tristimulus vector XYZ. Step 250 includes providing an error correction to camera system 150 so that the camera system may adjust its image settings. In some embodiments, steps 240 and 250 may include steps and procedures as described in detail below.
A. Development of Color Correction Matrix:

Since the spectral sensitivity functions of camera system 150 may not be identical to the CIE color matching functions of human vision, the output responses of camera system 150 and the tristimulus values from spectrometer system 160 are related by a characterization model included in steps 240 and 250. For achieving high-fidelity color reproduction, the output RGB values from camera system 150 are transformed to CIE colorimetric values, such as XYZ or CIELAB. The model is developed based on two sets of data: colorimetric values (e.g., tristimulus vector XYZ) provided by spectrometer system 160, and camera responses (e.g., RGB output) from camera system 150. Accordingly, the colorimetric responses and the camera responses originate from a characterization target, for example characterization target 120. In some embodiments, a calibration method of an imaging pipeline may include characterization targets that are accurate colorimetric standards. Thus, a calibration process as in imaging pipeline method 200 may provide a reliable camera model that may be used in a display manufacturing environment.
The CCM in step 240 may be constructed by simultaneously measuring the RGB response of camera system 150 and the XYZ colorimetric values provided by spectrometer system 160 from a characterization target 120.
A.1 Camera Characterization Model:

Most characterization models are built by first measuring the characterization target on the media considered, and then generating the mathematical model that transforms any color in the device color space to a particular color space. It is often possible to define the relationship between two color spaces through a 3 by 3 matrix. For example:

$$\begin{bmatrix}X\\Y\\Z\end{bmatrix}=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}\begin{bmatrix}R\\G\\B\end{bmatrix}\qquad\text{(Eq. 1)}$$
where X, Y and Z may be the CIE tristimulus values provided by spectrometer system 160, and R, G and B are the camera signals provided by camera system 150. However, when modeling many devices the 3 by 3 matrix does not yield sufficiently accurate results, and a more complex or non-linear model may be desirable.
With the purpose of display measurement, a polynomial model is established without any assumption about physical features of the associated device. It includes a series of coefficients which are determined by regression from a set of known samples. The generic formula for the polynomial model is given in Eq. 2:

$$X=\sum_{i_R+i_G+i_B\le n_P}q_{x,i_R,i_G,i_B}\,R^{i_R}G^{i_G}B^{i_B},\qquad Y=\sum_{i_R+i_G+i_B\le n_P}q_{y,i_R,i_G,i_B}\,R^{i_R}G^{i_G}B^{i_B},\qquad Z=\sum_{i_R+i_G+i_B\le n_P}q_{z,i_R,i_G,i_B}\,R^{i_R}G^{i_G}B^{i_B}\qquad\text{(Eq. 2)}$$

where $i_R$, $i_G$ and $i_B$ are nonnegative integer indices representing the order of the R, G and B camera responses; $n_P$ is the order of the polynomial model; and $q_{x,i_R,i_G,i_B}$, $q_{y,i_R,i_G,i_B}$, and $q_{z,i_R,i_G,i_B}$ are the model coefficients to be determined. When all of $i_R$, $i_G$ and $i_B$ are allowed to be zero, the constant coefficients will be included. When $n_P=1$, Eq. 2 becomes:
$$X=q_{x,0,0,0}+q_{x,1,0,0}R+q_{x,0,1,0}G+q_{x,0,0,1}B$$
$$Y=q_{y,0,0,0}+q_{y,1,0,0}R+q_{y,0,1,0}G+q_{y,0,0,1}B$$
$$Z=q_{z,0,0,0}+q_{z,1,0,0}R+q_{z,0,1,0}G+q_{z,0,0,1}B\qquad\text{(Eq. 3)}$$
and when $n_P=2$, Eq. 2 becomes:
$$X=q_{x,0,0,0}+q_{x,1,0,0}R+q_{x,0,1,0}G+q_{x,0,0,1}B+q_{x,2,0,0}R^2+q_{x,0,2,0}G^2+q_{x,0,0,2}B^2+q_{x,1,1,0}RG+q_{x,1,0,1}RB+q_{x,0,1,1}GB$$
$$Y=q_{y,0,0,0}+q_{y,1,0,0}R+q_{y,0,1,0}G+q_{y,0,0,1}B+q_{y,2,0,0}R^2+q_{y,0,2,0}G^2+q_{y,0,0,2}B^2+q_{y,1,1,0}RG+q_{y,1,0,1}RB+q_{y,0,1,1}GB$$
$$Z=q_{z,0,0,0}+q_{z,1,0,0}R+q_{z,0,1,0}G+q_{z,0,0,1}B+q_{z,2,0,0}R^2+q_{z,0,2,0}G^2+q_{z,0,0,2}B^2+q_{z,1,1,0}RG+q_{z,1,0,1}RB+q_{z,0,1,1}GB\qquad\text{(Eq. 4)}$$
Eq. 2 can be expressed in matrix form as given in Eq. 5:

$$\bar c=Q\,\bar a\qquad\text{(Eq. 5)}$$

Thus, for $n_P=1$, Q is a 3 by 4 matrix:

$$\bar c=Q\,\bar a=\begin{bmatrix}q_{x,0,0,0}&q_{x,1,0,0}&q_{x,0,1,0}&q_{x,0,0,1}\\q_{y,0,0,0}&q_{y,1,0,0}&q_{y,0,1,0}&q_{y,0,0,1}\\q_{z,0,0,0}&q_{z,1,0,0}&q_{z,0,1,0}&q_{z,0,0,1}\end{bmatrix}\begin{bmatrix}1\\R\\G\\B\end{bmatrix}\qquad\text{(Eq. 6)}$$

where $\bar c$ is a column vector of tristimulus values, Q is the polynomial mapping matrix, and $\bar a$ is a column vector formed by camera responses. For $n_P$ from 1, 2, 3, 4 to 5, the sizes of the column vector $\bar a$ together with the matrix Q are tabulated in Table 1.
TABLE 1
Sizes of the matrix for polynomial models

$n_P$   size of $\bar c$   size of Q   $N_P$ (size of $\bar a$)
  1            3             3 × 4              4
  2            3             3 × 10            10
  3            3             3 × 20            20
  4            3             3 × 35            35
  5            3             3 × 56            56
For characterizing a digital camera by the polynomial model, there are two steps:

1. Form the vector $\bar a$ via the given camera RGB vector $\bar o=(R,G,B)^T$.
2. Transform $\bar a$ to the vector $\bar c$ of tristimulus values by the mapping matrix Q.

Note here the superscript T represents the transpose of a vector or matrix. Since the polynomial model is established once the mapping matrix Q is defined, some training samples may be desirable.
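A brief Python sketch of step 1 follows; the helper name poly_vector and the monomial ordering are illustrative assumptions:

```python
import numpy as np
from itertools import product

def poly_vector(r, g, b, n_p):
    """Form the column vector a-bar of Eq. 2 from camera responses.

    Includes every monomial R^iR * G^iG * B^iB with iR + iG + iB <= n_p,
    so n_p = 1 yields 4 terms and n_p = 2 yields 10 terms (cf. Table 1).
    """
    exponents = [(ir, ig, ib)
                 for ir, ig, ib in product(range(n_p + 1), repeat=3)
                 if ir + ig + ib <= n_p]
    return np.array([r**ir * g**ig * b**ib for ir, ig, ib in exponents])

assert poly_vector(0.5, 0.5, 0.5, 1).size == 4    # matches Table 1
assert poly_vector(0.5, 0.5, 0.5, 2).size == 10   # matches Table 1
```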
Suppose K samples are available. For each sample, the camera response vector $\bar o$ can be obtained by imaging the sample with the camera. The tristimulus values vector $\bar c$ can also be measured by physical measurement, such as with a spectrophotometer. Hence there are K tristimulus value vectors $\bar c_k$, k = 1, 2, . . . , K, and K camera response vectors $\bar o_k$, k = 1, 2, . . . , K, which form the K vectors $\bar a_k$, k = 1, 2, . . . , K. Then Eq. 5 can be expressed as:

$$\bar c_k=Q\,\bar a_k,\quad k=1,2,\dots,K\qquad\text{(Eq. 7)}$$

where $\bar a_k$ is formed based on the camera response vector $\bar o_k$. Letting

$$C=[\bar c_1,\bar c_2,\dots,\bar c_K]\quad\text{and}\quad A=[\bar a_1,\bar a_2,\dots,\bar a_K]\qquad\text{(Eq. 8)}$$

results in the matrix equation:

$$C=QA\qquad\text{(Eq. 9)}$$
where C is a 3 by K matrix, Q is a 3 by $N_P$ matrix, and A is an $N_P$ by K matrix. In the above matrix equation, matrix Q is unknown. Since both of the matrices C and Q have three rows, the system can be solved row by row. Let $\tilde C_j$, j = 1, 2, 3, represent the three row vectors of the matrix C, and $\tilde Q_j$, j = 1, 2, 3, the three row vectors of the matrix Q. Thus, the matrix equation Eq. 9 can be split into three linear systems of equations:

$$\tilde C_j=\tilde Q_j A\qquad\text{(Eq. 10)}$$

or

$$\bar c_j=A^T\bar q_j,\quad\text{with }\bar c_j=(\tilde C_j)^T,\ \bar q_j=(\tilde Q_j)^T,\ j=1,2,3\qquad\text{(Eq. 11)}$$

Note that $\tilde C_j$ and $\tilde Q_j$ are K and $N_P$ row vectors, respectively, while $\bar c_j$ and $\bar q_j$ are K and $N_P$ column vectors.
When K > $N_P$, the linear system of equations $\bar c_j=A^T\bar q_j$ will have no solution. If K < $N_P$, the equation will have many solutions. When K = $N_P$, it may have a unique solution, many solutions, or no solution, depending on the conditions of the vector $\bar c_j$ and the matrix $A^T$. In general, the least squares solution is required, which is formulated as minimizing the expression:

$$\lVert A^T\bar q_j-\bar c_j\rVert_2\qquad\text{(Eq. 12)}$$

Here $\lVert\bar c_j\rVert_2$ denotes the 2-norm of the vector $\bar c_j$. The above solution can be calculated by

$$\bar q_j=(A^T)^{+}\,\bar c_j\qquad\text{(Eq. 13)}$$

where $(A^T)^{+}$ is the generalized or pseudo-inverse of the matrix $A^T$. If K = $N_P$ and Eq. 13 has a unique solution, $(A^T)^{+}$ becomes the normal inverse $(A^T)^{-1}$ of the matrix $A^T$. If the problem (Eq. 13) has many solutions, the above solution becomes the minimum norm solution. Note also that $\bar q_j=(\tilde Q_j)^T$, thus after some algebraic manipulations the mapping Q is finally given by

$$Q=CA^{+}\qquad\text{(Eq. 14)}$$
The above K samples, with known camera response vectors $\bar o_k$ and tristimulus value vectors $\bar c_k$, which are applied to compute the matrix Q, are called the training dataset.
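The fit of Eq. 14 maps directly onto a generalized-inverse computation. The following sketch, reusing the hypothetical poly_vector helper from the earlier sketch (array layouts are assumptions, not from the source), builds C and A from a training dataset and recovers the minimum-norm Q:

```python
import numpy as np

def fit_ccm(xyz_training, rgb_training, n_p=1):
    """Least-squares, minimum-norm fit of the mapping matrix Q (Eq. 14).

    xyz_training : K x 3 array of spectrometer tristimulus vectors
    rgb_training : K x 3 array of camera responses
    Returns Q with shape 3 x N_P, computed as Q = C A^+.
    """
    C = np.asarray(xyz_training, dtype=float).T            # 3 x K
    A = np.column_stack([poly_vector(r, g, b, n_p)
                         for r, g, b in rgb_training])     # N_P x K
    return C @ np.linalg.pinv(A)

def predict_xyz(Q, rgb, n_p=1):
    """Predict tristimulus values for one camera response (Eq. 5)."""
    return Q @ poly_vector(*rgb, n_p)
```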
For example, when using an individual sample to determine matrix Q in Eq. 5, there are many matrices Q satisfying Eq. 5. Constraints such as the above minimum-norm requirement are desirable since the unknown model parameters are used as multipliers. It is desirable that these parameters be smaller in magnitude in order to reduce noise propagation and to prevent local oscillation in prediction. In some embodiments, the minimum norm used may be the square root of the sum of the squared unknowns (the elements in the Q matrix). In the proposed method, the pseudo or generalized inverse is defined in Eq. 14. Hence, regardless of the number of samples used, the matrix Q with minimum norm is obtained, resulting in a unique solution in each case.
Generally, a better mapping to the characterization target can be obtained by higher-order polynomials, which involve more terms in the matrix. However, experimental results show that particular terms such as RGB (first order polynomial, white color) and 1 (zero order polynomial, black color) can provide a more accurate prediction.
A.2 Development of Camera Characterization Target Based on Display Images:

Generally, when more colors are included in characterization target 120, the model can predict with better accuracy, until the model performance stabilizes. A large number of colors may increase production costs in terms of testing time and complexity, while increasing the accuracy of the color rendition of camera system 150. Accordingly, embodiments consistent with the present disclosure provide an optimized set of display colors to construct characterization target 120 with reduced impact on testing time and complexity, while maximizing colorimetric accuracy.
FIG. 3 illustrates a flow chart including steps in an imaging pipeline method 300, according to some embodiments. Steps in method 300 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and by a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1).
The image quality of camera system 150 can significantly vary with the method used in each step of the image-processing pipeline. In camera system 150, the image pipeline involves exposure time determination, defective pixel correction, linearization, dark current removal, uniformity correction, spatial demosaicing, display area detection, a clipping algorithm, and binning. Since the aim is to accurately correlate the camera response to the spectrometer and to be able to detect display artifacts, the effect of the exposure time, linearization, dark current removal, uniformity correction, and clipping algorithm on image quality is fully studied. Imaging pipeline method 300 may include a calibration of camera system 150. Step 310 includes forming an image from characterization target 120. Step 315 includes correcting defective pixels. The defective pixels may be included in the 2D sensor array of camera 155 (cf. FIG. 1). Step 320 may include correcting signal linearity. Step 325 may include compensating for lens shading effects.
Step 330 includes correcting for dark current and smear in the sensor array of camera system 150.
B.1 Dark Current Removal:

Each image is obtained with dark current removal and uniformity correction. The camera dark current is measured with no ambient light 10 times, and the average RGB reading values over the 10 measurements are:
TABLE I
Camera dark current in R, G, and B channels

                 R           G           B
Dark Current     0.424042    0.4193533   0.4701065
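A minimal sketch of the dark current removal, assuming the raw frames are available as floating-point numpy arrays (function names hypothetical):

```python
import numpy as np

def average_dark_frame(dark_frames):
    """Average repeated dark captures (here, 10) into one dark frame."""
    return np.mean(np.stack(dark_frames), axis=0)

def remove_dark_current(image, dark_frame):
    """Subtract the averaged dark frame, clipping negatives to zero."""
    return np.clip(image - dark_frame, 0.0, None)
```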
The CC chart is applied as a characterization target and as a benchmark for the system to build a 3 by 3 CCM using least-squares regression. An evaluation of the CCM derived from the data with and without dark current removal is shown in Tables 2(a) and 2(b), respectively. The differences are small, in the sub-0.1 range.
TABLE 2
CCM derived from the 24 GretagMacbeth ColorChecker chart (a) with and (b) without dark current removal

(a)
  3.6417    2.3073    0.4027
  1.0473    5.9733   −0.7247
 −0.0341   −1.2831    5.9315

(b)
  3.5314    2.3792    0.3478
  0.94161   6.042    −0.7775
 −0.1289   −1.2208    5.8851
The results with and without removing the dark current are shown in Tables 3(a) and 3(b), respectively. The CIEDE2000 color differences are used as the metric to determine training and testing performance. The training performance is for the model trained and tested by the ColorChecker (CC) chart. The testing performance is for the model trained by the CC chart and tested by the 729-sample dataset. It can be seen that the average performance is slightly improved, by 0.2 ΔE00 units, when the dark current is removed.
TABLE 3
Training and testing performance of the CCM (a) with and (b) without dark current removal (ΔE00 units)

(a)                      Min        Mean       Median     Max
Training performance     0.222524   1.595685   0.855206   10.516312
Testing performance      0.039123   1.158254   0.616609   17.589503

(b)                      Min        Mean       Median     Max
Training performance     0.257385   1.746228   1.04665     9.943352
Testing performance      0.033203   1.339187   0.766321   16.974136
Step 335 may include correcting uniformity in the 2D image provided by the sensor array in camera system 150. For example, step 335 may include correction of mura and Moiré artifacts in the image. When the lines in the display happen to line up closely with some of the lines of the CCD sensor, Moiré interference patterns will occur. An optical low pass filter or a digital filter may be used to remove the artifacts. Accordingly, in some embodiments step 335 may include correction of artifacts resulting from a larger field of view of camera 155 relative to characterization target 120. An algorithm to detect the points of interest (POI) (the portion of the sensor array receiving light 110 from characterization target 120) may crop that area from the full camera view. Since the display testing patterns are uniform colors, a technique of edge detection is used. A measure of edge strength, such as the gradient magnitude, is derived for searching local directional maxima of the magnitude. Based on the magnitude, a threshold is applied to decide whether edges are present or not at an image point. The higher the threshold, the more edges will be removed.
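The POI cropping described above may be sketched as follows, assuming a grayscale numpy image and a Sobel gradient magnitude; the exact edge detector and threshold policy are assumptions, not specified by the source:

```python
import numpy as np
from scipy import ndimage

def crop_display_area(gray_image, threshold):
    """Crop the display (points of interest) out of the full camera view.

    Gradient magnitude marks edges; raising the threshold removes more
    weak edges. Assumes the display boundary produces the strong edges.
    """
    gx = ndimage.sobel(gray_image, axis=1)
    gy = ndimage.sobel(gray_image, axis=0)
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude > threshold)
    return gray_image[rows.min():rows.max() + 1,
                      cols.min():cols.max() + 1]
```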
Step 340 may include balancing a white display. Accordingly, step 340 may include presenting a standard ‘white’ characterization target 120 and determining the RGB camera output. Step 345 may include correcting the gamma value of camera system 150. Step 350 may include providing RGB data for a color correction matrix step. Accordingly, step 350 may include providing the RGB data to controller 170 after steps 310 through 345 are completed. Controller 170 may then form the CCM, executing step 240 (cf. FIG. 2). Step 355 includes receiving a CCM. For example, step 355 may include processor circuit 158 receiving the CCM from controller 170 when step 240 is complete (cf. FIG. 2). Step 360 includes providing corrected RGB data from the received CCM. Accordingly, step 360 may include receiving tristimulus data XYZ together with the CCM, so that processor circuit 158 may obtain the corrected RGB values. In some embodiments processor circuit 158 may receive corrected RGB values directly from controller 170. Step 365 includes receiving an error value. The error value may be a difference between the RGB data provided in step 350 and the corrected RGB data provided in step 360. In step 370, processor circuit 158 determines whether the error value is below or above a tolerance value. When the error value is below the tolerance, step 375 includes obtaining a tristimulus XYZ image from the CCM and the corrected RGB data. Accordingly, the XYZ image provided in step 375 may have a high colorimetric accuracy, since it uses data provided by a high resolution spectrometer system 160 and a controller 170 forming a CCM as in step 240 (cf. FIG. 2 and Eqs. 1-14 above). When the error value is above the tolerance, imaging pipeline method 300 is repeated from step 350.
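The convergence test of steps 365 through 375 may be sketched as below; summarizing the difference with a Euclidean norm is an assumption, since the source does not specify the error metric:

```python
import numpy as np

def error_below_tolerance(raw_rgb, corrected_rgb, tolerance):
    """Steps 365-370: compare camera RGB with CCM-corrected RGB; True
    means proceed to the XYZ image (step 375), False means repeat
    the loop from step 350."""
    error = np.linalg.norm(np.asarray(raw_rgb, dtype=float)
                           - np.asarray(corrected_rgb, dtype=float))
    return error < tolerance
```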
FIG. 4 illustrates a flow chart including steps in an imaging pipeline method 400, according to some embodiments. Steps in method 400 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and by a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1).
Imaging pipeline method 400 may include a calibration of spectrometer system 160. Step 410 may include correcting signal linearity. For example, the signal linearity may be the linearity of sensor array 165 (cf. FIG. 1). In some embodiments, step 410 is performed by providing a uniform light source to spectrometer system 160. Step 420 may include adjusting a wavelength scale. Step 430 may include adjusting the spectral sensitivity. Step 440 may include correcting for a dark current. The dark level error may be caused by an imperfect glass trap and specular beam error. Thus, step 440 may include placing a glass wedge in the optical path of spectrometer system 160. Step 450 may include receiving a characterization target light. And step 460 may include providing XYZ data from the spectrum formed with the received characterization light.
FIG. 5 illustrates a flow chart including steps in an imaging pipeline calibration method 500, according to some embodiments. Steps in method 500 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and by a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1).
Step 510 includes selecting a plurality of training samples. Step 510 may include selecting a plurality of colors from a standard, or a ‘gold’ standard. Step 520 includes providing a plurality of test samples from the plurality of training samples selected in step 510. Accordingly, step 520 may include digitally processing the training samples provided in step 510 to generate a larger number of test samples. A plurality of training samples as selected in step 510 may be as described in detail below, with reference to FIGS. 6A and 6B. In one example, training samples 610 may be obtained from a well-known standard chart. The ColorChecker® Color Rendition Chart, supplied by the Macbeth Company in 1976, is now called the ColorChecker® (CC) chart and is owned by X-Rite. It has been widely used as a reference in the fields of photography and video. The chart includes a matrix of 24 scientifically prepared color squares, including three additive and three subtractive primaries, 6 greyscale tones, and natural color objects such as foliage, human skin, and blue sky, which exemplify the color of their counterparts. These 24 colors are reproduced on the testing display as characterization target 120.
FIG. 6A illustrates a color distribution chart 600A for a plurality of training samples 610 in an imaging pipeline calibration method, according to some embodiments. FIG. 6A shows the color distribution of the CC chart on the a*b* plane. Accordingly, the abscissa 601A in chart 600A corresponds to the a* value (red-green scale), and the ordinate 602A in chart 600A corresponds to the b* value (yellow-blue scale). The CC chart may include a set of gray scale colors 620 that are displayed at the origin of chart 600A (neutral color).
The greyscale of the CC chart can be applied to correct the linearity between the luminance level and the camera response. Once the camera has been characterized, the greyscale is also used to check the gamma of the testing display (e.g., in step 345, cf. FIG. 3).
FIG. 6B illustrates a color distribution chart 600B for the plurality of training samples 610 in an imaging pipeline calibration method, according to some embodiments. FIG. 6B shows the color distribution of the CC chart on the L*-C*ab plane. Accordingly, the abscissa 601B in chart 600B corresponds to the chroma value C*ab (√(a*² + b*²)), and the ordinate 602B in chart 600B corresponds to the L* value (luminance). Training samples 610 may include a set 620 of gray scale colors that are displayed along the 602B axis at regular intervals (evenly graded ‘lightness’).
A plurality of test samples as used in method 500 (cf. FIG. 5 above) may be as described in detail with respect to FIGS. 7A and 7B, below. In FIG. 7A the abscissa 701A and ordinate 702A may be as in FIG. 6A, and in FIG. 7B the abscissa 701B and ordinate 702B may be as in FIG. 6B.
FIG. 7A illustrates a color distribution chart 700A for a plurality of test samples 710 in an imaging pipeline calibration method, according to some embodiments. Accordingly, the test samples may include 729 uniformly distributed colors on the display color gamut. One of ordinary skill will recognize that there is nothing limiting with regard to the number of data points in test sample set 710.
Test colors 710 are formed from training colors 610 using 16-bit intervals along the red, green, and blue channels, plus a grey scale, accumulated to a total of 729 colors. These colors are uniformly distributed in the display color gamut, as shown in FIGS. 7A and 7B.
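Because 729 = 9 × 9 × 9, one way to realize such a uniformly distributed test set is nine evenly spaced digital counts per channel. The sketch below assumes a 16-bit code range; the exact levels used in the study are not specified here:

```python
import numpy as np
from itertools import product

# Nine evenly spaced counts per channel give 9 x 9 x 9 = 729 test colors.
levels = np.linspace(0, 65535, 9).astype(int)
test_colors = np.array(list(product(levels, repeat=3)))
assert test_colors.shape == (729, 3)
```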
FIG. 7B illustrates a color distribution chart 700B for a plurality of test samples 710 in an imaging pipeline calibration method, according to some embodiments. Test samples 710 may include gray scale samples 720. Chart 700B shows the L*-C*ab plane, so that the gray scale points 720 are clearly distinguishable along the L* axis (ordinate).
Based on test sample set 710, a color selection algorithm is applied to select colors to establish a characterization target for display measurement. This set is also applied to test the robustness of characterization targets.
FIG. 8 illustrates a flow chart including steps in a color selection algorithm 800 used for an imaging pipeline calibration method, according to some embodiments. Algorithm 800 may include a color selection algorithm (CSA) to achieve high color accuracy in terms of color differences. In other words, CSA 800 may achieve high color resolution. During the selection process, a source dataset including XYZ and camera RGB values is first provided (see vectors $\bar c$ and $\bar a$, in reference to step 240 in method 200, cf. FIG. 2). The numbers of samples in the source dataset and in the training dataset, which holds the samples selected from the source dataset, are known.
Steps in method 800 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and by a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1).
Step 810 includes collecting a training sample. Accordingly, step 810 may include selecting a training sample from a standardized set. The standardized set may be a set of calibration colors. If K is the total number of samples in a training set, a value κ may be predefined as the number of training samples that form a predictor set. Thus, κ may be a ‘dimension’ of the predictor set. In some embodiments, method 800 starts with κ equal to zero. Since there are K training samples, each sample is a candidate. Each of the K samples is first used (κ=0) to obtain a predictor set. Thus, K models are obtained. Step 815 includes a query as to whether or not the training sample is already included in a predictor set. If the training sample is already included in the predictor set, then method 800 starts again with a new training sample, to form a new predictor set. A predictor set may include matrices C and $A^T$, including vectors $\bar c$ and $\bar a$ (cf. the detailed description of step 240 in method 200, FIG. 2). Thus, the predictor set may include tristimulus values (XYZ, vector $\bar c$) from spectrometer system 160, and RGB values from camera system 150 (vector $\bar a$, formed from RGB values according to Eq. 2). When the training sample is not included in the predictor set, step 820 adds the training sample to the predictor set. In some embodiments, the predictor set may be empty, so that the first training sample selected in step 810 may automatically be used in the predictor set. In step 825 a CCM is obtained using the predictor set. Accordingly, the CCM may be formed as matrix Q, from matrices C and A (cf. Eq. 14). Step 830 includes obtaining an error value from a plurality of test samples. For example, step 830 may include obtaining RGB values for a plurality of test samples with the tristimulus values XYZ provided by spectrometer system 160 and the CCM matrix Q. Step 830 may further include comparing the obtained RGB values with the RGB values provided by camera system 150 for each test sample.
The set of test samples used in step 830 may be much larger than the set of training samples used to form the predictor set. For example, the set of training samples in steps 810 through 825 may be as training set 610 (cf. FIGS. 6A and 6B), and the set of test samples in step 830 may be as test set 710 (cf. FIGS. 7A and 7B). Step 830 may include obtaining a single error value from a set of error values for each of the test samples. In some embodiments step 830 may include averaging the error values from the set of error values for each of the test samples. In some embodiments, step 830 may include selecting an error value from a statistical distribution of the error values for all the test samples. For example, the median, the mean, the maximum, or the minimum error value in a distribution of error values may be selected in step 830. Step 835 includes querying whether or not a new training sample is selected. For example, if a training sample remains to be selected, then steps 810 through 835 are repeated until the result in step 835 is a ‘no.’ In some embodiments, step 835 may produce a ‘no’ when all training samples in the set of training samples have been selected or included in a predictor set. Accordingly, up to step 835 a plurality of predictor sets is selected, each predictor set having the same number of $\bar c$ vectors and $\bar a$ vectors (κ+1). Moreover, each predictor set up to step 835 includes a same set of κ $\bar c$ vectors and κ $\bar a$ vectors, except for the $\bar c$ vector and $\bar a$ vector selected in the last iteration of steps 810 through 835. Also, within a single predictor set, all (κ+1) vectors $\bar c$ may be different from one another, and all (κ+1) vectors $\bar a$ may be different from one another. Thus, up to step 835 an error value is assigned to each predictor set associated with each selected training sample.
When step 835 results in a ‘no’ answer, step 840 includes forming a set of error values from the plurality of predictor sets. Step 845 includes selecting a training sample and a predictor set from the set of error values. Accordingly, step 845 includes selecting the training sample that provides the lowest error in the set of error values formed in step 840. If the error value of the selected predictor set is less than a tolerance value according to step 850, then the predictor set is used to form the CCM in step 855. Accordingly, step 855 may include forming matrix Q using the (κ+1) $\bar c$ vectors and the (κ+1) $\bar a$ vectors from the selected predictor set, as in Eq. 14. If the error value of the selected predictor set is greater than or equal to the tolerance value according to step 850, then method 800 may be repeated from step 810, and the dimension of the predictor set is increased by one (1).
For example, the second iteration of steps 810 through 845 provides the best combination of two $\bar c$ vectors and $\bar a$ vectors for a predictor set. In order to avoid selecting two identical $\bar c$ vectors or two identical $\bar a$ vectors, the previously selected training sample $\bar c$ vector and $\bar a$ vector are removed from the source dataset, and κ equals one (1). Once again, each remaining training sample, combined with the already selected training sample, is used for training the model in steps 810 through 845. Thus, in the second iteration, the number of predictor sets in step 840 will be K−1 models. Again, each predictor set is used to predict the full source dataset. From the predictions, the sample that, combined with the already selected training sample, yields the smallest color difference will be selected.
Accordingly, method 800 may be repeated until it reaches a number of training samples producing an error lower than the threshold. This CSA is simple and easy to implement. According to some embodiments, a predictor set having a single element may include the lightest neutral color in the training set, with a mean error value (ΔE00) of about 15. Thus, in some embodiments it is desirable that the lightest neutral color be included in the training set.
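A compact sketch of this greedy selection, reusing the hypothetical fit_ccm and predict_xyz helpers from the earlier sketches and substituting a plain Euclidean XYZ distance where the study used CIEDE2000, may look as follows:

```python
import numpy as np

def greedy_color_selection(xyz_all, rgb_all, tolerance, n_p=1):
    """Grow a predictor set one training sample at a time (cf. method 800).

    At each iteration, every remaining candidate is tentatively added,
    a CCM is fit, and the candidate whose inclusion yields the lowest
    mean prediction error over the full source dataset is kept. Stops
    when the error drops below the tolerance or no candidates remain.
    xyz_all, rgb_all : K x 3 numpy arrays of the source dataset.
    """
    selected, candidates = [], list(range(len(rgb_all)))
    while candidates:
        errors = {}
        for k in candidates:
            trial = selected + [k]
            Q = fit_ccm(xyz_all[trial], rgb_all[trial], n_p)
            pred = np.array([predict_xyz(Q, rgb, n_p) for rgb in rgb_all])
            errors[k] = np.mean(np.linalg.norm(pred - xyz_all, axis=1))
        best = min(errors, key=errors.get)
        selected.append(best)
        candidates.remove(best)
        if errors[best] < tolerance:
            break
    return selected
```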
FIG. 9A illustrates a camera system response chart 900A for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments. Chart 900A may be the result of step 320 in imaging pipeline method 300 (cf. FIG. 3). Abscissa 901 in chart 900A may be associated with a tristimulus XYZ vector provided by spectrometer system 160, such as luminance L*, or a Y coordinate. Ordinate 902 in chart 900A may be associated with RGB data from camera system 150, such as the ‘Green’ count, ‘G.’ Data points 910 may be associated with each training sample in a set of training samples (e.g., training samples 610, cf. FIGS. 6A and 6B). Data points 910 may also comprise gray scale data points 920-1, 920-2, 920-3, 920-4, 920-5, and 920-6. To correct for signal linearity, the exposure time of camera 155 in camera system 150 may be adjusted, as follows. Chart 900A is associated with a fixed exposure time scenario. In particular, the exposure time may be a few milliseconds, such as less than 10, 10, 20, 24, or even more milliseconds.
In order to have sufficient image quality to detect display effects, the exposure time should be controlled by the signal to noise ratio (SNR) of an image. A fixed exposure time for all measurements keeps the linearity between the camera response and the colors, which is desirable for CCM development. Accordingly, it may be desirable to avoid SNR fluctuations with different color patterns, especially for a dark characterization target 120. Using a fixed exposure time may include ensuring that test colors are within the dynamic range of camera system 150.
FIG. 9B illustrates a camera system response chart 900B for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments. Abscissa 901, ordinate 902, and data points 910 and 920 in chart 900B are as in chart 900A, described above. Chart 900B includes a configuration wherein the exposure time in camera 155 is set in auto-exposure mode. The auto-exposure setting ensures images with high SNR. However, chart 900B shows that the camera linearity to a color stimulus will be lower than with the fixed exposure setting (chart 900A). A configuration of camera system 150 as described in chart 900B may be desirable to increase the average camera signal. Accordingly, a signal level from about 40000 to 65535 may be obtained for some test samples, rendering a higher average SNR than in chart 900A.
FIG. 9C illustrates a camera system response chart 900C for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments. Abscissa 901, ordinate 902, and data points 910 and 920 in chart 900C are as in charts 900A and 900B, described above. Chart 900C includes a configuration wherein the exposure time in camera 155 is set in auto-exposure mode. Further, in the configuration illustrated in FIG. 9C, the output from camera system 150 (ordinate 902) is normalized by the exposure time. Chart 900C illustrates that, in order to correct signal linearity (e.g., in step 320, method 300, cf. FIG. 3), the camera output may be normalized with the exposure time. The camera RGB responses in FIGS. 9A-9C are measured for a set of achromatic samples, a uniform white, and the dark condition.
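The normalization illustrated by chart 900C reduces to dividing the raw camera counts by the per-frame exposure time, for example (function name illustrative):

```python
import numpy as np

def normalize_by_exposure(rgb_counts, exposure_time_ms):
    """Restore linearity under auto-exposure (cf. chart 900C) by scaling
    raw counts with the exposure time recorded for each frame."""
    return np.asarray(rgb_counts, dtype=float) / float(exposure_time_ms)
```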
FIG. 10 illustrates a color distribution chart 1000 for a plurality of test samples measured (1010) and predicted (1020) in an imaging pipeline calibration method, according to some embodiments. Chart 1000 has an abscissa 601A, an ordinate 602A, and a depth axis 602B, as defined above with respect to FIGS. 6A and 6B. A training sample set of 24 colors was used (cf. FIGS. 6A and 6B) to select a preferred predictor set according to method 800 (cf. FIG. 8). A test sample set of 729 colors (cf. FIGS. 7A and 7B) is shown in FIG. 10. It can be seen that larger errors occur in the dark region.
FIG. 11 illustrates a camera display 1100 for a uniformity correction step of a camera system in an imaging pipeline calibration method, according to some embodiments. Camera display 1100 may be a 2D sensor array, as discussed in detail above in relation to FIG. 1. In some embodiments, method 300 (cf. FIG. 3) may include a step for detecting display artifacts such as black mura. Black mura may negatively affect the uniformity of camera system 150. Accordingly, a spatial correction is conducted to minimize the effect of any spatial non-uniformity of the intensity of the illumination or of the sensitivity of the camera CCD array. FIG. 11 shows an example of the non-uniformity effect on mura detection at display edge portion 1110. It can be seen that the middle portion 1120 of display 1100 has very similar luminance intensity to the mura area at the edge. This increases the complexity of mura detection from a uniform display.
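A classic flat-field correction is one way to implement such a spatial correction; the sketch below, with hypothetical names, normalizes each frame by a dark-subtracted capture of a uniform white reference:

```python
import numpy as np

def flat_field_correct(image, white_reference, dark_frame):
    """Divide out illumination and CCD sensitivity non-uniformity using
    a uniform white capture, then rescale to the mean white level."""
    numerator = image - dark_frame
    denominator = np.clip(white_reference - dark_frame, 1e-6, None)
    return numerator / denominator * np.mean(denominator)
```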
FIG. 12 illustrates an error average chart 1200 in an imaging pipeline calibration method, according to some embodiments. Chart 1200 may be the result of several iterations in method 800, described in detail above. The abscissa in chart 1200 corresponds to the dimensionality of the predictor set (κ). The ordinate in chart 1200 corresponds to the error obtained for the selected predictor set at the end of each iteration sequence, in step 845. In this particular example, in chart 1200 the predictor set is formed from colors selected from a training set including the 729 samples of FIGS. 7A and 7B. Characterization target 120 is applied to train the characterization model, which is tested by the 729 samples. FIG. 12 shows the performance in terms of CIEDE2000 against the number of samples selected by method 800. It can be seen that the model performance stabilizes at a mean error of one (1) ΔE00 unit with as few as four (4) training samples. Accordingly, it is desirable to determine which set of four training samples provides the optimal performance, so that this set is used for a CCM in any one of imaging pipeline methods 200, 300, and 400 (cf. FIGS. 2, 3, and 4).
FIGS. 13A and 13B illustrate color distribution charts 1300A and 1300B for a plurality of training samples 1310 in an imaging pipeline calibration method, according to some embodiments. The abscissa and ordinate in chart 1300A are 601A and 602A, respectively (cf. FIG. 6A). The abscissa and ordinate in chart 1300B are 601B and 602B, respectively (cf. FIG. 6B). Accordingly, charts 1300A and 1300B display the color chart result for the training sample points 710 using method 800 up to the fourth iteration (κ=4), as described in FIG. 12, above. Chart 1300A displays the four training samples (open squares) selected in the preferred predictor set (CCM) in method 800 in an a*b* plot. Chart 1300B displays the four training samples (open squares) selected in the preferred predictor set (CCM) in method 800 in an L*-C*ab plot. The four training samples in the preferred predictor set are grey, cyan, yellow, and magenta, as shown in FIGS. 13A and 13B. The 24 relevant samples of the 729 colors from the display gamut are also plotted. As expected, the training sample points (red circles) fall approximately at the center of the predicted values (open squares). The test colors in set 710 cover the display color gamut and include grey scale and saturated colors.
Embodiments consistent with the present disclosure include a complete imaging pipeline for the new combination spectro-colorimeter device, including the exposure time, dark current normalization, color correction matrix derivation, and flat field calibration. In some embodiments the imaging pipeline achieves a colorimetric accuracy better than two (ΔE<2) for 729 test samples covering the full bandwidth of the color space. Imaging pipelines as disclosed herein enable closed-loop master-slave calibration of spectrometer system 160 and camera system 150. Therefore, embodiments as disclosed herein integrate two device components into one system, providing imaging capability with spectrometer accuracy.
Embodiments consistent with the present disclosure may include applications in the display test industry as well as the machine vision field. Other applications may be readily envisioned, since an imaging pipeline consistent with embodiments as disclosed herein integrates two different hardware components, such as a camera system 150 and a spectrometer system 160.
FIG. 14 illustrates a block diagram of a spectro-colorimeter system 1400 for handling an imaging pipeline, according to some embodiments. Spectro-colorimeter system 1400 includes a spectrometer system 1460 and a camera system 1450 used in an imaging pipeline as described above. Furthermore, spectro-colorimeter system 1400 may include a calibration target display 1420 used in a calibration method for an imaging pipeline consistent with embodiments disclosed herein.
Spectro-colorimeter system 1400 can include circuitry of a representative computing device. For example, spectro-colorimeter system 1400 can include a processor 1402 that pertains to a microprocessor or controller for controlling the overall operation of spectro-colorimeter system 1400. Spectro-colorimeter system 1400 can include instruction data pertaining to operating instructions, such as instructions for implementing and controlling user equipment, in file system 1404. File system 1404 can be a storage disk or a plurality of disks. In some embodiments, file system 1404 can be flash memory, semiconductor (solid state) memory, or the like. File system 1404 can provide high capacity storage capability for spectro-colorimeter system 1400. In some embodiments, to compensate for a relatively slow access time of file system 1404, spectro-colorimeter system 1400 can also include a cache 1406. Cache 1406 can include, for example, Random-Access Memory (RAM) provided by semiconductor memory, according to some embodiments. The relative access time for cache 1406 can be substantially shorter than for file system 1404. On the other hand, file system 1404 may include a higher storage capacity than cache 1406. Spectro-colorimeter system 1400 can also include a RAM 1405 and a Read-Only Memory (ROM) 1407. ROM 1407 can store programs, utilities, or processes to be executed in a non-volatile manner. RAM 1405 can provide volatile data storage, such as for cache 1406.
Spectro-colorimeter system 1400 can also include a user input device 1408 allowing a user to interact with spectro-colorimeter system 1400. For example, user input device 1408 can take a variety of forms, such as a button, a keypad, a dial, a touch screen, an audio input interface, a visual/image capture input interface, an input in the form of sensor data, and any other input device. Still further, spectro-colorimeter system 1400 can include a display 1410 (screen display) that can be controlled by processor 1402 to display information, such as test results and calibration test results, to the user. Data bus 1416 can facilitate data transfer between at least file system 1404, cache 1406, processor 1402, and controller 1470. Controller 1470 can be used to interface with and control different devices such as camera system 1450, spectrometer system 1460, and calibration target display 1420. Controller 1470 may also control motors to position mirrors or lenses through appropriate codecs. For example, control bus 1474 can be used to control camera system 1450.
Spectro-colorimeter system 1400 can also include a network/bus interface 1411 that couples to data link 1412. Data link 1412 allows spectro-colorimeter system 1400 to couple to a host computer, to accessory devices, or to other networks such as the internet. In some embodiments, data link 1412 can be provided over a wired connection or a wireless connection. In the case of a wireless connection, network/bus interface 1411 can include a wireless transceiver. In some embodiments, sensor 1426 includes circuitry for detecting any number of stimuli. For example, sensor 1426 can include any number of sensors for monitoring environmental conditions, such as a light sensor (e.g., a photometer), a temperature sensor, and so on.
The various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination. Various aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software. The described embodiments can also be embodied as computer readable code on a computer readable medium for controlling manufacturing operations or as computer readable code on a computer readable medium for controlling a manufacturing line. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, HDDs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of specific embodiments are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the described embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.