In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation to the mean (or its absolute value, |μ|), and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R,[citation needed] by economists and investors in economic models, in epidemiology, and in psychology/neuroscience.
The coefficient of variation (CV) is defined[1] as the ratio of the standard deviation σ to the mean μ:

CV = σ / μ
It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on scales that have a meaningful zero (ratio scale) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit etc.) are interval scales with arbitrary zeros, so the computed coefficient of variation would be different depending on the scale used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 Kelvin is twice as hot as 10 Kelvin, but only in this scale with a true absolute zero. While a standard deviation (SD) can be measured in Kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variation.
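As a minimal illustrative sketch (with hypothetical ratio-scale measurements; the values below are not from any cited study), the CV is simply the standard deviation divided by the mean:

```python
import numpy as np

# Hypothetical ratio-scale measurements (e.g., lengths in mm).
x = np.array([10.2, 9.8, 10.5, 10.1, 9.9])

cv = np.std(x, ddof=1) / np.mean(x)   # sample standard deviation over sample mean
print(f"CV = {cv:.2%}")               # often reported as a percentage ("%RSD")
```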
Measurements that are log-normally distributed exhibit a stationary CV; in contrast, the SD varies depending upon the expected value of the measurements.
A more robust possibility is the quartile coefficient of dispersion, half the interquartile range, (Q3 − Q1) / 2, divided by the average of the quartiles (the midhinge), (Q1 + Q3) / 2, i.e. (Q3 − Q1) / (Q1 + Q3).
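A short sketch of the quartile coefficient of dispersion (quartiles computed with numpy's default interpolation; the data are hypothetical):

```python
import numpy as np

x = np.array([2, 4, 6, 8, 10, 12, 14])      # hypothetical data
q1, q3 = np.percentile(x, [25, 75])         # first and third quartiles

# Half the interquartile range divided by the midhinge:
qcd = ((q3 - q1) / 2) / ((q1 + q3) / 2)     # algebraically (q3 - q1) / (q3 + q1)
print(qcd)
```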
In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to single CV calculation using a maximum-likelihood estimation approach.[3]
In the examples below, we will take the values given as randomly chosen from a larger population of values.
In these examples, we will take the values given as the entire population of values.
When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation s to the sample mean x̄:

ĉv = s / x̄
But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator[4] for a sample of size n is:

ĉv* = (1 + 1/(4n)) · ĉv
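The small-sample correction can be sketched as follows (assuming approximately normally distributed data; the sample values are hypothetical):

```python
import numpy as np

x = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2])   # hypothetical sample
n = len(x)

cv_hat = np.std(x, ddof=1) / np.mean(x)        # plain estimate, biased low for small n
cv_corrected = (1 + 1 / (4 * n)) * cv_hat      # correction for normally distributed data
print(cv_hat, cv_corrected)
```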
Many datasets follow an approximately log-normal distribution.[5] In such cases, a more accurate estimate, derived from the properties of the log-normal distribution,[6][7][8] is defined as:

ĉv_raw = √(exp(s_ln²) − 1)
where s_ln is the sample standard deviation of the data after a natural log transformation. (In the event that measurements are recorded using any other logarithmic base, b, their standard deviation s_b is converted to base e using s_ln = s_b · ln(b), and the formula for ĉv_raw remains the same.[9]) This estimate is sometimes referred to as the "geometric CV" (GCV)[10][11] in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood[12] as:

GCV_K = exp(s_ln) − 1
This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of cv itself.
For many practical purposes (such as sample size determination and calculation of confidence intervals) it is s_ln which is of most use in the context of log-normally distributed data. If necessary, s_ln can be derived from an estimate of ĉv_raw or GCV by inverting the corresponding formula.
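A sketch of the log-normal-based estimate, Kirkwood's GCV, and the inversion back to s_ln (assuming strictly positive, hypothetical data):

```python
import numpy as np

x = np.array([1.2, 0.8, 2.5, 1.6, 3.1, 0.9])   # hypothetical positive data
s_ln = np.std(np.log(x), ddof=1)               # SD of the natural-log-transformed data

cv_raw = np.sqrt(np.exp(s_ln**2) - 1)          # log-normal ("geometric") CV estimate
gcv_kirkwood = np.exp(s_ln) - 1                # Kirkwood's definition of the GCV

s_ln_back = np.sqrt(np.log(cv_raw**2 + 1))     # inverting the formula recovers s_ln
print(cv_raw, gcv_kirkwood, s_ln_back)
```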
The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation.
The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance[citation needed]. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the Root Mean Square Deviation (RMSD). While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range.
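A quick simulated check (a sketch, not a derivation) that an exponential distribution has CV ≈ 1, and hence SCV ≈ 1:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=3.0, size=100_000)   # mean and SD are both 3 in theory

cv = np.std(samples, ddof=1) / np.mean(samples)
scv = cv**2                                          # squared coefficient of variation
print(cv, scv)                                       # both close to 1
```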
In actuarial science, the CV is known as unitized risk.[13]
In industrial solids processing, the CV is particularly important for measuring the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification allows one to determine whether a sufficient degree of mixing has been reached.[14]
In fluid dynamics, the CV, also referred to as Percent RMS, %RMS, %RMS Uniformity, or Velocity RMS, is a useful determination of flow uniformity for industrial processes. The term is used widely in the design of pollution control equipment, such as electrostatic precipitators (ESPs),[15] selective catalytic reduction (SCR), scrubbers, and similar devices. The Institute of Clean Air Companies (ICAC) references RMS deviation of velocity in the design of fabric filters (ICAC document F-7).[16] The guiding principle is that many of these pollution control devices require "uniform flow" entering and through the control zone. This can be related to uniformity of velocity profile, temperature distribution, gas species (such as ammonia for an SCR, or activated carbon injection for mercury absorption), and other flow-related parameters. The Percent RMS also is used to assess flow uniformity in combustion systems, HVAC systems, ductwork, inlets to fans and filters, air handling units, etc. where performance of the equipment is influenced by the incoming flow distribution.
CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required.[17] It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples; in this case the standard error in percent is suggested to be superior.[18] If measurements do not have a natural zero point, then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended.[19]
The coefficient of variation fulfills the requirements for a measure of economic inequality.[20][21][22] If x (with entries x_i) is a list of the values of an economic indicator (e.g. wealth), with x_i being the wealth of agent i, then the following requirements are met: anonymity (cv is unchanged by reordering the list), scale invariance (cv(x) = cv(αx) for any α > 0), population independence (replicating the list leaves cv unchanged), and the Pigou–Dalton transfer principle (transferring wealth from a richer agent to a poorer one, without reversing their order, decreases cv).
cv assumes its minimum value of zero for complete equality (all x_i are equal).[22] Its most notable drawback is that it is not bounded from above, so it cannot be normalized to be within a fixed range (e.g. like the Gini coefficient, which is constrained to be between 0 and 1).[22] It is, however, more mathematically tractable than the Gini coefficient.
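A small sketch (with made-up wealth values) showing that cv is zero under complete equality and grows, with no fixed upper bound, as the distribution becomes more unequal:

```python
import numpy as np

def cv(x):
    x = np.asarray(x, dtype=float)
    return np.std(x) / np.mean(x)      # population CV of the wealth list

print(cv([10, 10, 10, 10]))            # 0.0   -- complete equality
print(cv([1, 1, 1, 37]))               # ~1.56 -- unequal
print(cv([1] * 99 + [10**6]))          # ~9.9  -- extreme concentration; not confined to [0, 1]
```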
Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts.[23][24] Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies.[25] Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation.[26] Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs.[27][28]
Comparing coefficients of variation between parameters using relative units can result in differences that may not be real. If we compare the same set of temperatures in Celsius and Fahrenheit (both relative units, where kelvin and the Rankine scale are their associated absolute values):
Celsius: [0, 10, 20, 30, 40]
Fahrenheit: [32, 50, 68, 86, 104]
The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%.
If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute.
Comparing the same data set, now in absolute units:
Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]
Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]
The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%.
Mathematically speaking, the coefficient of variation is not entirely linear. That is, for a random variable X, the coefficient of variation of aX + b is equal to the coefficient of variation of X only when b = 0. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form ax + b with b ≠ 0, whereas Kelvins can be converted to Rankines through a transformation of the form ax.
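A sketch reproducing the figures above (using sample standard deviations, as in the text):

```python
import numpy as np

def cv(x):
    return np.std(x, ddof=1) / np.mean(x)    # sample SD over mean

celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = celsius * 9 / 5 + 32            # ax + b with b != 0: CV changes
kelvin = celsius + 273.15                    # ratio scale with a true zero
rankine = kelvin * 9 / 5                     # ax with no offset: CV preserved

print(cv(celsius), cv(fahrenheit))           # ~0.79 vs ~0.42
print(cv(kelvin), cv(rankine))               # both ~0.0539
```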
Provided that negative and small positive values of the sample mean occur with negligible frequency, the probability distribution of the coefficient of variation for a sample of size n of i.i.d. normal random variables has been derived by Hendricks and Robey;[29] in their expression the summation runs over only even values of n − 1 − i, where i is the summation index, i.e., if n is odd the sum is over even values of i, and if n is even the sum is over odd values of i.
This is useful, for instance, in the construction of hypothesis tests or confidence intervals. Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation.[30][31][32][33][34][35]
Liu (2012) reviews methods for the construction of a confidence interval for the coefficient of variation.[36] Notably, Lehmann (1986) derived the sampling distribution of the coefficient of variation using a non-central t-distribution to give an exact method for the construction of the CI.[37]
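As a generic, simulation-based complement to the analytic approaches cited above (this sketch is neither McKay's approximation nor Lehmann's exact method), the sampling distribution of the CV for normal data can be examined by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 50.0, 5.0, 20                 # hypothetical parameters; true CV = sigma / mu = 0.10

samples = rng.normal(mu, sigma, size=(100_000, n))
cvs = samples.std(axis=1, ddof=1) / samples.mean(axis=1)

# Empirical 2.5th and 97.5th percentiles of the estimator for this (mu, sigma, n):
print(np.percentile(cvs, [2.5, 97.5]))
```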
Standardized moments are similar ratios, μ_k / σ^k, where μ_k is the kth moment about the mean; they are also dimensionless and scale invariant. The variance-to-mean ratio, σ² / μ, is another similar ratio, but it is not dimensionless, and hence not scale invariant. See Normalization (statistics) for further ratios.
In signal processing, particularly image processing, the reciprocal ratio μ / σ (or its square) is referred to as the signal-to-noise ratio in general and signal-to-noise ratio (imaging) in particular.
Other related ratios include: