Law of the unconscious statistician

Theorem in probability and statistics

In probability theory and statistics, the law of the unconscious statistician, or LOTUS, is a theorem which expresses the expected value of a function g(X) of a random variable X in terms of g and the probability distribution of X.

The form of the law depends on the type of random variable X in question. If the distribution of X is discrete and one knows its probability mass function pX, then the expected value of g(X) is

$$\operatorname{E}[g(X)]=\sum_{x}g(x)\,p_{X}(x),$$

where the sum is over all possible values x of X. If instead the distribution of X is continuous with probability density function fX, then the expected value of g(X) is

$$\operatorname{E}[g(X)]=\int_{-\infty}^{\infty}g(x)\,f_{X}(x)\,\mathrm{d}x.$$
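As a concrete numerical illustration, the following Python sketch checks both formulas against direct Monte Carlo averages of g(X); the choices g(x) = x², a fair die for the discrete case, a standard normal for the continuous case, and the NumPy/SciPy tooling are all assumptions of the example rather than anything prescribed by the law itself.

```python
import numpy as np
from scipy import integrate, stats

g = lambda x: x ** 2  # illustrative choice of g

# Discrete case: X uniform on {1, ..., 6} (a fair die, chosen only for illustration).
values = np.arange(1, 7)
pmf = np.full(6, 1 / 6)
lotus_discrete = np.sum(g(values) * pmf)            # sum_x g(x) p_X(x)

# Continuous case: X standard normal (again an illustrative choice).
f = stats.norm.pdf
lotus_continuous, _ = integrate.quad(lambda x: g(x) * f(x), -np.inf, np.inf)

# Monte Carlo estimates of E[g(X)] computed directly from samples of X,
# which should agree with the LOTUS values up to sampling error.
rng = np.random.default_rng(0)
mc_discrete = g(rng.integers(1, 7, size=100_000)).mean()
mc_continuous = g(rng.standard_normal(100_000)).mean()

print(lotus_discrete, mc_discrete)      # about 15.17 in both cases
print(lotus_continuous, mc_continuous)  # about 1.0 in both cases
```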

Both of these special cases can be expressed in terms of the cumulative probability distribution function FX of X, with the expected value of g(X) now given by the Lebesgue–Stieltjes integral

$$\operatorname{E}[g(X)]=\int_{-\infty}^{\infty}g(x)\,\mathrm{d}F_{X}(x).$$
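Numerically, the Lebesgue–Stieltjes form can be approximated by a Riemann–Stieltjes sum over a fine grid, summing g(xi) against the increments of FX. The sketch below does this for the same hypothetical g(x) = x² and standard normal X as above; both choices are assumptions of the example.

```python
import numpy as np
from scipy import stats

g = lambda x: x ** 2      # illustrative g, as above
F = stats.norm.cdf        # cumulative distribution function of a standard normal

# Riemann–Stieltjes sum: sum_i g(x_i) * (F(x_{i+1}) - F(x_i)) over a fine grid.
x = np.linspace(-10, 10, 200_001)
stieltjes_sum = np.sum(g(x[:-1]) * np.diff(F(x)))

print(stieltjes_sum)  # close to E[X^2] = 1 for a standard normal
```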

In even greater generality, X could be a random element in any measurable space, in which case the law is given in terms of measure theory and the Lebesgue integral. In this setting, there is no need to restrict the context to probability measures, and the law becomes a general theorem of mathematical analysis on Lebesgue integration relative to a pushforward measure.

Etymology


This proposition is (sometimes) known as the law of the unconscious statistician because of a purported tendency to think of the aforementioned law as the very definition of the expected value of a function g(X) of a random variable X, rather than (more formally) as a consequence of the true definition of expected value.[1] The naming is sometimes attributed to Sheldon Ross' textbook Introduction to Probability Models, although he removed the reference in later editions.[2] Many statistics textbooks do present the result as the definition of expected value.[3]

Joint distributions


A similar property holds for joint distributions, or equivalently, for random vectors. For discrete random variables X and Y, a function of two variables g, and joint probability mass function pX,Y(x, y):[4]

$$\operatorname{E}[g(X,Y)]=\sum_{y}\sum_{x}g(x,y)\,p_{X,Y}(x,y).$$

In the absolutely continuous case, with fX,Y(x, y) being the joint probability density function,

$$\operatorname{E}[g(X,Y)]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}g(x,y)\,f_{X,Y}(x,y)\,\mathrm{d}x\,\mathrm{d}y.$$
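For a small worked example (the joint pmf, the choice g(x, y) = xy, and the sampling check below are all hypothetical), the double sum can be evaluated directly and compared with a Monte Carlo average of g(X, Y):

```python
import numpy as np

# Hypothetical joint pmf of (X, Y) on {0, 1} x {0, 1, 2}; rows indexed by x, columns by y.
x_vals = np.array([0, 1])
y_vals = np.array([0, 1, 2])
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.15, 0.25, 0.20]])   # entries sum to 1

g = lambda x, y: x * y                  # illustrative g(x, y)

# LOTUS for joint distributions: sum_y sum_x g(x, y) p_{X,Y}(x, y).
X, Y = np.meshgrid(x_vals, y_vals, indexing="ij")
lotus = np.sum(g(X, Y) * p_xy)

# Check against a direct Monte Carlo average of g(X, Y) under the joint pmf.
rng = np.random.default_rng(0)
flat = rng.choice(p_xy.size, size=200_000, p=p_xy.ravel())
xi, yi = np.unravel_index(flat, p_xy.shape)
mc = g(x_vals[xi], y_vals[yi]).mean()

print(lotus, mc)  # both near 0.65
```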

Special cases


A number of special cases are given here. In the simplest case, where the random variable X takes on countably many values (so that its distribution is discrete), the proof is particularly simple, and holds without modification if X is a discrete random vector or even a discrete random element.

The case of a continuous random variable is more subtle, since a fully general proof requires delicate forms of the change-of-variables formula for integration. However, in the framework of measure theory, the discrete case generalizes straightforwardly to general (not necessarily discrete) random elements, and the case of a continuous random variable is then a special case by making use of the Radon–Nikodym theorem.

Discrete case


Suppose that X is a random variable which takes on only finitely or countably many different values x1, x2, ..., with probabilities p1, p2, .... Then for any function g of these values, the random variable g(X) has values g(x1), g(x2), ..., although some of these may coincide with each other. For example, this is the case if X can take on both values 1 and −1 and g(x) = x².

Let y1, y2, ... enumerate the possible distinct values of g(X), and for each i let Ii denote the collection of all j with g(xj) = yi. Then, according to the definition of expected value,

$$\operatorname{E}[g(X)]=\sum_{i}y_{i}\,p_{g(X)}(y_{i}).$$

Since a single yi can be the image of multiple distinct xj, it holds that

$$p_{g(X)}(y_{i})=\sum_{j\in I_{i}}p_{X}(x_{j}).$$

The expected value can then be rewritten as

$$\sum_{i}y_{i}\,p_{g(X)}(y_{i})=\sum_{i}y_{i}\sum_{j\in I_{i}}p_{X}(x_{j})=\sum_{i}\sum_{j\in I_{i}}g(x_{j})\,p_{X}(x_{j})=\sum_{x}g(x)\,p_{X}(x).$$

This equality relates the average of the values of g(X), weighted by the probabilities of those values themselves, to the same average weighted instead by the probabilities of the underlying values of X.
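The regrouping above can be made concrete with a tiny example (the pmf and the choice g(x) = x² below are hypothetical): computing E[g(X)] from the pmf of g(X), as in the definition, and computing it directly from the pmf of X, as in LOTUS, give the same number.

```python
from collections import defaultdict

# Hypothetical discrete distribution in which g(x) = x^2 maps distinct x's to the same y.
p_X = {-2: 0.1, -1: 0.2, 1: 0.3, 2: 0.4}
g = lambda x: x ** 2

# Definition of E[g(X)]: build the pmf of Y = g(X) by grouping the x's with a common
# image (the index sets I_i from the proof), then sum y * p_Y(y) over distinct y.
p_Y = defaultdict(float)
for x, p in p_X.items():
    p_Y[g(x)] += p
by_definition = sum(y * p for y, p in p_Y.items())

# LOTUS: sum g(x) * p_X(x) directly, without ever constructing p_Y.
by_lotus = sum(g(x) * p for x, p in p_X.items())

print(by_definition, by_lotus)  # both 2.5
```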

If X takes on only finitely many possible values, the above is fully rigorous. However, if X takes on countably many values, the last equality given does not always hold, as seen from the Riemann series theorem. Because of this, it is necessary to assume the absolute convergence of the sums in question.[5]

Continuous case


Suppose that X is a random variable whose distribution has a continuous density f. If g is a general function, then the probability that g(X) is valued in a set of real numbers K equals the probability that X is valued in g−1(K), which is given by

$$\int_{g^{-1}(K)}f(x)\,\mathrm{d}x.$$

Under various conditions on g, the change-of-variables formula for integration can be applied to relate this to an integral over K, and hence to identify the density of g(X) in terms of the density of X. In the simplest case, if g is differentiable with nowhere-vanishing derivative, then the above integral can be written as

$$\int_{K}f(g^{-1}(y))\,(g^{-1})'(y)\,\mathrm{d}y,$$

thereby identifying g(X) as possessing the density f(g−1(y))(g−1)′(y). The expected value of g(X) is then identified as

$$\int_{-\infty}^{\infty}y\,f(g^{-1}(y))\,(g^{-1})'(y)\,\mathrm{d}y=\int_{-\infty}^{\infty}g(x)\,f(x)\,\mathrm{d}x,$$

where the equality follows by another use of the change-of-variables formula for integration. This shows that the expected value of g(X) is encoded entirely by the function g and the density f of X.[6]
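The two integrals can be compared numerically for an invertible g; in the sketch below, X is taken to be standard normal and g(x) = eˣ, both purely illustrative choices, so that g is increasing with nowhere-vanishing derivative and the density of g(X) is given by the change-of-variables formula.

```python
import numpy as np
from scipy import integrate, stats

f = stats.norm.pdf               # density of X: standard normal (illustrative choice)
g = np.exp                       # illustrative increasing g with nonvanishing derivative
g_inv = np.log
g_inv_prime = lambda y: 1.0 / y  # (g^{-1})'(y)

# Density of Y = g(X) from the change-of-variables formula.
density_Y = lambda y: f(g_inv(y)) * g_inv_prime(y)

# E[g(X)] computed two ways: from the density of Y, and directly via LOTUS.
from_density_of_Y, _ = integrate.quad(lambda y: y * density_Y(y), 0, np.inf)
via_lotus, _ = integrate.quad(lambda x: g(x) * f(x), -np.inf, np.inf)

print(from_density_of_Y, via_lotus)  # both approximately exp(0.5) ≈ 1.6487
```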

The assumption that g is differentiable with nonvanishing derivative, which is necessary for applying the usual change-of-variables formula, excludes many typical cases, such as g(x) = x². The result still holds true in these broader settings, although the proof requires more sophisticated results from mathematical analysis such as Sard's theorem and the coarea formula. In even greater generality, using the Lebesgue theory as below, it can be found that the identity

$$\operatorname{E}[g(X)]=\int_{-\infty}^{\infty}g(x)\,f(x)\,\mathrm{d}x$$

holds true whenever X has a density f (which does not have to be continuous) and whenever g is a measurable function for which g(X) has finite expected value. (Every continuous function is measurable.) Furthermore, without modification to the proof, this holds even if X is a random vector (with density) and g is a multivariable function; the integral is then taken over the multi-dimensional range of values of X.
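For instance, with the non-monotone g(x) = x² mentioned above (and, purely for illustration, a standard normal X), the identity can still be checked: integrating g against the density of X agrees with computing the mean of g(X) from the known density of X², which in this case is chi-squared with one degree of freedom.

```python
import numpy as np
from scipy import integrate, stats

f = stats.norm.pdf      # X standard normal (illustrative choice)
g = lambda x: x ** 2    # the non-monotone g excluded by the simple argument

# LOTUS: integrate g(x) f(x) directly against the density of X.
lotus, _ = integrate.quad(lambda x: g(x) * f(x), -np.inf, np.inf)

# For comparison, Y = X^2 is chi-squared with 1 degree of freedom here, so E[Y]
# can also be computed from the density of Y itself.
from_density_of_Y, _ = integrate.quad(lambda y: y * stats.chi2.pdf(y, df=1), 0, np.inf)

print(lotus, from_density_of_Y)  # both approximately 1.0
```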

Measure-theoretic formulation


An abstract and general form of the result is available using the framework of measure theory and the Lebesgue integral. Here, the setting is that of a measure space (Ω, μ) and a measurable map X from Ω to a measurable space Ω′. The theorem then says that for any measurable function g on Ω′ which is valued in real numbers (or even the extended real number line),

$$\int_{\Omega}g\circ X\,\mathrm{d}\mu=\int_{\Omega'}g\,\mathrm{d}(X_{\sharp}\mu),$$

interpreted as saying, in particular, that either side of the equality exists if the other side exists. Here X♯μ denotes the pushforward measure on Ω′. The 'discrete case' given above is the special case arising when X takes on only countably many values and μ is a probability measure. In fact, the discrete case (although without the restriction to probability measures) is the first step in proving the general measure-theoretic formulation, as the general version follows therefrom by an application of the monotone convergence theorem.[7] Without any major changes, the result can also be formulated in the setting of outer measures.[8]
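On a finite measure space the statement can be verified by direct computation; the toy space, the map X, the function g, and the weights in the sketch below are all hypothetical, and μ is deliberately not a probability measure.

```python
from collections import defaultdict

# Toy finite measure space (Omega, mu) and a hypothetical map X: Omega -> Omega'.
mu = {"a": 0.5, "b": 1.5, "c": 2.0, "d": 1.0}   # measure of each point of Omega (total != 1)
X = {"a": 0, "b": 1, "c": 1, "d": 2}            # measurable map into Omega' = {0, 1, 2}
g = lambda w: w ** 2 + 1                        # real-valued function on Omega'

# Left-hand side: integral of g∘X over Omega with respect to mu.
lhs = sum(g(X[w]) * mu[w] for w in mu)

# Right-hand side: build the pushforward measure X_#mu on Omega', then integrate g against it.
pushforward = defaultdict(float)
for w, m in mu.items():
    pushforward[X[w]] += m
rhs = sum(g(wp) * m for wp, m in pushforward.items())

print(lhs, rhs)  # both 12.5
```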

If μ is a σ-finite measure, the theory of the Radon–Nikodym derivative is applicable. In the special case that the measure X♯μ is absolutely continuous relative to some background σ-finite measure ν on Ω′, there is a real-valued function fX on Ω′ representing the Radon–Nikodym derivative of the two measures, and then

$$\int_{\Omega'}g\,\mathrm{d}(X_{\sharp}\mu)=\int_{\Omega'}g\,f_{X}\,\mathrm{d}\nu.$$

In the further special case that Ω′ is the real number line, as in the contexts discussed above, it is natural to take ν to be the Lebesgue measure, and this then recovers the 'continuous case' given above whenever μ is a probability measure. (In this special case, the condition of σ-finiteness is vacuous, since Lebesgue measure and every probability measure are trivially σ-finite.)[9]

References

  1. DeGroot & Schervish 2014, pp. 213–214.
  2. Casella & Berger 2001, Section 2.2; Ross 2019.
  3. Casella & Berger 2001, Section 2.2.
  4. Ross 2019.
  5. Feller 1968, Section IX.2.
  6. Papoulis & Pillai 2002, Chapter 5.
  7. Bogachev 2007, Section 3.6; Cohn 2013, Section 2.6; Halmos 1950, Section 39.
  8. Federer 1969, Section 2.4.
  9. Halmos 1950, Section 39.