
Marginal distribution

From Wikipedia, the free encyclopedia

In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables. This contrasts with a conditional distribution, which gives the probabilities contingent upon the values of the other variables.

Marginal variables are those variables in the subset of variables being retained. These concepts are "marginal" because they can be found by summing values in a table along rows or columns, and writing the sum in the margins of the table.[1] The distribution of the marginal variables (the marginal distribution) is obtained by marginalizing (that is, focusing on the sums in the margin) over the distribution of the variables being discarded, and the discarded variables are said to have been marginalized out.

The context here is that the theoretical studies being undertaken, or the data analysis being done, involves a wider set of random variables, but attention is being limited to a reduced number of those variables. In many applications, an analysis may start with a given collection of random variables, then first extend the set by defining new ones (such as the sum of the original random variables), and finally reduce the number by placing interest in the marginal distribution of a subset (such as the sum). Several different analyses may be done, each focusing on the marginal distribution of a different subset of variables.

Definition


Marginal probability mass function


Given a known joint distribution of two discrete random variables, say X and Y, the marginal distribution of either variable – X for example – is the probability distribution of X when the values of Y are not taken into consideration. This can be calculated by summing the joint probability distribution over all values of Y. Naturally, the converse is also true: the marginal distribution can be obtained for Y by summing over the separate values of X.

p_X(x_i) = \sum_j p(x_i, y_j), \quad \text{and} \quad p_Y(y_j) = \sum_i p(x_i, y_j)

             x1       x2       x3       x4    |  pY(y) ↓
  y1        4/32     2/32     1/32     1/32   |   8/32
  y2        3/32     6/32     3/32     3/32   |  15/32
  y3        9/32      0        0        0     |   9/32
  pX(x) →  16/32     8/32     4/32     4/32   |  32/32

Joint and marginal distributions of a pair of discrete random variables, X and Y, which are dependent and thus have nonzero mutual information I(X; Y). The values of the joint distribution are in the 3×4 rectangle; the values of the marginal distributions are along the right and bottom margins.
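To make the margin-sum bookkeeping concrete, here is a minimal Python sketch (the table above hard-coded as a NumPy array; the use of NumPy is an assumption, not part of the source) that recovers both marginal distributions:

    import numpy as np

    # Joint distribution p(x, y) from the table above; rows are y1..y3, columns are x1..x4.
    joint = np.array([[4, 2, 1, 1],
                      [3, 6, 3, 3],
                      [9, 0, 0, 0]]) / 32

    p_X = joint.sum(axis=0)  # sum down each column: marginal distribution of X
    p_Y = joint.sum(axis=1)  # sum along each row: marginal distribution of Y

    print(p_X * 32)  # [16.  8.  4.  4.]  i.e. 16/32, 8/32, 4/32, 4/32
    print(p_Y * 32)  # [ 8. 15.  9.]      i.e.  8/32, 15/32, 9/32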

A marginal probability can always be written as an expected value:

p_X(x) = \int_y p_{X \mid Y}(x \mid y) \, p_Y(y) \, \mathrm{d}y = \operatorname{E}_Y[p_{X \mid Y}(x \mid Y)].

Intuitively, the marginal probability of X is computed by examining the conditional probability of X given a particular value of Y, and then averaging this conditional probability over the distribution of all values of Y.

This follows from the definition of expected value (after applying the law of the unconscious statistician):

\operatorname{E}_Y[f(Y)] = \int_y f(y) \, p_Y(y) \, \mathrm{d}y.

Therefore, marginalization provides the rule for transforming the probability distribution of a random variable Y into that of the derived random variable X = g(Y):

p_X(x) = \int_y p_{X \mid Y}(x \mid y) \, p_Y(y) \, \mathrm{d}y = \int_y \delta\big(x - g(y)\big) \, p_Y(y) \, \mathrm{d}y.
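A minimal numerical check of the expected-value identity, under assumed distributions that are not from the source (Y standard normal, and X given Y = y normal with mean y and unit variance, so that marginally X ~ N(0, 2)):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    y = rng.standard_normal(100_000)   # draws of Y ~ N(0, 1)

    x = 0.5                            # point at which to evaluate the marginal density
    # Monte Carlo estimate of E_Y[p_{X|Y}(x | Y)], with X | Y = y ~ N(y, 1)
    p_mc = norm.pdf(x, loc=y, scale=1).mean()
    p_exact = norm.pdf(x, loc=0, scale=np.sqrt(2))  # exact marginal: X ~ N(0, 2)

    print(p_mc, p_exact)               # the two values agree closely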

Marginal probability density function


Given two continuous random variables X and Y whose joint distribution is known, the marginal probability density function can be obtained by integrating the joint probability density, f, over Y, and vice versa. That is,

f_X(x) = \int_c^d f(x, y) \, dy
f_Y(y) = \int_a^b f(x, y) \, dx

where x \in [a, b] and y \in [c, d].
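As a sketch of how this integration might be carried out numerically, assume a hypothetical joint density f(x, y) = x + y on the unit square [0, 1] × [0, 1] (an illustrative choice, not from the source); its marginal is f_X(x) = x + 1/2 in closed form:

    from scipy.integrate import quad

    # Hypothetical joint density on [0, 1] x [0, 1]
    def f(x, y):
        return x + y

    def f_X(x):
        # Marginal density of X: integrate the joint density over y in [c, d] = [0, 1]
        value, _abserr = quad(lambda y: f(x, y), 0, 1)
        return value

    print(f_X(0.3))  # 0.8, matching the closed form x + 1/2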

Marginal cumulative distribution function


Finding the marginal cumulative distribution function from the joint cumulative distribution function is easy.

If X and Y jointly take values on [a, b] × [c, d], then

F_X(x) = F(x, d) \quad \text{and} \quad F_Y(y) = F(b, y)

If d is ∞, then this becomes a limit: F_X(x) = \lim_{y \to \infty} F(x, y). Likewise for F_Y(y).
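For instance (an illustrative case, not from the source), if X and Y are independent and uniform on [0, 1], the joint CDF on [0, 1] × [0, 1] is F(x, y) = xy, so F_X(x) = F(x, 1) = x and F_Y(y) = F(1, y) = y, recovering the two uniform CDFs.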

Marginal distribution vs. conditional distribution


Definition


The marginal probability is the probability of a single event occurring, independent of other events. A conditional probability, on the other hand, is the probability that an event occurs given that another specific event has already occurred. This means that the calculation for one variable is dependent on another variable.[2]

The conditional distribution of a variable given another variable is the joint distribution of both variables divided by the marginal distribution of the other variable.[3] That is,

p_{Y \mid X}(y \mid x) = \frac{p_{X,Y}(x, y)}{p_X(x)}.

Example


Suppose there is data from a classroom of 200 students on the amount of time studied (X) and the percentage of correct answers (Y).[4] Assuming that X and Y are discrete random variables, the joint distribution of X and Y can be described by listing all the possible values of p(x_i, y_j), as shown in the table below.

                       Time studied (minutes)
  % correct      x1 (0-20)   x2 (21-40)   x3 (41-60)   x4 (>60)   |  pY(y) ↓
  y1 (0-20)        2/200         0            0          8/200    |  10/200
  y2 (21-40)      10/200       2/200        8/200          0      |  20/200
  y3 (41-59)       2/200       4/200       32/200       32/200    |  70/200
  y4 (60-79)         0        20/200       30/200       10/200    |  60/200
  y5 (80-100)        0         4/200       16/200       20/200    |  40/200
  pX(x) →         14/200      30/200       86/200       70/200    |     1

Two-way table of the relationship between the amount of time studied and the percentage correct in a classroom of 200 students

The marginal distribution can be used to determine how many students scored 20 or below:

p_Y(y_1) = P(Y = y_1) = \sum_{i=1}^{4} p(x_i, y_1) = \frac{2}{200} + \frac{8}{200} = \frac{10}{200},

meaning 10 students, or 5%.

The conditional distribution can be used to determine the probability that a student who studied 60 minutes or more obtains a score of 20 or below:

p_{Y \mid X}(y_1 \mid x_4) = P(Y = y_1 \mid X = x_4) = \frac{P(X = x_4, Y = y_1)}{P(X = x_4)} = \frac{8/200}{70/200} = \frac{8}{70} = \frac{4}{35},

meaning there is about an 11% probability of scoring 20 or below after having studied for at least 60 minutes.
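Both numbers drop out of the joint table mechanically; a short Python sketch with the table above hard-coded (NumPy assumed, as before):

    import numpy as np

    # Joint distribution from the classroom table; rows are y1..y5, columns are x1..x4.
    joint = np.array([[ 2,  0,  0,  8],
                      [10,  2,  8,  0],
                      [ 2,  4, 32, 32],
                      [ 0, 20, 30, 10],
                      [ 0,  4, 16, 20]]) / 200

    p_y1 = joint[0].sum()               # marginal: P(Y = y1) = 10/200 = 0.05
    p_x4 = joint[:, 3].sum()            # marginal: P(X = x4) = 70/200
    p_y1_given_x4 = joint[0, 3] / p_x4  # conditional: (8/200) / (70/200) = 4/35

    print(p_y1, p_y1_given_x4)          # 0.05  0.11428...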

Real-world example


Suppose that the probability that a pedestrian will be hit by a car while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}.

Realistically, H will be dependent on L. That is, P(H = Hit) will take different values depending on whether L is red, yellow, or green (and likewise for P(H = Not Hit)). A person is, for example, far more likely to be hit by a car when trying to cross while the lights for perpendicular traffic are green than when they are red. In other words, for any given possible pair of values for H and L, one must consider the joint probability distribution of H and L to find the probability of that pair of events occurring together if the pedestrian ignores the state of the light.

However, in trying to calculate the marginal probability P(H = Hit), what is being sought is the probability that H = Hit in the situation in which the particular value of L is unknown and in which the pedestrian ignores the state of the light. In general, a pedestrian can be hit if the lights are red OR if the lights are yellow OR if the lights are green. So, the answer for the marginal probability can be found by summing P(H | L) for all possible values of L, with each value of L weighted by its probability of occurring.

Here is a table showing the conditional probabilities of being hit, depending on the state of the lights. (Note that the columns in this table must add up to 1 because the probability of being hit or not hit is 1 regardless of the state of the light.)

Conditional distribution: P(H | L)

             Red    Yellow    Green
  Not Hit    0.99    0.9       0.2
  Hit        0.01    0.1       0.8

To find the joint probability distribution, more data is required. For example, suppose P(L = red) = 0.2, P(L = yellow) = 0.1, and P(L = green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2×3 block of entries. (Note that the cells in this 2×3 block add up to 1).

Joint distribution: P(H, L)

             Red    Yellow    Green   |  Marginal probability P(H)
  Not Hit   0.198    0.09     0.14    |   0.428
  Hit       0.002    0.01     0.56    |   0.572
  Total      0.2     0.1      0.7     |     1

The marginal probability P(H = Hit) is the sum 0.572 along the H = Hit row of this joint distribution table, as this is the probability of being hit when the lights are red OR yellow OR green. Similarly, the marginal probability P(H = Not Hit) is the sum along the H = Not Hit row.
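The whole construction, scaling each column of the conditional table by P(L) and then summing along rows, takes only a few lines; a sketch with the numbers above:

    import numpy as np

    # Conditional distribution P(H | L); rows: Not Hit, Hit; columns: Red, Yellow, Green.
    p_H_given_L = np.array([[0.99, 0.9, 0.2],
                            [0.01, 0.1, 0.8]])
    p_L = np.array([0.2, 0.1, 0.7])  # P(L) for Red, Yellow, Green

    p_joint = p_H_given_L * p_L      # P(H, L): each column scaled by P(L)
    p_H = p_joint.sum(axis=1)        # marginalize L out

    print(p_joint)                   # [[0.198 0.09  0.14 ]
                                     #  [0.002 0.01  0.56 ]]
    print(p_H)                       # [0.428 0.572]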

Multivariate distributions

Figure: Many samples from a bivariate normal distribution. The marginal distributions are shown in red and blue. The marginal distribution of X is also approximated by creating a histogram of the X coordinates without consideration of the Y coordinates.

For multivariate distributions, formulae similar to those above apply, with the symbols X and/or Y being interpreted as vectors. In particular, each summation or integration would be over all variables except those contained in X.[5]

That means that if X_1, X_2, …, X_n are discrete random variables, then the marginal probability mass function is

p_{X_i}(k) = \sum p(x_1, x_2, \dots, x_{i-1}, k, x_{i+1}, \dots, x_n),

where the sum is taken over all possible values of the variables other than X_i; if X_1, X_2, …, X_n are continuous random variables, then the marginal probability density function is

f_{X_i}(x_i) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, x_2, \dots, x_n) \, dx_1 \, dx_2 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_n.
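In code, marginalizing a joint pmf stored as an n-dimensional array amounts to summing over every axis except the one being kept; a sketch with an arbitrary made-up three-variable pmf (purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # A made-up joint pmf over three discrete variables with 2, 3, and 4 outcomes.
    joint = rng.random((2, 3, 4))
    joint /= joint.sum()          # normalize so all entries sum to 1

    # Marginal pmf of the second variable (axis 1): sum out axes 0 and 2.
    p_X2 = joint.sum(axis=(0, 2))

    print(p_X2, p_X2.sum())       # a length-3 pmf that sums to 1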


References

  1. ^ Trumpler, Robert J.; Weaver, Harold F. (1962). Statistical Astronomy. Dover Publications. pp. 32–33. ISBN 9780520347533.
  2. ^ "Marginal & Conditional Probability Distributions: Definition & Examples". Study.com. Retrieved 2019-11-16.
  3. ^ "Exam P [FSU Math]". www.math.fsu.edu. Retrieved 2019-11-16.
  4. ^ Marginal and conditional distributions. Retrieved 2019-11-16.
  5. ^ Dekking, Michel; et al. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London: Springer. ISBN 9781852338961. OCLC 262680588.
