Joint probability distribution

From Wikipedia, the free encyclopedia

Many sample observations (black) are shown from a joint probability distribution. The marginal densities are shown as well (in blue and in red).

Given random variables $X, Y, \ldots$ that are defined on the same[1] probability space, the multivariate or joint probability distribution for $X, Y, \ldots$ is a probability distribution that gives the probability that each of $X, Y, \ldots$ falls in any particular range or discrete set of values specified for that variable. In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables.

The joint probability distribution can be expressed in terms of a joint cumulative distribution function and either a joint probability density function (in the case of continuous variables) or a joint probability mass function (in the case of discrete variables). These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables.

Examples


Draws from an urn


Each of two urns contains twice as many red balls as blue balls, and no others, and one ball is randomly selected from each urn, with the two draws independent of each other. Let $A$ and $B$ be discrete random variables associated with the outcomes of the draw from the first urn and second urn respectively. The probability of drawing a red ball from either of the urns is 2/3, and the probability of drawing a blue ball is 1/3. The joint probability distribution is presented in the following table:

            A = Red             A = Blue            P(B)
B = Red     (2/3)(2/3) = 4/9    (1/3)(2/3) = 2/9    4/9 + 2/9 = 2/3
B = Blue    (2/3)(1/3) = 2/9    (1/3)(1/3) = 1/9    2/9 + 1/9 = 1/3
P(A)        4/9 + 2/9 = 2/3     2/9 + 1/9 = 1/3

Each of the four inner cells shows the probability of a particular combination of results from the two draws; these probabilities are the joint distribution. In any one cell the probability of a particular combination occurring is (since the draws are independent) the product of the probability of the specified result for A and the probability of the specified result for B. The probabilities in these four cells sum to 1, as with all probability distributions.

Moreover, the final row and the final column give the marginal probability distribution for A and the marginal probability distribution for B respectively. For example, for A the first of these cells gives the sum of the probabilities for A being red, regardless of which possibility for B in the column above the cell occurs, as 2/3. Thus the marginal probability distribution for $A$ gives $A$'s probabilities unconditional on $B$, in a margin of the table.
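
For illustration, a minimal Python sketch (assuming NumPy; the array names are arbitrary) reproduces this table and its margins:

```python
import numpy as np

# Urn example: the marginal probabilities of each draw.
p_A = np.array([2/3, 1/3])          # P(A = Red), P(A = Blue)
p_B = np.array([2/3, 1/3])          # P(B = Red), P(B = Blue)

# The draws are independent, so the joint pmf is the outer product of the marginals.
joint = np.outer(p_B, p_A)          # rows: B, columns: A

print(joint)                        # [[4/9, 2/9], [2/9, 1/9]]
print(joint.sum())                  # 1.0 -- the four cells sum to one
print(joint.sum(axis=0))            # marginal of A: [2/3, 1/3]
print(joint.sum(axis=1))            # marginal of B: [2/3, 1/3]
```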

Coin flips


Consider the flip of two fair coins; let $A$ and $B$ be discrete random variables associated with the outcomes of the first and second coin flips respectively. Each coin flip is a Bernoulli trial and has a Bernoulli distribution. If a coin displays "heads" then the associated random variable takes the value 1, and it takes the value 0 otherwise. The probability of each of these outcomes is 1/2, so the marginal (unconditional) density functions are

$$\begin{aligned} P(A) &= 1/2 \quad \text{for} \quad A \in \{0,1\}; \\ P(B) &= 1/2 \quad \text{for} \quad B \in \{0,1\}. \end{aligned}$$

The joint probability mass function of $A$ and $B$ defines probabilities for each pair of outcomes. All possible outcomes are

$$(A,B) \in \{(0,0),\,(0,1),\,(1,0),\,(1,1)\}.$$

Since each outcome is equally likely, the joint probability mass function becomes

$$P(A,B) = 1/4 \quad \text{for} \quad A, B \in \{0,1\}.$$

Since the coin flips are independent, the joint probability mass function is the product of the marginals:

$$P(A,B) = P(A)\,P(B) \quad \text{for} \quad A, B \in \{0,1\}.$$
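
A short simulation sketch (assuming NumPy; the sample size and seed are arbitrary) illustrates how the empirical joint frequencies approach the product of the marginals:

```python
import numpy as np

# Simulate two independent fair coin flips many times and compare the
# empirical joint pmf with the product of the empirical marginals.
rng = np.random.default_rng(0)
n = 100_000
A = rng.integers(0, 2, size=n)      # first coin: 0 = tails, 1 = heads
B = rng.integers(0, 2, size=n)      # second coin, drawn independently

for a in (0, 1):
    for b in (0, 1):
        joint = np.mean((A == a) & (B == b))
        product = np.mean(A == a) * np.mean(B == b)
        print(f"P(A={a}, B={b}) ~ {joint:.3f}, P(A={a})P(B={b}) ~ {product:.3f}")
# Both columns are close to 1/4, consistent with independence.
```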

Rolling a die


Consider the roll of a fair die and let $A = 1$ if the number is even (i.e. 2, 4, or 6) and $A = 0$ otherwise. Furthermore, let $B = 1$ if the number is prime (i.e. 2, 3, or 5) and $B = 0$ otherwise.

      1   2   3   4   5   6
A     0   1   0   1   0   1
B     0   1   1   0   1   0

Then, the joint distribution of $A$ and $B$, expressed as a probability mass function, is

$$\begin{aligned} \mathrm{P}(A=0, B=0) &= P\{1\} = \tfrac{1}{6}, & \mathrm{P}(A=1, B=0) &= P\{4,6\} = \tfrac{2}{6}, \\ \mathrm{P}(A=0, B=1) &= P\{3,5\} = \tfrac{2}{6}, & \mathrm{P}(A=1, B=1) &= P\{2\} = \tfrac{1}{6}. \end{aligned}$$

These probabilities necessarily sum to 1, since the probability of some combination of $A$ and $B$ occurring is 1.
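
A small enumeration sketch in Python (standard library only) reproduces these values exactly:

```python
from collections import Counter
from fractions import Fraction

# Enumerate the six equally likely die outcomes and tabulate the exact
# joint pmf of A (even indicator) and B (prime indicator).
counts = Counter()
for roll in range(1, 7):
    A = int(roll % 2 == 0)          # 1 if the roll is even
    B = int(roll in {2, 3, 5})      # 1 if the roll is prime
    counts[(A, B)] += 1

joint = {ab: Fraction(c, 6) for ab, c in counts.items()}
print(joint)                        # {(0,0): 1/6, (1,1): 1/6, (0,1): 1/3, (1,0): 1/3}
print(sum(joint.values()))          # 1
```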

Marginal probability distribution

Main article: Marginal distribution

If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables.

If the joint probability density function of random variables X and Y is $f_{X,Y}(x,y)$, the marginal probability density functions of X and Y, which define the marginal distributions, are given by:

$$\begin{aligned} f_X(x) &= \int f_{X,Y}(x,y)\;dy, \\ f_Y(y) &= \int f_{X,Y}(x,y)\;dx, \end{aligned}$$

where the first integral is over all points in the range of (X,Y) for which X=x and the second integral is over all points in the range of (X,Y) for which Y=y.[2]
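
As a rough numerical illustration (assuming NumPy; the uncorrelated bivariate standard normal density is an arbitrary choice of joint density), the marginal of X can be approximated by summing the joint density over a grid in y:

```python
import numpy as np

# Evaluate a joint density on a grid and marginalize out y with a Riemann sum.
x = np.linspace(-5, 5, 401)
y = np.linspace(-5, 5, 401)
X, Y = np.meshgrid(x, y, indexing="ij")
f_xy = np.exp(-(X**2 + Y**2) / 2) / (2 * np.pi)   # joint density f_{X,Y}(x, y)

dy = y[1] - y[0]
f_x = f_xy.sum(axis=1) * dy                       # f_X(x) = integral of f_{X,Y}(x, y) dy

# Compare with the exact standard normal marginal.
exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(f_x - exact)))                # small numerical error
```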

Joint cumulative distribution function


For a pair of random variables $X, Y$, the joint cumulative distribution function (CDF) $F_{X,Y}$ is given by[3]: 89

$$F_{X,Y}(x,y) = \operatorname{P}(X \leq x,\, Y \leq y)$$   (Eq. 1)

where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$ and that $Y$ takes on a value less than or equal to $y$.

For $N$ random variables $X_1, \ldots, X_N$, the joint CDF $F_{X_1,\ldots,X_N}$ is given by

$$F_{X_1,\ldots,X_N}(x_1,\ldots,x_N) = \operatorname{P}(X_1 \leq x_1, \ldots, X_N \leq x_N)$$   (Eq. 2)

Interpreting the $N$ random variables as a random vector $\mathbf{X} = (X_1, \ldots, X_N)^T$ yields a shorter notation:

$$F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \leq x_1, \ldots, X_N \leq x_N)$$
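
A minimal sketch (assuming NumPy; the correlated normal pair and the evaluation point are arbitrary choices) estimates a joint CDF value empirically from samples:

```python
import numpy as np

# Estimate F_{X,Y}(x, y) = P(X <= x, Y <= y) from samples of correlated normals.
rng = np.random.default_rng(1)
n = 200_000
X = rng.standard_normal(n)
Y = 0.5 * X + np.sqrt(1 - 0.25) * rng.standard_normal(n)   # corr(X, Y) = 0.5

def joint_cdf(x, y):
    """Empirical estimate of P(X <= x, Y <= y)."""
    return np.mean((X <= x) & (Y <= y))

print(joint_cdf(0.0, 0.0))   # about 1/4 + arcsin(0.5)/(2*pi) ~ 0.333
```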

Joint density function or mass function


Discrete case


The joint probability mass function of two discrete random variables $X, Y$ is:

$$p_{X,Y}(x,y) = \mathrm{P}(X = x \text{ and } Y = y)$$   (Eq. 3)

or, written in terms of conditional distributions,

$$p_{X,Y}(x,y) = \mathrm{P}(Y = y \mid X = x) \cdot \mathrm{P}(X = x) = \mathrm{P}(X = x \mid Y = y) \cdot \mathrm{P}(Y = y)$$

where $\mathrm{P}(Y = y \mid X = x)$ is the probability of $Y = y$ given that $X = x$.

The generalization of the preceding two-variable case is the joint probability distribution of $n$ discrete random variables $X_1, X_2, \dots, X_n$, which is:

$$p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \mathrm{P}(X_1 = x_1 \text{ and } \dots \text{ and } X_n = x_n)$$   (Eq. 4)

or equivalently

$$\begin{aligned} p_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = {} & \mathrm{P}(X_1 = x_1) \\ & \cdot \mathrm{P}(X_2 = x_2 \mid X_1 = x_1) \\ & \cdot \mathrm{P}(X_3 = x_3 \mid X_1 = x_1, X_2 = x_2) \\ & \cdots \\ & \cdot \mathrm{P}(X_n = x_n \mid X_1 = x_1, X_2 = x_2, \dots, X_{n-1} = x_{n-1}). \end{aligned}$$

This identity is known as the chain rule of probability.

Since these are probabilities, in the two-variable case

$$\sum_i \sum_j \mathrm{P}(X = x_i \text{ and } Y = y_j) = 1,$$

which generalizes for $n$ discrete random variables $X_1, X_2, \dots, X_n$ to

$$\sum_i \sum_j \dots \sum_k \mathrm{P}(X_1 = x_{1i}, X_2 = x_{2j}, \dots, X_n = x_{nk}) = 1.$$
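
A short sketch (plain Python, reusing the die example above) checks the chain-rule factorization and the normalization of the joint probability mass function:

```python
from itertools import product

# Joint pmf of A (even) and B (prime) from the die example.
joint = {(0, 0): 1/6, (0, 1): 2/6, (1, 0): 2/6, (1, 1): 1/6}

# Chain rule: P(A, B) = P(A) * P(B | A).
p_A = {a: sum(p for (a_, _), p in joint.items() if a_ == a) for a in (0, 1)}
p_B_given_A = {(a, b): joint[(a, b)] / p_A[a] for a, b in joint}

for a, b in product((0, 1), repeat=2):
    assert abs(joint[(a, b)] - p_A[a] * p_B_given_A[(a, b)]) < 1e-12

print(sum(joint.values()))   # ~ 1.0: the joint pmf is normalized
```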

Continuous case


The joint probability density function $f_{X,Y}(x,y)$ for two continuous random variables is defined as the derivative of the joint cumulative distribution function (see Eq. 1):

$$f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x\,\partial y}$$   (Eq. 5)

This is equal to:

$$f_{X,Y}(x,y) = f_{Y \mid X}(y \mid x)\, f_X(x) = f_{X \mid Y}(x \mid y)\, f_Y(y)$$

where $f_{Y \mid X}(y \mid x)$ and $f_{X \mid Y}(x \mid y)$ are the conditional distributions of $Y$ given $X = x$ and of $X$ given $Y = y$ respectively, and $f_X(x)$ and $f_Y(y)$ are the marginal distributions for $X$ and $Y$ respectively.

The definition extends naturally to more than two random variables:

$$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \frac{\partial^n F_{X_1,\ldots,X_n}(x_1,\ldots,x_n)}{\partial x_1 \cdots \partial x_n}$$   (Eq. 6)

Again, since these are probability distributions, one has

$$\int_x \int_y f_{X,Y}(x,y)\;dy\;dx = 1,$$

respectively

$$\int_{x_1} \ldots \int_{x_n} f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)\;dx_n \ldots dx_1 = 1.$$
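
As a numerical illustration (assuming NumPy; the bivariate normal density with correlation 0.5 is an arbitrary example), the normalization of a joint density can be checked on a grid:

```python
import numpy as np

# Bivariate normal density with unit variances and correlation rho.
rho = 0.5
x = np.linspace(-6, 6, 601)
y = np.linspace(-6, 6, 601)
X, Y = np.meshgrid(x, y, indexing="ij")
norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
f_xy = norm * np.exp(-(X**2 - 2 * rho * X * Y + Y**2) / (2 * (1 - rho**2)))

# Double Riemann sum approximating the double integral of f_{X,Y}.
dx, dy = x[1] - x[0], y[1] - y[0]
print(f_xy.sum() * dx * dy)        # ~ 1.0: the joint density is normalized
```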

Mixed case


The "mixed joint density" may be defined where one or more random variables are continuous and the other random variables are discrete. With one variable of each typefX,Y(x,y)=fXY(xy)P(Y=y)=P(Y=yX=x)fX(x).{\displaystyle f_{X,Y}(x,y)=f_{X\mid Y}(x\mid y)\mathrm {P} (Y=y)=\mathrm {P} (Y=y\mid X=x)f_{X}(x).}One example of a situation in which one may wish to find the cumulative distribution of one random variable which is continuous and another random variable which is discrete arises when one wishes to use alogistic regression in predicting the probability of a binary outcome Y conditional on the value of a continuously distributed outcomeX{\displaystyle X}. Onemust use the "mixed" joint density when finding the cumulative distribution of this binary outcome because the input variables(X,Y){\displaystyle (X,Y)} were initially defined in such a way that one could not collectively assign it either a probability density function or a probability mass function. Formally,fX,Y(x,y){\displaystyle f_{X,Y}(x,y)} is the probability density function of(X,Y){\displaystyle (X,Y)} with respect to theproduct measure on the respectivesupports ofX{\displaystyle X} andY{\displaystyle Y}. Either of these two decompositions can then be used to recover the joint cumulative distribution function:FX,Y(x,y)=tyxfX,Y(s,t)ds.{\displaystyle F_{X,Y}(x,y)=\sum _{t\leq y}\int _{-\infty }^{x}f_{X,Y}(s,t)\;ds.}The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables.

Additional properties


Joint distribution for independent variables


In general two random variables $X$ and $Y$ are independent if and only if the joint cumulative distribution function satisfies

$$F_{X,Y}(x,y) = F_X(x) \cdot F_Y(y).$$

Two discrete random variables $X$ and $Y$ are independent if and only if the joint probability mass function satisfies

$$P(X = x \text{ and } Y = y) = P(X = x) \cdot P(Y = y)$$

for all $x$ and $y$.

As the number of independent random events grows, the related joint probability value decreases rapidly to zero, according to a negative exponential law.

Similarly, two absolutely continuous random variables are independent if and only if

$$f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)$$

for all $x$ and $y$. This means that acquiring any information about the value of one or more of the random variables leads to a conditional distribution of any other variable that is identical to its unconditional (marginal) distribution; thus no variable provides any information about any other variable.
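
A small helper sketch (assuming NumPy) tests whether a discrete joint probability mass function factorizes into the product of its marginals, using the urn and die examples above:

```python
import numpy as np

def is_independent(joint, tol=1e-12):
    """Return True if the joint pmf equals the product of its marginals."""
    joint = np.asarray(joint, dtype=float)
    p_x = joint.sum(axis=1, keepdims=True)     # marginal of the row variable
    p_y = joint.sum(axis=0, keepdims=True)     # marginal of the column variable
    return np.allclose(joint, p_x * p_y, atol=tol)

urn = [[4/9, 2/9], [2/9, 1/9]]                 # urn example: independent draws
die = [[1/6, 2/6], [2/6, 1/6]]                 # die example: A (even), B (prime)
print(is_independent(urn))                     # True
print(is_independent(die))                     # False
```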

Joint distribution for conditionally dependent variables


If a subset $A$ of the variables $X_1, \cdots, X_n$ is conditionally dependent given another subset $B$ of these variables, then the probability mass function of the joint distribution, $\mathrm{P}(X_1, \ldots, X_n)$, is equal to $P(B) \cdot P(A \mid B)$. Therefore, it can be efficiently represented by the lower-dimensional probability distributions $P(B)$ and $P(A \mid B)$. Such conditional independence relations can be represented with a Bayesian network or copula functions.
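
A minimal sketch (plain Python; all probability values are made up for illustration) represents a two-variable joint distribution through the factorization $P(B) \cdot P(A \mid B)$, here with $B = \{X_1\}$ and $A = \{X_2\}$:

```python
# Lower-dimensional factors of the joint distribution.
P_B = {0: 0.3, 1: 0.7}                                   # P(X1)
P_A_given_B = {0: {0: 0.9, 1: 0.1},                      # P(X2 | X1 = 0)
               1: {0: 0.2, 1: 0.8}}                      # P(X2 | X1 = 1)

def joint(x1, x2):
    # P(X1 = x1, X2 = x2) = P(X1 = x1) * P(X2 = x2 | X1 = x1)
    return P_B[x1] * P_A_given_B[x1][x2]

print(sum(joint(x1, x2) for x1 in (0, 1) for x2 in (0, 1)))   # 1.0
print(joint(1, 1))                                            # 0.56
```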

Covariance

Main article: Covariance

When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of the linear relationship between the random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship, which means it does not capture the dependence between the two variables.

The covariance between the random variables $X$ and $Y$ is[2]

$$\operatorname{cov}(X,Y) = \sigma_{XY} = \operatorname{E}\left[(X - \mu_x)(Y - \mu_y)\right] = \operatorname{E}(XY) - \mu_x \mu_y.$$
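
Applied to the die example above (a minimal sketch in plain Python), the covariance of A and B follows directly from the joint probability mass function:

```python
# Joint pmf of the die example as (a, b, probability) triples.
values = [(0, 0, 1/6), (0, 1, 2/6), (1, 0, 2/6), (1, 1, 1/6)]

E_A  = sum(a * p for a, b, p in values)        # E[A] = 1/2
E_B  = sum(b * p for a, b, p in values)        # E[B] = 1/2
E_AB = sum(a * b * p for a, b, p in values)    # E[AB] = 1/6

cov = E_AB - E_A * E_B
print(cov)                                     # 1/6 - 1/4 = -1/12 ~ -0.0833
```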

Correlation

Main article: Correlation

There is another measure of the relationship between two random variables that is often easier to interpret than the covariance.

The correlation just scales the covariance by the product of the standard deviation of each variable. Consequently, the correlation is a dimensionless quantity that can be used to compare the linear relationships between pairs of variables in different units. If the points in the joint probability distribution of X and Y that receive positive probability tend to fall along a line of positive (or negative) slope, $\rho_{XY}$ is near +1 (or −1). If $\rho_{XY}$ equals +1 or −1, it can be shown that the points in the joint probability distribution that receive positive probability fall exactly along a straight line. Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation is a measure of the linear relationship between random variables.

The correlation coefficient between the random variables $X$ and $Y$ is

$$\rho_{XY} = \frac{\operatorname{cov}(X,Y)}{\sqrt{V(X)\,V(Y)}} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}.$$
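
Continuing the same die-example sketch (plain Python, using math.sqrt), the correlation coefficient scales the covariance by the standard deviations:

```python
from math import sqrt

# Joint pmf of the die example as (a, b, probability) triples.
values = [(0, 0, 1/6), (0, 1, 2/6), (1, 0, 2/6), (1, 1, 1/6)]

E_A  = sum(a * p for a, b, p in values)
E_B  = sum(b * p for a, b, p in values)
E_AB = sum(a * b * p for a, b, p in values)
var_A = sum((a - E_A)**2 * p for a, b, p in values)    # V(A) = 1/4
var_B = sum((b - E_B)**2 * p for a, b, p in values)    # V(B) = 1/4

rho = (E_AB - E_A * E_B) / sqrt(var_A * var_B)
print(rho)                                             # -1/3 ~ -0.333
```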

Important named distributions


Named joint distributions that arise frequently in statistics include the multivariate normal distribution, the multivariate stable distribution, the multinomial distribution, the negative multinomial distribution, the multivariate hypergeometric distribution, and the elliptical distribution.


References

  1. ^ Feller, William (1968). An Introduction to Probability Theory and its Applications. Vol. 1 (3rd ed.). pp. 217–218. ISBN 978-0471257080.
  2. ^ a b Montgomery, Douglas C.; Runger, George C. (19 November 2013). Applied Statistics and Probability for Engineers (Sixth ed.). Hoboken, NJ: Wiley. ISBN 978-1-118-53971-2. OCLC 861273897.
  3. ^ Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
