Geometric distribution

From Wikipedia, the free encyclopedia
Not to be confused with the hypergeometric distribution.
Geometric

Probability mass function and cumulative distribution function (plots omitted). Where two expressions are given below, the first is for the distribution supported on $\{1, 2, 3, \dotsc\}$ (number of trials $X$) and the second for the distribution supported on $\{0, 1, 2, \dotsc\}$ (number of failures $Y$).

Parameters: $0 < p \leq 1$, success probability (real)
Support: $k$ trials, $k \in \mathbb{N} = \{1, 2, 3, \dotsc\}$; or $k$ failures, $k \in \mathbb{N}_0 = \{0, 1, 2, \dotsc\}$
PMF: $(1-p)^{k-1}p$; or $(1-p)^{k}p$
CDF: $1-(1-p)^{\lfloor x\rfloor}$ for $x \geq 1$, $0$ for $x < 1$; or $1-(1-p)^{\lfloor x\rfloor+1}$ for $x \geq 0$, $0$ for $x < 0$
Mean: $\frac{1}{p}$; or $\frac{1-p}{p}$
Median: $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil$; or $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1$ (in either case not unique if $-1/\log_2(1-p)$ is an integer)
Mode: $1$; or $0$
Variance: $\frac{1-p}{p^{2}}$ (both)
Skewness: $\frac{2-p}{\sqrt{1-p}}$ (both)
Excess kurtosis: $6+\frac{p^{2}}{1-p}$ (both)
Entropy: $\frac{-(1-p)\log(1-p)-p\log p}{p}$ (both)
MGF: $\frac{pe^{t}}{1-(1-p)e^{t}}$; or $\frac{p}{1-(1-p)e^{t}}$, each for $t<-\ln(1-p)$
CF: $\frac{pe^{it}}{1-(1-p)e^{it}}$; or $\frac{p}{1-(1-p)e^{it}}$
PGF: $\frac{pz}{1-(1-p)z}$; or $\frac{p}{1-(1-p)z}$
Fisher information: $\frac{1}{p^{2}(1-p)}$ (both)

In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

  • the probability distribution of the number $X$ of Bernoulli trials needed to get one success, supported on $\{1, 2, 3, \dotsc\}$;
  • the probability distribution of the number $Y = X - 1$ of failures before the first success, supported on $\{0, 1, 2, \dotsc\}$.

These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former (the distribution of $X$); however, to avoid ambiguity, it is wise to indicate which is intended by stating the support explicitly.

The geometric distribution gives the probability that the first occurrence of success requires $k$ independent trials, each with success probability $p$. If the probability of success on each trial is $p$, then the probability that the $k$-th trial is the first success is

$$\Pr(X=k)=(1-p)^{k-1}p$$

for $k=1,2,3,4,\dots$

The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:

$$\Pr(Y=k)=\Pr(X=k+1)=(1-p)^{k}p$$

for $k=0,1,2,3,\dots$

The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.[1]: 210

Definition


The geometric distribution is the discrete probability distribution that describes when the first success occurs in an infinite sequence of independent and identically distributed Bernoulli trials. Its probability mass function depends on its parameterization and support. When supported on $\mathbb{N}$, the probability mass function is $P(X=k)=(1-p)^{k-1}p$, where $k=1,2,3,\dotsc$ is the number of trials and $p$ is the probability of success in each trial.[2]: 260–261

The support may also be $\mathbb{N}_0$, defining $Y=X-1$. This changes the probability mass function to $P(Y=k)=(1-p)^{k}p$, where $k=0,1,2,\dotsc$ is the number of failures before the first success.[3]: 66

An alternative parameterization of the distribution gives the probability mass function $P(Y=k)=\left(\frac{P}{Q}\right)^{k}\left(1-\frac{P}{Q}\right)$ where $P=\frac{1-p}{p}$ and $Q=\frac{1}{p}$.[1]: 208–209

An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent, with a $1/6$ chance of success. The number of rolls needed follows a geometric distribution with $p=1/6$.
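The die example can be checked by simulation. The following sketch uses only the Python standard library; the helper name `rolls_until_one` is illustrative:

```python
import random

def rolls_until_one(rng: random.Random) -> int:
    """Roll a fair six-sided die until a 1 appears; return the number of rolls."""
    rolls = 1
    while rng.randint(1, 6) != 1:
        rolls += 1
    return rolls

rng = random.Random(0)
samples = [rolls_until_one(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# The empirical mean should be close to the theoretical mean 1/p = 6.
print(round(mean, 2))
```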

Properties


Memorylessness

Main article: Memorylessness

The geometric distribution is the only memoryless discrete probability distribution.[4] It is the discrete version of the same property found in the exponential distribution.[1]: 228 The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success.

Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables.[5] Expressed in terms of conditional probability, the two definitions are

$$\Pr(X>m+n\mid X>n)=\Pr(X>m),$$

and

$$\Pr(Y>m+n\mid Y\geq n)=\Pr(Y>m),$$

where $m$ and $n$ are natural numbers, $X$ is a geometrically distributed random variable defined over $\mathbb{N}$, and $Y$ is a geometrically distributed random variable defined over $\mathbb{N}_0$. Note that these definitions are not equivalent for discrete random variables; $Y$ does not satisfy the first equation and $X$ does not satisfy the second.
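The first identity can be verified numerically using the survival function $\Pr(X>k)=(1-p)^{k}$, which holds because $X>k$ exactly when the first $k$ trials all fail (a quick sketch; the values of $p$, $m$, $n$ are arbitrary):

```python
p = 0.3

def pr_X_gt(k: int) -> float:
    """P(X > k) for X geometric on {1, 2, ...}: the first k trials all fail."""
    return (1 - p) ** k

m, n = 4, 7
lhs = pr_X_gt(m + n) / pr_X_gt(n)  # P(X > m + n | X > n)
rhs = pr_X_gt(m)                   # P(X > m)
assert abs(lhs - rhs) < 1e-12
```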

Moments and cumulants


The expected value and variance of a geometrically distributed random variable $X$ defined over $\mathbb{N}$ are[2]: 261
$$\operatorname{E}(X)=\frac{1}{p}, \qquad \operatorname{var}(X)=\frac{1-p}{p^{2}}.$$
With a geometrically distributed random variable $Y$ defined over $\mathbb{N}_0$, the expected value changes to
$$\operatorname{E}(Y)=\frac{1-p}{p},$$
while the variance stays the same.[6]: 114–115

For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is $\frac{1}{1/6}=6$ and the average number of failures is $\frac{1-1/6}{1/6}=5$.

The moment generating function of the geometric distribution when defined over $\mathbb{N}$ and $\mathbb{N}_0$ respectively is[7][6]: 114
$$M_X(t)=\frac{pe^{t}}{1-(1-p)e^{t}}, \qquad M_Y(t)=\frac{p}{1-(1-p)e^{t}}, \qquad t<-\ln(1-p).$$
The moments for the number of failures before the first success are given by
$$\operatorname{E}(Y^{n})=\sum_{k=0}^{\infty}(1-p)^{k}p\,k^{n}=p\operatorname{Li}_{-n}(1-p) \quad (\text{for } n\neq 0),$$
where $\operatorname{Li}_{-n}(1-p)$ is the polylogarithm function.[8]

The cumulant generating function of the geometric distribution defined over $\mathbb{N}_0$ is[1]: 216
$$K(t)=\ln p-\ln\left(1-(1-p)e^{t}\right).$$
The cumulants $\kappa_r$ satisfy the recursion
$$\kappa_{r+1}=q\,\frac{d\kappa_{r}}{dq}, \qquad r=1,2,\dotsc,$$
where $q=1-p$.[1]: 216
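The recursion can be sanity-checked numerically at $r=1$: the first cumulant of $Y$ is the mean $q/(1-q)$ and the second is the variance $q/(1-q)^2$ (a sketch using a central finite difference in place of the symbolic derivative):

```python
# Check kappa_2 = q * d(kappa_1)/dq for the N0-supported geometric, q = 1 - p.
q = 0.4
h = 1e-6

def kappa_1(q: float) -> float:
    """First cumulant (mean) of Y: (1-p)/p = q/(1-q)."""
    return q / (1 - q)

k2_exact = q / (1 - q) ** 2  # second cumulant (variance) of Y
k2_from_recursion = q * (kappa_1(q + h) - kappa_1(q - h)) / (2 * h)
assert abs(k2_exact - k2_from_recursion) < 1e-6
```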

Proof of expected value


Consider the expected value $\operatorname{E}(X)$ of $X$ as above, i.e. the average number of trials until a success. The first trial either succeeds with probability $p$ or fails with probability $1-p$. If it fails, the remaining mean number of trials until a success is identical to the original mean; this follows from the fact that all trials are independent.

From this we get the formula:

$$\operatorname{E}(X)=p+(1-p)(1+\operatorname{E}(X)),$$

which, when solved for $\operatorname{E}(X)$, gives:

$$\operatorname{E}(X)=\frac{1}{p}.$$

The expected number of failures $Y$ can be found from the linearity of expectation, $\operatorname{E}(Y)=\operatorname{E}(X-1)=\operatorname{E}(X)-1=\frac{1}{p}-1=\frac{1-p}{p}$. It can also be shown in the following way:

$$\begin{aligned}\operatorname{E}(Y)&=p\sum_{k=0}^{\infty}(1-p)^{k}k\\&=p(1-p)\sum_{k=0}^{\infty}(1-p)^{k-1}k\\&=p(1-p)\left(-\sum_{k=0}^{\infty}\frac{d}{dp}\left[(1-p)^{k}\right]\right)\\&=p(1-p)\left[\frac{d}{dp}\left(-\sum_{k=0}^{\infty}(1-p)^{k}\right)\right]\\&=p(1-p)\frac{d}{dp}\left(-\frac{1}{p}\right)\\&=\frac{1-p}{p}.\end{aligned}$$

The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.

Summary statistics


The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, $\frac{1}{p}$ or $\frac{1-p}{p}$ when defined over $\mathbb{N}$ or $\mathbb{N}_0$ respectively.

The median of the geometric distribution is $\left\lceil -\frac{\log 2}{\log(1-p)}\right\rceil$ when defined over $\mathbb{N}$[9] and $\left\lfloor -\frac{\log 2}{\log(1-p)}\right\rfloor$ when defined over $\mathbb{N}_0$.[3]: 69
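The closed form for the $\mathbb{N}$-supported case can be checked against the direct definition of the median as the smallest $k$ with $\operatorname{CDF}(k) \geq 1/2$ (a sketch; $p = 0.2$ is an arbitrary choice):

```python
import math

p = 0.2
# Closed-form median on the support {1, 2, 3, ...}
median_formula = math.ceil(-math.log(2) / math.log(1 - p))
# Direct definition: smallest k with CDF(k) = 1 - (1-p)^k >= 1/2
k = 1
while 1 - (1 - p) ** k < 0.5:
    k += 1
assert median_formula == k  # both give 4 for p = 0.2
```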

The mode of the geometric distribution is the first value in the support set. This is 1 when defined over $\mathbb{N}$ and 0 when defined over $\mathbb{N}_0$.[3]: 69

The skewness of the geometric distribution is $\frac{2-p}{\sqrt{1-p}}$.[6]: 115

The kurtosis of the geometric distribution is $9+\frac{p^{2}}{1-p}$.[6]: 115 The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, $3$.[10]: 217 Therefore, the excess kurtosis of the geometric distribution is $6+\frac{p^{2}}{1-p}$. Since $\frac{p^{2}}{1-p}\geq 0$, the excess kurtosis is always positive, so the distribution is leptokurtic.[3]: 69 In other words, the tail of a geometric distribution decays more slowly than that of a Gaussian.[10]: 217

Entropy and Fisher's information


Entropy (geometric distribution, failures before success)


Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:

$$P(X=k)=(1-p)^{k}p,\quad k=0,1,2,\dots$$

The entropy $H(X)$ for this distribution is defined as:

$$\begin{aligned}H(X)&=-\sum_{k=0}^{\infty}P(X=k)\log P(X=k)\\&=-\sum_{k=0}^{\infty}(1-p)^{k}p\log\left((1-p)^{k}p\right)\\&=-\sum_{k=0}^{\infty}(1-p)^{k}p\left[k\log(1-p)+\log p\right]\\&=-\log p-{\frac{1-p}{p}}\log(1-p)\end{aligned}$$

The entropy increases as the probabilityp{\displaystyle p} decreases, reflecting greater uncertainty as success becomes rarer.
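Both the closed form and the monotonicity claim can be verified directly from the defining sum (a sketch using natural logarithms; the truncation point 2000 makes the neglected tail negligible):

```python
import math

def entropy_closed(p: float) -> float:
    """Closed-form entropy (natural log) of the failures-before-success geometric."""
    return -math.log(p) - (1 - p) / p * math.log(1 - p)

p = 0.25
# Direct definition -sum P(k) ln P(k), truncated where the tail is negligible
h_sum = -sum((1 - p) ** k * p * math.log((1 - p) ** k * p) for k in range(2000))
assert abs(entropy_closed(p) - h_sum) < 1e-9
# Entropy grows as success becomes rarer:
assert entropy_closed(0.05) > entropy_closed(0.5)
```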

Fisher's information (geometric distribution, failures before success)


Fisher information measures the amount of information that an observable random variable $X$ carries about an unknown parameter $p$. For the geometric distribution (failures before the first success), the Fisher information with respect to $p$ is given by:

$$I(p)=\frac{1}{p^{2}(1-p)}$$

Proof:
  • The likelihood function for $X$ failures is $L(p;X)=(1-p)^{X}p$, so the log-likelihood is $\ln L(p;X)=X\ln(1-p)+\ln p$.
  • Its second derivative is $\frac{\partial^{2}}{\partial p^{2}}\ln L(p;X)=-\frac{1}{p^{2}}-\frac{X}{(1-p)^{2}}$.
  • Taking the negative expectation with $\operatorname{E}(X)=\frac{1-p}{p}$ gives $I(p)=\frac{1}{p^{2}}+\frac{1-p}{p(1-p)^{2}}=\frac{1}{p^{2}(1-p)}$.

Fisher information increases as $p$ decreases, indicating that rarer successes provide more information about the parameter $p$.

Entropy (geometric distribution, trials until success)


For the geometric distribution modeling the number of trials until the first success, the probability mass function is:

$$P(X=k)=(1-p)^{k-1}p,\quad k=1,2,3,\dots$$

The entropy $H(X)$ for this distribution is the same as that of the version modeling the number of failures before the first success,

$$H(X)=-\log p-\frac{1-p}{p}\log(1-p)$$

Fisher's information (geometric distribution, trials until success)


Fisher information for the geometric distribution modeling the number of trials until the first success is given by:

$$I(p)=\frac{1}{p^{2}(1-p)}$$

Proof:
  • The likelihood function is:
$$L(p;X)=(1-p)^{X-1}p$$
  • The log-likelihood function is:
$$\ln L(p;X)=(X-1)\ln(1-p)+\ln p$$
  • The score function (first derivative of the log-likelihood with respect to $p$) is:
$$\frac{\partial}{\partial p}\ln L(p;X)=\frac{1}{p}-\frac{X-1}{1-p}$$
  • The second derivative of the log-likelihood function is:
$$\frac{\partial^{2}}{\partial p^{2}}\ln L(p;X)=-\frac{1}{p^{2}}-\frac{X-1}{(1-p)^{2}}$$
  • Fisher information is calculated as the negative expected value of the second derivative, using $\operatorname{E}(X-1)=\frac{1-p}{p}$:

$$\begin{aligned}I(p)&=-\operatorname{E}\left[\frac{\partial^{2}}{\partial p^{2}}\ln L(p;X)\right]\\&=\frac{1}{p^{2}}+\frac{1-p}{p(1-p)^{2}}\\&=\frac{1}{p^{2}(1-p)}\end{aligned}$$
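The final algebraic step can be confirmed numerically (a sketch; $p = 0.3$ is arbitrary):

```python
# Negative expected second derivative, with E(X - 1) = (1 - p)/p,
# should match the closed form 1 / (p^2 (1 - p)).
p = 0.3
info = 1 / p**2 + (1 - p) / (p * (1 - p) ** 2)
assert abs(info - 1 / (p**2 * (1 - p))) < 1e-9
```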

General properties


Related distributions


Statistical inference


The true parameter $p$ of an unknown geometric distribution can be inferred through estimators and conjugate distributions.

Method of moments


Provided they exist, the first $l$ moments of a probability distribution can be estimated from a sample $x_1,\dotsc,x_n$ using the formula
$$m_i=\frac{1}{n}\sum_{j=1}^{n}x_j^{i},$$
where $m_i$ is the $i$th sample moment and $1\leq i\leq l$.[16]: 349–350 Estimating $\operatorname{E}(X)$ with $m_1$ gives the sample mean, denoted $\bar{x}$. Substituting this estimate into the formula for the expected value of a geometric distribution and solving for $p$ gives the estimators $\hat{p}=\frac{1}{\bar{x}}$ and $\hat{p}=\frac{1}{\bar{x}+1}$ when supported on $\mathbb{N}$ and $\mathbb{N}_0$ respectively. These estimators are biased since $\operatorname{E}\left(\frac{1}{\bar{x}}\right)>\frac{1}{\operatorname{E}(\bar{x})}=p$, a consequence of Jensen's inequality.[17]: 53–54
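The estimator $\hat{p}=1/\bar{x}$ can be tried on simulated data (a sketch using only the standard library; the helper name `draw_trials` is illustrative):

```python
import random

def draw_trials(rng: random.Random, p: float) -> int:
    """Number of Bernoulli(p) trials up to and including the first success."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(42)
p_true = 0.25
xs = [draw_trials(rng, p_true) for _ in range(50_000)]
x_bar = sum(xs) / len(xs)
p_hat = 1 / x_bar  # method-of-moments estimator on the support {1, 2, ...}
assert abs(p_hat - p_true) < 0.01
```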

Maximum likelihood estimation


The maximum likelihood estimator of $p$ is the value that maximizes the likelihood function given a sample.[16]: 308 By finding the zero of the derivative of the log-likelihood function when the distribution is defined over $\mathbb{N}$, the maximum likelihood estimator can be found to be $\hat{p}=\frac{1}{\bar{x}}$, where $\bar{x}$ is the sample mean.[18] If the domain is $\mathbb{N}_0$, then the estimator shifts to $\hat{p}=\frac{1}{\bar{x}+1}$. As previously discussed in § Method of moments, these estimators are biased.

Regardless of the domain, the bias is equal to

$$b\equiv\operatorname{E}\left[\hat{p}_{\mathrm{mle}}-p\right]=\frac{p\,(1-p)}{n},$$

which yields the bias-corrected maximum likelihood estimator,[citation needed]

$$\hat{p}_{\text{mle}}^{*}=\hat{p}_{\text{mle}}-\hat{b}.$$

Bayesian inference


In Bayesian inference, the parameter $p$ is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples.[17]: 167 If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution, and it is called the conjugate distribution. In particular, if a $\mathrm{Beta}(\alpha,\beta)$ prior is selected, then the posterior, after observing samples $k_1,\dotsc,k_n\in\mathbb{N}$, is[19]
$$p\sim\mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^{n}(k_i-1)\right).$$
Alternatively, if the samples are in $\mathbb{N}_0$, the posterior distribution is[20]
$$p\sim\mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^{n}k_i\right).$$
Since the expected value of a $\mathrm{Beta}(\alpha,\beta)$ distribution is $\frac{\alpha}{\alpha+\beta}$,[11]: 145 as $\alpha$ and $\beta$ approach zero, the posterior mean approaches its maximum likelihood estimate.
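The conjugate update for $\mathbb{N}$-supported samples reduces to simple counting (a sketch; the uniform Beta(1, 1) prior and the data below are hypothetical choices, not from the source):

```python
# Conjugate Beta update for samples k_i on the support {1, 2, ...}.
alpha0, beta0 = 1, 1   # Beta(1, 1) = uniform prior (illustrative choice)
ks = [3, 1, 6, 2, 4]   # hypothetical observed trial counts

alpha_post = alpha0 + len(ks)
beta_post = beta0 + sum(k - 1 for k in ks)
posterior_mean = alpha_post / (alpha_post + beta_post)
print(alpha_post, beta_post, round(posterior_mean, 3))  # → 6 12 0.333
```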

Random variate generation

Further information: Non-uniform random variate generation

The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to $p$. However, the number of random variables needed is also geometrically distributed, and the algorithm slows as $p$ decreases.[21]: 498

Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable $E$ can become geometrically distributed with parameter $p$ through $\lceil -E/\log(1-p)\rceil$. In turn, $E$ can be generated from a standard uniform random variable $U$, altering the formula to $\lceil \log(U)/\log(1-p)\rceil$.[21]: 499–500 [22]
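The truncation formula can be sketched as follows using only the standard library (the helper name `geometric_variate` is illustrative):

```python
import math
import random

def geometric_variate(p: float, rng: random.Random) -> int:
    """Sample on {1, 2, ...} in constant time by truncating an exponential:
    ceil(log(U) / log(1 - p)) for U uniform on (0, 1]."""
    u = 1.0 - rng.random()  # uniform on (0, 1]; avoids log(0)
    return max(1, math.ceil(math.log(u) / math.log(1 - p)))

rng = random.Random(7)
p = 0.2
samples = [geometric_variate(p, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
assert abs(mean - 1 / p) < 0.1  # theoretical mean is 1/p = 5
```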

Applications


The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution.[23] In stochastic processes, the Yule–Furry process is geometrically distributed.[24] The distribution also arises when modeling the lifetime of a device in discrete contexts.[25] It has also been used to fit data, including modeling patients spreading COVID-19.[26]


References

  1. Johnson, Norman L.; Kemp, Adrienne W.; Kotz, Samuel (2005-08-19). Univariate Discrete Distributions. Wiley Series in Probability and Statistics (1st ed.). Wiley. doi:10.1002/0471715816. ISBN 978-0-471-27246-5.
  2. Nagel, Werner; Steyer, Rolf (2017-04-04). Probability and Conditional Expectation: Fundamentals for the Empirical Sciences. Wiley Series in Probability and Statistics (1st ed.). Wiley. doi:10.1002/9781119243496. ISBN 978-1-119-24352-6.
  3. Chattamvelli, Rajan; Shanmugam, Ramalingam (2020). Discrete Distributions in Engineering and the Applied Sciences. Synthesis Lectures on Mathematics & Statistics. Cham: Springer International Publishing. doi:10.1007/978-3-031-02425-2. ISBN 978-3-031-01297-6.
  4. Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics. London: Springer London. p. 50. doi:10.1007/1-84628-168-7. ISBN 978-1-85233-896-1.
  5. Weisstein, Eric W. "Memoryless". mathworld.wolfram.com. Retrieved 2024-07-25.
  6. Forbes, Catherine; Evans, Merran; Hastings, Nicholas; Peacock, Brian (2010-11-29). Statistical Distributions (1st ed.). Wiley. doi:10.1002/9780470627242. ISBN 978-0-470-39063-4.
  7. Bertsekas, Dimitri P.; Tsitsiklis, John N. (2008). Introduction to Probability. Optimization and Computation Series (2nd ed.). Belmont: Athena Scientific. p. 235. ISBN 978-1-886529-23-6.
  8. Weisstein, Eric W. "Geometric Distribution". MathWorld. Retrieved 2024-07-13.
  9. Aggarwal, Charu C. (2024). Probability and Statistics for Machine Learning: A Textbook. Cham: Springer Nature Switzerland. p. 138. doi:10.1007/978-3-031-53282-5. ISBN 978-3-031-53281-8.
  10. Chan, Stanley (2021). Introduction to Probability for Data Science (1st ed.). Michigan Publishing. ISBN 978-1-60785-747-1.
  11. Lovric, Miodrag, ed. (2011). International Encyclopedia of Statistical Science (1st ed.). Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-04898-2. ISBN 978-3-642-04897-5.
  12. Gallager, R.; van Voorhis, D. (March 1975). "Optimal source codes for geometrically distributed integer alphabets (Corresp.)". IEEE Transactions on Information Theory. 21 (2): 228–230. doi:10.1109/TIT.1975.1055357. ISSN 0018-9448.
  13. Lisman, J. H. C.; van Zuylen, M. C. A. (March 1972). "Note on the generation of most probable frequency distributions". Statistica Neerlandica. 26 (1): 19–23. doi:10.1111/j.1467-9574.1972.tb00152.x. ISSN 0039-0402.
  14. Pitman, Jim (1993). Probability. New York, NY: Springer New York. p. 372. doi:10.1007/978-1-4612-4374-8. ISBN 978-0-387-94594-1.
  15. Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David (1 June 1995). "On the minimum of independent geometrically distributed random variables". Statistics & Probability Letters. 23 (4): 313–326. doi:10.1016/0167-7152(94)00130-Z. hdl:2060/19940028569. S2CID 1505801.
  16. Evans, Michael; Rosenthal, Jeffrey (2023). Probability and Statistics: The Science of Uncertainty (2nd ed.). Macmillan Learning. ISBN 978-1429224628.
  17. Held, Leonhard; Sabanés Bové, Daniel (2020). Likelihood and Bayesian Inference: With Applications in Biology and Medicine. Statistics for Biology and Health. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-60792-3. ISBN 978-3-662-60791-6.
  18. Siegrist, Kyle (2020-05-05). "7.3: Maximum Likelihood". Statistics LibreTexts. Retrieved 2024-06-20.
  19. Fink, Daniel. "A Compendium of Conjugate Priors". CiteSeerX 10.1.1.157.5540.
  20. "3. Conjugate families of distributions" (PDF). Archived (PDF) from the original on 2010-04-08.
  21. Devroye, Luc (1986). Non-Uniform Random Variate Generation. New York, NY: Springer New York. doi:10.1007/978-1-4613-8643-8. ISBN 978-1-4613-8645-2.
  22. Knuth, Donald Ervin (1997). The Art of Computer Programming. Vol. 2 (3rd ed.). Reading, Mass: Addison-Wesley. p. 136. ISBN 978-0-201-89683-1.
  23. Daskin, Mark S. (2021). Bite-Sized Operations Management. Synthesis Lectures on Operations Research and Applications. Cham: Springer International Publishing. p. 127. doi:10.1007/978-3-031-02493-1. ISBN 978-3-031-01365-2.
  24. Madhira, Sivaprasad; Deshmukh, Shailaja (2023). Introduction to Stochastic Processes Using R. Singapore: Springer Nature Singapore. p. 449. doi:10.1007/978-981-99-5601-2. ISBN 978-981-99-5600-5.
  25. Gupta, Rakesh; Gupta, Shubham; Ali, Irfan (2023). Garg, Harish (ed.). "Some Discrete Parametric Markov–Chain System Models to Analyze Reliability". Advances in Reliability, Failure and Risk Analysis. Singapore: Springer Nature Singapore. pp. 305–306. doi:10.1007/978-981-19-9909-3_14. ISBN 978-981-19-9908-6. Retrieved 2024-07-13.
  26. Polymenis, Athanase (2021-10-01). "An application of the geometric distribution for assessing the risk of infection with SARS-CoV-2 by location". Asian Journal of Medical Sciences. 12 (10): 8–11. doi:10.3126/ajms.v12i10.38783. ISSN 2091-0576.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Geometric_distribution&oldid=1322957239"