Markov's inequality

Concept in probability theory
Markov's inequality gives an upper bound for the measure of the set (indicated in red) where $f(x)$ exceeds a given level $\varepsilon$. The bound combines the level $\varepsilon$ with the average value of $f$.

In probability theory, Markov's inequality gives an upper bound on the probability that a non-negative random variable is greater than or equal to some positive constant. Markov's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality.[1]

It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.

Markov's inequality (and other similar inequalities) relate probabilities to expectations, and provide (frequently loose but still useful) bounds for the cumulative distribution function of a random variable. Markov's inequality can also be used to upper bound the expectation of a non-negative random variable in terms of its distribution function.

Statement


If X is a nonnegative random variable and a > 0, then the probability that X is at least a is at most the expectation of X divided by a:[1]

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$

When $\operatorname{E}(X) > 0$, we can take $a = \tilde{a} \cdot \operatorname{E}(X)$ for $\tilde{a} > 0$ to rewrite the previous inequality as

$$\operatorname{P}(X \geq \tilde{a} \cdot \operatorname{E}(X)) \leq \frac{1}{\tilde{a}}.$$
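As a quick illustration, the bound can be checked by simulation. This is a minimal numerical sketch, not part of the article; the exponential distribution, sample size, and use of NumPy are assumptions made here.

```python
import numpy as np

# A minimal numerical check of Markov's inequality (illustration only; the
# exponential distribution and sample size are assumptions, not from the article).
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # nonnegative samples with E[X] = 2

for a in [1.0, 2.0, 5.0, 10.0]:
    tail = np.mean(x >= a)       # empirical P(X >= a)
    bound = x.mean() / a         # Markov bound E(X)/a, using the sample mean
    print(f"a={a:4.1f}  P(X>=a)~{tail:.4f}  bound={bound:.4f}")
```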

In the language of measure theory, Markov's inequality states that if $(X, \Sigma, \mu)$ is a measure space, $f$ is a measurable extended real-valued function, and $\varepsilon > 0$, then

$$\mu(\{x \in X : |f(x)| \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X |f| \, d\mu.$$

This measure-theoretic definition is sometimes referred to as Chebyshev's inequality.[2]
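For instance, when $\mu$ is a probability measure and $f$ is a nonnegative random variable $X$ defined on that space, the measure-theoretic form reduces to the probabilistic statement above:

$$\operatorname{P}(X \geq \varepsilon) \leq \frac{1}{\varepsilon}\operatorname{E}(X).$$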


Extended version for nondecreasing functions


If $\varphi$ is a nondecreasing nonnegative function, X is a (not necessarily nonnegative) random variable, and $\varphi(a) > 0$, then[3]

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(\varphi(X))}{\varphi(a)}.$$

An immediate corollary, obtained by applying this to $|X|$ with $\varphi(x) = x^{n}$ for $a > 0$ (i.e., using higher moments of X), is

$$\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(|X|^n)}{a^n}.$$
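Another standard instance (a sketch of a well-known special case rather than part of the statement above) takes $\varphi(x) = e^{tx}$ with $t > 0$, which is nondecreasing and positive, and yields the exponential (Chernoff-type) bound

$$\operatorname{P}(X \geq a) = \operatorname{P}\!\left(e^{tX} \geq e^{ta}\right) \leq \frac{\operatorname{E}\!\left(e^{tX}\right)}{e^{ta}} \quad \text{for every } t > 0.$$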


The uniformly randomized Markov's inequality


If X is a nonnegative random variable and a > 0, and U is a uniformly distributed random variable on $[0,1]$ that is independent of X, then[4]

$$\operatorname{P}(X \geq Ua) \leq \frac{\operatorname{E}(X)}{a}.$$

Since U is almost surely smaller than one, this bound is strictly stronger than Markov's inequality. Remarkably, U cannot be replaced by any constant smaller than one, meaning that deterministic improvements to Markov's inequality cannot exist in general. While Markov's inequality holds with equality for distributions supported on $\{0, a\}$, the above randomized variant holds with equality for any distribution supported on $[0, a]$.
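The contrast between the plain and randomized bounds can be seen in a small simulation. This is an illustrative sketch, not from the article; the uniform distribution for X and the sample size are assumptions made here.

```python
import numpy as np

# Illustration (not from the article) of the uniformly randomized bound.
# For X supported on [0, a], P(X >= U*a) attains the bound E(X)/a exactly,
# while the plain event {X >= a} has probability 0 here.
rng = np.random.default_rng(1)
a = 2.0
x = rng.uniform(0.0, a, size=1_000_000)   # assumed X ~ Uniform(0, a), so E[X] = a/2
u = rng.uniform(0.0, 1.0, size=x.size)    # U ~ Uniform(0, 1), independent of X

print("P(X >= a)    ~", np.mean(x >= a))       # essentially 0: plain Markov is loose here
print("P(X >= U*a)  ~", np.mean(x >= u * a))   # ~0.5
print("E(X)/a       =", x.mean() / a)          # ~0.5: the randomized event attains the bound
```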


Proofs


We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.

Intuition


$$\operatorname{E}(X) = \operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) + \operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a),$$

where $\operatorname{E}(X \mid X < a)$ is greater than or equal to 0 because the random variable $X$ is non-negative, and $\operatorname{E}(X \mid X \geq a)$ is greater than or equal to $a$ because the conditional expectation only takes into account the values greater than or equal to $a$ that $X$ can take.

Property 1: $\operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) \geq 0$

Given a non-negative random variable $X$, the conditional expectation $\operatorname{E}(X \mid X < a) \geq 0$ because $X \geq 0$. Also, probabilities are always non-negative, i.e., $\operatorname{P}(X < a) \geq 0$. Thus, the product:

$$\operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) \geq 0.$$

This is intuitive since conditioning on $X < a$ still results in non-negative values, ensuring the product remains non-negative.

Property 2: $\operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a)$

For $X \geq a$, the expected value given $X \geq a$ is at least $a$: $\operatorname{E}(X \mid X \geq a) \geq a$. Multiplying both sides by $\operatorname{P}(X \geq a)$, we get:

$$\operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a).$$

This is intuitive since all values considered are at least $a$, making their average also greater than or equal to $a$.

Hence intuitively, $\operatorname{E}(X) \geq \operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a)$, which directly leads to $\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}$.
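The decomposition above can be checked numerically on a small discrete distribution. This is an illustrative sketch, not from the article; the values and probabilities below are arbitrary assumptions.

```python
# Illustration (not from the article): checking the decomposition and the resulting
# bound on a small discrete distribution (the values and probabilities are arbitrary).
values = [0, 1, 3, 6]
probs = [0.4, 0.3, 0.2, 0.1]
a = 3

ex = sum(v * p for v, p in zip(values, probs))                     # E(X)
p_lo = sum(p for v, p in zip(values, probs) if v < a)              # P(X < a)
p_hi = 1.0 - p_lo                                                  # P(X >= a)
e_lo = sum(v * p for v, p in zip(values, probs) if v < a) / p_lo   # E(X | X < a)
e_hi = sum(v * p for v, p in zip(values, probs) if v >= a) / p_hi  # E(X | X >= a)

assert abs(ex - (p_lo * e_lo + p_hi * e_hi)) < 1e-12   # law of total expectation
assert p_hi <= ex / a                                   # Markov's inequality
print(ex, p_hi, ex / a)                                 # 1.5  0.3  0.5
```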

Probability-theoretic proof


Method 1: Assume X has a probability density function f. From the definition of expectation:

$$\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx$$

However, X is a non-negative random variable, thus

$$\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx = \int_{0}^{\infty} x f(x) \, dx$$

From this we can derive,

$$\operatorname{E}(X) = \int_{0}^{a} x f(x) \, dx + \int_{a}^{\infty} x f(x) \, dx \geq \int_{a}^{\infty} x f(x) \, dx \geq \int_{a}^{\infty} a f(x) \, dx = a \int_{a}^{\infty} f(x) \, dx = a \operatorname{Pr}(X \geq a)$$

From here, dividing through by $a$ allows us to see that

$$\operatorname{Pr}(X \geq a) \leq \operatorname{E}(X)/a.$$

Method 2: For any event $E$, let $I_E$ be the indicator random variable of $E$, that is, $I_E = 1$ if $E$ occurs and $I_E = 0$ otherwise.

Using this notation, we have $I_{(X \geq a)} = 1$ if the event $X \geq a$ occurs, and $I_{(X \geq a)} = 0$ if $X < a$. Then, given $a > 0$,

$$a I_{(X \geq a)} \leq X,$$

which is clear if we consider the two possible values of $I_{(X \geq a)}$. If $X < a$, then $I_{(X \geq a)} = 0$, and so $a I_{(X \geq a)} = 0 \leq X$. Otherwise, we have $X \geq a$, for which $I_{(X \geq a)} = 1$ and so $a I_{(X \geq a)} = a \leq X$.

Since $\operatorname{E}$ is a monotonically increasing function, taking the expectation of both sides of an inequality cannot reverse it. Therefore,

$$\operatorname{E}(a I_{(X \geq a)}) \leq \operatorname{E}(X).$$

Now, using linearity of expectations, the left side of this inequality is the same as

$$a \operatorname{E}(I_{(X \geq a)}) = a\left(1 \cdot \operatorname{P}(X \geq a) + 0 \cdot \operatorname{P}(X < a)\right) = a \operatorname{P}(X \geq a).$$

Thus we have

$$a \operatorname{P}(X \geq a) \leq \operatorname{E}(X)$$

and since a > 0, we can divide both sides by $a$.
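The pointwise inequality $a I_{(X \geq a)} \leq X$ at the heart of Method 2 can be verified samplewise in a short sketch. This is an illustration only; the gamma distribution and sample size are assumptions made here.

```python
import numpy as np

# Illustration (not from the article): the pointwise inequality a*I(X>=a) <= X from
# Method 2, verified on random nonnegative samples (the gamma law is an assumption).
rng = np.random.default_rng(2)
a = 1.5
x = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # nonnegative samples

indicator = (x >= a).astype(float)
assert np.all(a * indicator <= x)                    # a*I(X>=a) <= X holds samplewise
print(a * indicator.mean(), "<=", x.mean())          # hence a*P(X>=a) <= E(X)
```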

Measure-theoretic proof


We may assume that the function $f$ is non-negative, since only its absolute value enters in the equation. Now, consider the real-valued function $s$ on $X$ given by

$$s(x) = \begin{cases} \varepsilon, & \text{if } f(x) \geq \varepsilon \\ 0, & \text{if } f(x) < \varepsilon \end{cases}$$

Then $0 \leq s(x) \leq f(x)$. By the definition of the Lebesgue integral,

$$\int_X f(x) \, d\mu \geq \int_X s(x) \, d\mu = \varepsilon \, \mu(\{x \in X : f(x) \geq \varepsilon\}),$$

and since $\varepsilon > 0$, both sides can be divided by $\varepsilon$, obtaining

$$\mu(\{x \in X : f(x) \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X f \, d\mu.$$

Discrete case


We now provide a proof for the special case when $X$ is a discrete random variable which only takes on non-negative integer values.

Let $a$ be a positive integer. By definition,

$$\begin{aligned} a \operatorname{Pr}(X > a) &= a \operatorname{Pr}(X = a+1) + a \operatorname{Pr}(X = a+2) + a \operatorname{Pr}(X = a+3) + \cdots \\ &\leq a \operatorname{Pr}(X = a) + (a+1) \operatorname{Pr}(X = a+1) + (a+2) \operatorname{Pr}(X = a+2) + \cdots \\ &\leq \operatorname{Pr}(X = 1) + 2 \operatorname{Pr}(X = 2) + 3 \operatorname{Pr}(X = 3) + \cdots + a \operatorname{Pr}(X = a) + (a+1) \operatorname{Pr}(X = a+1) + (a+2) \operatorname{Pr}(X = a+2) + \cdots \\ &= \operatorname{E}(X). \end{aligned}$$

Dividing bya{\displaystyle a} yields the desired result.
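The discrete bound $a \operatorname{Pr}(X > a) \leq \operatorname{E}(X)$ can be checked on a concrete nonnegative integer-valued distribution. This is a sketch assuming a Poisson law and a finite truncation point, neither of which comes from the article.

```python
from math import exp

# Illustration (not from the article): the bound a*Pr(X > a) <= E(X) for a discrete,
# nonnegative integer-valued variable; the Poisson law and truncation are assumptions.
lam, cutoff = 4.0, 100
pmf = [exp(-lam)]                         # Pr(X = 0)
for k in range(1, cutoff):
    pmf.append(pmf[-1] * lam / k)         # Pr(X = k) = Pr(X = k-1) * lam / k
mean = sum(k * p for k, p in enumerate(pmf))   # ~ lam = 4

for a in [5, 10, 20]:
    tail = sum(p for k, p in enumerate(pmf) if k > a)   # Pr(X > a)
    print(f"a={a:2d}  a*Pr(X>a)={a * tail:.4f}  E(X)={mean:.4f}")
```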

Corollaries


Chebyshev's inequality


Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically,

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^2},$$

for any a > 0.[3] Here Var(X) is the variance of X, defined as:

$$\operatorname{Var}(X) = \operatorname{E}[(X - \operatorname{E}(X))^2].$$

Chebyshev's inequality follows from Markov's inequality by considering the random variable

$$(X - \operatorname{E}(X))^2$$

and the constant $a^2$, for which Markov's inequality reads

$$\operatorname{P}((X - \operatorname{E}(X))^2 \geq a^2) \leq \frac{\operatorname{Var}(X)}{a^2}.$$

This argument can be summarized (where "MI" indicates use of Markov's inequality):

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) = \operatorname{P}\left((X - \operatorname{E}(X))^2 \geq a^2\right) \,\overset{\mathrm{MI}}{\leq}\, \frac{\operatorname{E}\left((X - \operatorname{E}(X))^2\right)}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.$$
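A brief simulation sketch shows Chebyshev's bound alongside empirical deviation probabilities. This is an illustration only; the exponential distribution and sample size are assumptions made here.

```python
import numpy as np

# Illustration (not from the article): Chebyshev's bound, obtained by applying Markov's
# inequality to (X - E(X))^2, checked by simulation (the exponential law is an assumption).
rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=1_000_000)   # E[X] = 1, Var[X] = 1

mu, var = x.mean(), x.var()
for a in [1.0, 2.0, 3.0]:
    dev = np.mean(np.abs(x - mu) >= a)           # empirical P(|X - E(X)| >= a)
    print(f"a={a:3.1f}  P(|X-E(X)|>=a)~{dev:.4f}  Chebyshev bound={var / a**2:.4f}")
```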

Other corollaries

  1. The "monotonic" result can be demonstrated by:
     $$\operatorname{P}(|X| \geq a) = \operatorname{P}\big(\varphi(|X|) \geq \varphi(a)\big) \,\overset{\mathrm{MI}}{\leq}\, \frac{\operatorname{E}(\varphi(|X|))}{\varphi(a)}.$$
  2. The result that, for a nonnegative random variable X, the quantile function of X satisfies:
     $$Q_X(1-p) \leq \frac{\operatorname{E}(X)}{p},$$
     the proof using
     $$p \leq \operatorname{P}(X \geq Q_X(1-p)) \,\overset{\mathrm{MI}}{\leq}\, \frac{\operatorname{E}(X)}{Q_X(1-p)}.$$
     (A numerical sketch of this quantile bound follows after this list.)
  3. Let $M \succeq 0$ be a self-adjoint matrix-valued random variable and $A \succ 0$. Then
     $$\operatorname{P}(M \npreceq A) \leq \operatorname{tr}\!\left(\operatorname{E}(M) A^{-1}\right),$$
     which can be proved similarly.[5]
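As referenced in item 2 above, the quantile bound can be checked empirically. This is an illustrative sketch; the exponential distribution and sample size are assumptions made here.

```python
import numpy as np

# Illustration (not from the article): the quantile bound Q_X(1-p) <= E(X)/p from
# corollary 2, checked empirically (the exponential law is an assumption).
rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=1_000_000)   # nonnegative samples with E[X] = 1

for p in [0.5, 0.1, 0.01]:
    q = np.quantile(x, 1 - p)                    # empirical (1-p)-quantile of X
    print(f"p={p:5.2f}  Q_X(1-p)~{q:.3f}  bound E(X)/p={x.mean() / p:.3f}")
```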

Examples


Assuming no income is negative, Markov's inequality shows that no more than 10% (1/10) of the population can have more than 10 times the average income.[6]

Another simple example is as follows: Andrew makes 4 mistakes on average on his Statistics course tests. The best upper bound on the probability that Andrew will make at least 10 mistakes is 0.4, since $\operatorname{P}(X \geq 10) \leq \frac{\operatorname{E}(X)}{a} = \frac{4}{10}$. Note that Andrew might make exactly 10 mistakes with probability 0.4 and make no mistakes with probability 0.6; the expectation is then exactly 4 mistakes.
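A tiny check of the equality case in this example (illustrative only):

```python
# Illustration (not from the article): the distribution attaining the bound in this example.
# X = 10 with probability 0.4 and X = 0 with probability 0.6 has E(X) = 4, and
# P(X >= 10) = 0.4 equals the Markov bound E(X)/10.
values, probs = [0, 10], [0.6, 0.4]
mean = sum(v * p for v, p in zip(values, probs))
p_tail = sum(p for v, p in zip(values, probs) if v >= 10)
print(mean, p_tail, mean / 10)   # 4.0  0.4  0.4
```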


References

  1. Huber, Mark (2019-11-26). "Halving the Bounds for the Markov, Chebyshev, and Chernoff Inequalities Using Smoothing". The American Mathematical Monthly. 126 (10): 915–927. arXiv:1803.06361. doi:10.1080/00029890.2019.1656484. ISSN 0002-9890.
  2. Stein, E. M.; Shakarchi, R. (2005). Real Analysis. Princeton Lectures in Analysis, vol. 3 (1st ed.). p. 91.
  3. Lin, Zhengyan (2010). Probability Inequalities. Springer. p. 52.
  4. Ramdas, Aaditya; Manole, Tudor (2023). Randomized and Exchangeable Improvements of Markov's, Chebyshev's and Chernoff's Inequalities. arXiv:2304.02611.
  5. Tu, Stephen (2017-11-04). "Markov's Inequality for Matrices". Retrieved May 27, 2024.
  6. Ross, Kevin. "5.4 Probability inequalities". An Introduction to Probability and Simulation.
