
Rényi entropy


In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events.[1][2] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.[3]

The Rényi entropy is important in ecology and statistics as an index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group.[4][5] In theoretical computer science, the min-entropy is used in the context of randomness extractors.

Definition


The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as[1]

$$\mathrm{H}_\alpha(X) = \frac{1}{1-\alpha}\log\left(\sum_{i=1}^{n} p_i^{\alpha}\right).$$

It is further defined at $\alpha = 0, 1, \infty$ as

$$\mathrm{H}_\alpha(X) = \lim_{\gamma\to\alpha}\mathrm{H}_\gamma(X).$$

Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \dots, x_n\}$ and corresponding probabilities $p_i \doteq \Pr(X = x_i)$ for $i = 1, \dots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base $e$. If the probabilities are $p_i = 1/n$ for all $i = 1, \dots, n$, then all the Rényi entropies of the distribution are equal: $\mathrm{H}_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $\mathrm{H}_\alpha(X)$ is a non-increasing function in $\alpha$.

Applications often exploit the following relation between the Rényi entropy and the α-norm of the vector of probabilities:

$$\mathrm{H}_\alpha(X) = \frac{\alpha}{1-\alpha}\log\left(\left\|P\right\|_\alpha\right).$$

Here, the discrete probability distribution $P = (p_1, \dots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$.
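As a concrete illustration of the definition and of the α-norm relation, the following sketch computes $\mathrm{H}_\alpha$ for a small hand-picked distribution (the probabilities and the choice of base 2 are only for illustration), handling the limiting orders $\alpha = 1$ and $\alpha = \infty$ separately.

```python
import numpy as np

def renyi_entropy(p, alpha, base=2):
    """Rényi entropy of order alpha for a discrete distribution p (illustrative sketch)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # drop zero-probability outcomes (they do not contribute)
    if np.isclose(alpha, 1.0):          # alpha = 1 is defined as the limit: the Shannon entropy
        return -np.sum(p * np.log(p)) / np.log(base)
    if np.isinf(alpha):                 # alpha = infinity: the min-entropy
        return -np.log(p.max()) / np.log(base)
    return np.log(np.sum(p ** alpha)) / ((1 - alpha) * np.log(base))

p = [0.5, 0.25, 0.125, 0.125]
for a in [0, 0.5, 1, 2, np.inf]:
    print(a, renyi_entropy(p, a))       # values are non-increasing in alpha

# The alpha-norm relation: H_alpha = alpha/(1-alpha) * log ||P||_alpha  (for alpha != 0, 1, inf)
alpha = 2.0
norm_form = alpha / (1 - alpha) * np.log2(np.linalg.norm(p, ord=alpha))
assert np.isclose(norm_form, renyi_entropy(p, alpha))
```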

The Rényi entropy for any $\alpha \geq 0$ is Schur concave, a property that can be proven by the Schur–Ostrowski criterion.

Special cases

Figure: Rényi entropy of a random variable with two possible outcomes against $p_1$, where $P = (p_1, 1 - p_1)$. Shown are $\mathrm{H}_0$, $\mathrm{H}_1$, $\mathrm{H}_2$ and $\mathrm{H}_\infty$, with the unit on the vertical axis being the shannon.

As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.

Hartley or max-entropy


$\mathrm{H}_0(X) = \log n$, where $n$ is the number of non-zero probabilities.[6] If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet $\mathcal{A}$ of $X$, sometimes called the Hartley entropy of $X$:

$$\mathrm{H}_0(X) = \log n = \log|\mathcal{A}|.$$

Shannon entropy


The limiting value of $\mathrm{H}_\alpha$ as $\alpha \to 1$ is the Shannon entropy:[7]

$$\mathrm{H}_1(X) \equiv \lim_{\alpha\to 1}\mathrm{H}_\alpha(X) = -\sum_{i=1}^{n} p_i \log p_i.$$
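A quick numerical check of this limit, using an arbitrary example distribution: evaluating the general formula at orders close to 1 approaches the Shannon entropy.

```python
import numpy as np

p = np.array([0.6, 0.3, 0.1])
shannon = -np.sum(p * np.log(p))              # Shannon entropy in nats

# H_alpha for alpha close to (but not equal to) 1 approaches the Shannon entropy
for alpha in [0.9, 0.99, 1.01, 1.1]:
    h = np.log(np.sum(p ** alpha)) / (1 - alpha)
    print(alpha, h, abs(h - shannon))          # the gap shrinks as alpha -> 1
```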

Collision entropy


Collision entropy, sometimes just called "Rényi entropy", refers to the case $\alpha = 2$:

$$\mathrm{H}_2(X) = -\log\sum_{i=1}^{n} p_i^2 = -\log P(X = Y),$$

where $X$ and $Y$ are independent and identically distributed. The collision entropy is related to the index of coincidence. It is the negative logarithm of the Simpson diversity index.
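The identity $\mathrm{H}_2(X) = -\log P(X = Y)$ can be illustrated by simulation; the distribution and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])

h2 = -np.log(np.sum(p ** 2))              # collision entropy (in nats)

# Estimate the collision probability P(X = Y) for independent X, Y ~ p by simulation
n = 200_000
x = rng.choice(len(p), size=n, p=p)
y = rng.choice(len(p), size=n, p=p)
p_collision = np.mean(x == y)

print(h2, -np.log(p_collision))           # the two values should be close
```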

Min-entropy

Main article: Min-entropy

In the limit as $\alpha \to \infty$, the Rényi entropy $\mathrm{H}_\alpha$ converges to the min-entropy $\mathrm{H}_\infty$:

$$\mathrm{H}_\infty(X) \doteq \min_i(-\log p_i) = -(\max_i \log p_i) = -\log\max_i p_i.$$

Equivalently, the min-entropy $\mathrm{H}_\infty(X)$ is the largest real number $b$ such that all events occur with probability at most $2^{-b}$.
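A minimal sketch of this characterization for a hypothetical distribution: the min-entropy in bits is $-\log_2\max_i p_i$, and every outcome indeed has probability at most $2^{-b}$ for that value of $b$.

```python
import numpy as np

p = np.array([0.55, 0.25, 0.15, 0.05])

h_min = -np.log2(p.max())                 # min-entropy in bits

# Equivalent characterization: the largest b with every p_i <= 2**(-b)
b = h_min
assert np.all(p <= 2.0 ** (-b) + 1e-12)
print(h_min)                              # about 0.862 bits, even though there are 4 outcomes
```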

The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.

The min-entropy has important applications for randomness extractors in theoretical computer science: extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.

Inequalities for different orders α


That $\mathrm{H}_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_i$ can be proven by differentiation,[8] as

$$-\frac{d\mathrm{H}_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2}\sum_{i=1}^{n} z_i\log(z_i/p_i) = \frac{1}{(1-\alpha)^2} D_{\text{KL}}(z\|p),$$

which is proportional to the Kullback–Leibler divergence (which is always non-negative), where $z_i = p_i^\alpha / \sum_{j=1}^{n} p_j^\alpha$. In particular, it is strictly positive except when the distribution is uniform.
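The derivative formula can be checked numerically against a finite-difference approximation; the distribution and order below are arbitrary, and $z$ is the reweighted distribution defined above.

```python
import numpy as np

def renyi(p, a):
    return np.log(np.sum(p ** a)) / (1 - a)

p = np.array([0.5, 0.3, 0.2])
alpha, eps = 1.7, 1e-5

# Finite-difference approximation of -dH_alpha/dalpha
lhs = -(renyi(p, alpha + eps) - renyi(p, alpha - eps)) / (2 * eps)

# Closed form: D_KL(z || p) / (1 - alpha)^2 with z_i proportional to p_i^alpha
z = p ** alpha / np.sum(p ** alpha)
rhs = np.sum(z * np.log(z / p)) / (1 - alpha) ** 2

print(lhs, rhs)   # both should be positive and nearly equal
```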

At the $\alpha \to 1$ limit, we have $-\frac{d\mathrm{H}_\alpha}{d\alpha} \to \frac{1}{2}\sum_i p_i\left(\ln p_i + \mathrm{H}(p)\right)^2$.

In particular cases inequalities can be proven also by Jensen's inequality:[9][10]

$$\log n = \mathrm{H}_0 \geq \mathrm{H}_1 \geq \mathrm{H}_2 \geq \mathrm{H}_\infty.$$

For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have[11][12]

$$\mathrm{H}_2 \leq 2\mathrm{H}_\infty.$$

On the other hand, the Shannon entropy $\mathrm{H}_1$ can be arbitrarily high for a random variable $X$ that has a given min-entropy. An example of this is given by the sequence of random variables $X_n \sim \{0, \dots, n\}$ for $n \geq 1$ with $P(X_n = 0) = 1/2$ and $P(X_n = x) = 1/(2n)$ for $x = 1, \dots, n$, since $\mathrm{H}_\infty(X_n) = \log 2$ but $\mathrm{H}_1(X_n) = (\log 2 + \log 2n)/2$.
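A short sketch of this family of examples, computing the min-entropy and Shannon entropy (in nats) for increasing $n$:

```python
import numpy as np

# The family X_n: P(X_n = 0) = 1/2 and P(X_n = x) = 1/(2n) for x = 1, ..., n
for n in [1, 10, 100, 1000]:
    p = np.array([0.5] + [0.5 / n] * n)
    h_min = -np.log(p.max())                     # stays at log 2
    h_shannon = -np.sum(p * np.log(p))           # grows like (log 2 + log 2n)/2
    print(n, h_min, h_shannon, (np.log(2) + np.log(2 * n)) / 2)
```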

Rényi divergence


As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[13]

The Rényi divergence of order $\alpha$ or alpha-divergence of a distribution $P$ from a distribution $Q$ is defined to be

$$D_\alpha(P\|Q) = \frac{1}{\alpha-1}\log\left(\sum_{i=1}^{n}\frac{p_i^\alpha}{q_i^{\alpha-1}}\right) = \frac{1}{\alpha-1}\log \mathbb{E}_{i\sim p}\left[\left(p_i/q_i\right)^{\alpha-1}\right]$$

when $0 < \alpha < \infty$ and $\alpha \neq 1$. We can define the Rényi divergence for the special values $\alpha = 0, 1, \infty$ by taking a limit, and in particular the limit $\alpha \to 1$ gives the Kullback–Leibler divergence.
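A minimal sketch of the definition for discrete distributions, with the $\alpha \to 1$ limit handled as the Kullback–Leibler divergence; it assumes $q_i > 0$ wherever $p_i > 0$, and the example distributions are arbitrary.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) in nats (sketch; assumes q_i > 0 wherever p_i > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):                    # limit alpha -> 1: Kullback-Leibler divergence
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return np.log(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
for a in [0.5, 1.0, 2.0]:
    print(a, renyi_divergence(p, q, a))           # nondecreasing in alpha
```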

Some special cases:

- $D_0(P\|Q) = -\log Q(\{i : p_i > 0\})$: minus the log of the $Q$-probability of the support of $P$;
- $D_{1/2}(P\|Q) = -2\log\sum_{i=1}^{n}\sqrt{p_i q_i}$: minus twice the logarithm of the Bhattacharyya coefficient;
- $D_1(P\|Q) = \sum_{i=1}^{n} p_i\log\frac{p_i}{q_i}$: the Kullback–Leibler divergence;
- $D_2(P\|Q) = \log\sum_{i=1}^{n}\frac{p_i^2}{q_i}$: the log of the expected ratio of the probabilities under $P$;
- $D_\infty(P\|Q) = \log\sup_i\frac{p_i}{q_i}$: the log of the maximum ratio of the probabilities.

The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P\|Q)$ is greater than or equal to zero, and zero only when $P = Q$. For any fixed distributions $P$ and $Q$, the Rényi divergence is nondecreasing as a function of its order $\alpha$, and it is continuous on the set of $\alpha$ for which it is finite.[13] It is also called, for the sake of brevity, the information of order $\alpha$ obtained if the distribution $P$ is replaced by the distribution $Q$.[1]

Financial interpretation


A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines the official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows:[14]

$$\text{ExpectedRate} = \frac{1}{R}\,D_1(b\|m) + \frac{R-1}{R}\,D_{1/R}(b\|m),$$

where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion).

If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure:[14]

$$\text{RealizedRate} = \frac{1}{R}\,\Big(D_1(p\|m) - D_1(p\|b)\Big) + \frac{R-1}{R}\,D_{1/R}(b\|m).$$
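A sketch of these two formulas for a hypothetical three-outcome game; the market odds $m$, belief $b$, true distribution $p$ and risk aversion $R$ below are made-up inputs, and the divergences are computed in nats.

```python
import numpy as np

def D(p, q, alpha):
    """Rényi divergence D_alpha(p || q) in nats, with the alpha -> 1 limit (KL) handled separately."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):
        return np.sum(p * np.log(p / q))
    return np.log(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

# Hypothetical three-outcome game: market odds m, investor belief b, true probabilities p
m = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
p = np.array([0.45, 0.35, 0.2])
R = 2.0                                    # Arrow-Pratt relative risk aversion

expected_rate = D(b, m, 1) / R + (R - 1) / R * D(b, m, 1 / R)
realized_rate = (D(p, m, 1) - D(p, b, 1)) / R + (R - 1) / R * D(b, m, 1 / R)
print(expected_rate, realized_rate)
```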

Properties specific to α = 1


The value $\alpha = 1$, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly:

$$\mathrm{H}(A,X) = \mathrm{H}(A) + \mathbb{E}_{a\sim A}\big[\mathrm{H}(X|A=a)\big]$$

for the absolute entropies, and

$$D_{\mathrm{KL}}(p(x|a)p(a)\|m(x,a)) = D_{\mathrm{KL}}(p(a)\|m(a)) + \mathbb{E}_{p(a)}\{D_{\mathrm{KL}}(p(x|a)\|m(x|a))\}$$

for the relative entropies.

The latter in particular means that if we seek a distribution $p(x,a)$ which minimizes the divergence from some underlying prior measure $m(x,a)$, and we acquire new information which only affects the distribution of $a$, then the distribution $p(x|a)$ remains $m(x|a)$, unchanged.

The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when $A$ and $X$ are independent, so that if $p(A,X) = p(A)p(X)$, then

$$\mathrm{H}_\alpha(A,X) = \mathrm{H}_\alpha(A) + \mathrm{H}_\alpha(X)$$

and

$$D_\alpha(P(A)P(X)\|Q(A)Q(X)) = D_\alpha(P(A)\|Q(A)) + D_\alpha(P(X)\|Q(X)).$$
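The additivity of the Rényi entropy under independence is easy to verify numerically for a product distribution; the marginals and the order below are arbitrary.

```python
import numpy as np

def H(p, alpha):
    return np.log(np.sum(np.asarray(p, float) ** alpha)) / (1 - alpha)

pA = np.array([0.7, 0.3])
pX = np.array([0.5, 0.3, 0.2])
joint = np.outer(pA, pX).ravel()          # p(A, X) = p(A) p(X): independent variables

alpha = 3.0
print(H(joint, alpha), H(pA, alpha) + H(pX, alpha))   # equal, up to floating point
```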

The stronger properties of the $\alpha = 1$ quantities allow the definition of conditional information and mutual information from communication theory.

Exponential families


The Rényi entropies and divergences for an exponential family admit simple expressions:[15]

$$\mathrm{H}_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha}\left(F(\alpha\theta) - \alpha F(\theta) + \log E_p\left[e^{(\alpha-1)k(x)}\right]\right)$$

and

$$D_\alpha(p:q) = \frac{J_{F,\alpha}(\theta:\theta')}{1-\alpha},$$

where

$$J_{F,\alpha}(\theta:\theta') = \alpha F(\theta) + (1-\alpha)F(\theta') - F(\alpha\theta + (1-\alpha)\theta')$$

is a Jensen difference divergence.
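As a check of these expressions, the sketch below uses the Bernoulli family in its standard natural parameterization, where $F(\theta) = \log(1 + e^\theta)$, the success probability is the logistic function of $\theta$, and the carrier term $k(x) = 0$, so the $\log E_p[\cdot]$ term vanishes; the parameter values are arbitrary.

```python
import numpy as np

# Bernoulli as an exponential family: natural parameter theta, log-partition F(theta) = log(1 + e^theta),
# success probability p = sigmoid(theta), and carrier term k(x) = 0 (so the log E_p[...] term vanishes).
F = lambda t: np.log1p(np.exp(t))
sigmoid = lambda t: 1 / (1 + np.exp(-t))

theta, theta2, alpha = 0.8, -0.4, 2.5
p, q = sigmoid(theta), sigmoid(theta2)

# Rényi entropy: direct formula vs. (F(alpha*theta) - alpha*F(theta)) / (1 - alpha)
h_direct = np.log(p ** alpha + (1 - p) ** alpha) / (1 - alpha)
h_expfam = (F(alpha * theta) - alpha * F(theta)) / (1 - alpha)

# Rényi divergence: direct formula vs. Jensen difference J_{F,alpha}(theta : theta') / (1 - alpha)
d_direct = np.log(p ** alpha * q ** (1 - alpha) + (1 - p) ** alpha * (1 - q) ** (1 - alpha)) / (alpha - 1)
J = alpha * F(theta) + (1 - alpha) * F(theta2) - F(alpha * theta + (1 - alpha) * theta2)
d_expfam = J / (1 - alpha)

print(h_direct, h_expfam)   # should match
print(d_direct, d_expfam)   # should match
```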

Quantum information


The Rényi entropy can be generalized to the quantum case as

$$H_\alpha(\rho) := \frac{1}{1-\alpha}\log\operatorname{tr}(\rho^\alpha)$$

where $\rho$ is a normalized density matrix. Its limit as $\alpha \to 1$ is the von Neumann entropy.[16]
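Since $\rho$ is Hermitian, $\operatorname{tr}(\rho^\alpha)$ can be computed from its eigenvalues; the sketch below does this for a hypothetical qubit state and includes the $\alpha \to 1$ (von Neumann) limit.

```python
import numpy as np

def quantum_renyi_entropy(rho, alpha):
    """H_alpha(rho) = log tr(rho^alpha) / (1 - alpha), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    if np.isclose(alpha, 1.0):                   # limit alpha -> 1: von Neumann entropy
        return -np.sum(evals * np.log(evals))
    return np.log(np.sum(evals ** alpha)) / (1 - alpha)

# A hypothetical qubit density matrix (Hermitian, unit trace, positive semidefinite)
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
for a in [0.5, 1.0, 2.0, 100.0]:
    print(a, quantum_renyi_entropy(rho, a))
```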

The Rényi relative entropy (or divergence) can be generalized in two possible ways: the straightforward quantum Rényi relative entropy

$$D_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log\operatorname{tr}(\rho^\alpha\sigma^{1-\alpha})$$

and the sandwiched Rényi relative entropy

$$\tilde{D}_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log\operatorname{tr}\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}\right]$$

Both converge to the quantum relative entropy in the limit $\alpha \to 1$. However, the sandwiched Rényi relative entropy has more convenient properties when used to define conditional entropies,[16] and has found applications in quantum key distribution.[17][18]
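Both quantities can be evaluated for small density matrices via eigendecompositions; the qubit states below are hypothetical, and the two divergences coincide when $\rho$ and $\sigma$ commute but differ in general.

```python
import numpy as np

def mpow(h, a):
    """Fractional power of a Hermitian positive semidefinite matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(h)
    w = np.clip(w, 0, None)
    return (v * w ** a) @ v.conj().T

def straightforward_renyi(rho, sigma, alpha):
    """Straightforward quantum Rényi relative entropy, as defined above."""
    return np.log(np.trace(mpow(rho, alpha) @ mpow(sigma, 1 - alpha)).real) / (alpha - 1)

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched quantum Rényi relative entropy, as defined above."""
    s = mpow(sigma, (1 - alpha) / (2 * alpha))
    return np.log(np.trace(mpow(s @ rho @ s, alpha)).real) / (alpha - 1)

rho   = np.array([[0.7, 0.2], [0.2, 0.3]])   # hypothetical full-rank qubit states
sigma = np.array([[0.6, 0.0], [0.0, 0.4]])
for a in [0.5, 0.99, 2.0]:
    print(a, straightforward_renyi(rho, sigma, a), sandwiched_renyi(rho, sigma, a))
```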


Notes

1. Rényi (1961)
2. Rioul (2021)
3. Barros, Vanessa; Rousseau, Jérôme (2021-06-01). "Shortest Distance Between Multiple Orbits and Generalized Fractal Dimensions". Annales Henri Poincaré. 22 (6): 1853–1885. arXiv:1912.07516. Bibcode:2021AnHP...22.1853B. doi:10.1007/s00023-021-01039-y. ISSN 1424-0661. S2CID 209376774.
4. Franchini, Its & Korepin (2008)
5. Its & Korepin (2010)
6. RFC 4086, page 6
7. Bromiley, Thacker & Bouhova-Thacker (2004)
8. Beck & Schlögl (1993)
9. $\mathrm{H}_1 \geq \mathrm{H}_2$ holds because $\sum_{i=1}^{M} p_i\log p_i \leq \log\sum_{i=1}^{M} p_i^2$.
10. $\mathrm{H}_\infty \leq \mathrm{H}_2$ holds because $\log\sum_{i=1}^{n} p_i^2 \leq \log\sup_i p_i\left(\sum_{i=1}^{n} p_i\right) = \log\sup_i p_i$.
11. $\mathrm{H}_2 \leq 2\mathrm{H}_\infty$ holds because $\log\sum_{i=1}^{n} p_i^2 \geq \log\sup_i p_i^2 = 2\log\sup_i p_i$.
12. Devroye, Luc; Györfi, Laszlo; Lugosi, Gabor (1996-04-04). A Probabilistic Theory of Pattern Recognition (Corrected ed.). New York, NY: Springer. ISBN 978-0-387-94618-4.
13. Van Erven, Tim; Harremoës, Peter (2014). "Rényi Divergence and Kullback–Leibler Divergence". IEEE Transactions on Information Theory. 60 (7): 3797–3820. arXiv:1206.2459. Bibcode:2014ITIT...60.3797V. doi:10.1109/TIT.2014.2320500. S2CID 17522805.
14. Soklakov (2018)
15. Nielsen & Nock (2011)
16. Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco (1 December 2013). "On quantum Rényi entropies: A new generalization and some properties". Journal of Mathematical Physics. 54 (12): 122203. arXiv:1306.3142. Bibcode:2013JMP....54l2203M. doi:10.1063/1.4838856.
17. Arqand, Amir; A. Hahn, Thomas; Y.-Z. Tan, Ernest (2024). "Generalized Rényi entropy accumulation theorem and generalized quantum probability estimation". arXiv:2405.05912 [quant-ph].
18. Chung, Rebecca R. B.; Ng, Nelly H. Y.; Cai, Yu (14 July 2025). "Generalized numerical framework for improved finite-sized key rates with Rényi entropy". Physical Review A. 112 (1): 012612. arXiv:2502.02319. Bibcode:2025PhRvA.112a2612C. doi:10.1103/tyts-8v8j.
