In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events.[1][2] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.[3]
The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as[1]
$$\mathrm{H}_\alpha(X) = \frac{1}{1-\alpha}\log\left(\sum_{i=1}^n p_i^\alpha\right).$$
It is further defined at $\alpha = 0, 1, \infty$ as
$$\mathrm{H}_\alpha(X) = \lim_{\gamma \to \alpha} \mathrm{H}_\gamma(X).$$
Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \ldots, x_n\}$ and corresponding probabilities $p_i \doteq \Pr(X = x_i)$ for $i = 1, \ldots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e. If the probabilities are $p_i = 1/n$ for all $i$, then all the Rényi entropies of the distribution are equal: $\mathrm{H}_\alpha(X) = \log n$. In general, for all discrete random variables $X$, $\mathrm{H}_\alpha(X)$ is a non-increasing function in $\alpha$.
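A minimal numerical sketch of this definition in Python with NumPy (the function name `renyi_entropy` and the example distribution are illustrative, not from the source), including the limiting cases $\alpha = 0, 1, \infty$ discussed below:

```python
import numpy as np

def renyi_entropy(p, alpha, base=2):
    """Rényi entropy H_alpha of a discrete distribution p, in units of log `base`."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # events with zero probability do not contribute
    if alpha == 0:                    # Hartley entropy: log of the support size
        return np.log(len(p)) / np.log(base)
    if alpha == 1:                    # Shannon entropy, the limit alpha -> 1
        return -np.sum(p * np.log(p)) / np.log(base)
    if alpha == np.inf:               # min-entropy, the limit alpha -> infinity
        return -np.log(np.max(p)) / np.log(base)
    return np.log(np.sum(p ** alpha)) / ((1 - alpha) * np.log(base))

p = [0.5, 0.25, 0.125, 0.125]
for a in [0, 0.5, 1, 2, np.inf]:
    print(a, renyi_entropy(p, a))     # the values are non-increasing in alpha
```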
Applications often exploit the following relation between the Rényi entropy and the $\alpha$-norm of the vector of probabilities:
$$\mathrm{H}_\alpha(X) = \frac{\alpha}{1-\alpha}\log\left(\|P\|_\alpha\right).$$
Here, the discrete probability distribution $P = (p_1, \ldots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^n p_i = 1$.
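As a quick numerical check of this identity (a sketch under the same assumptions as above; the distribution and order are made up), both sides agree:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
alpha = 3.0

lhs = np.log(np.sum(p ** alpha)) / (1 - alpha)                       # H_alpha in nats
rhs = (alpha / (1 - alpha)) * np.log(np.linalg.norm(p, ord=alpha))   # (alpha/(1-alpha)) log ||P||_alpha
print(np.isclose(lhs, rhs))   # True
```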
Figure: Rényi entropy of a random variable with two possible outcomes against $p_1$, where $P = (p_1, 1 - p_1)$. Shown are $\mathrm{H}_0$, $\mathrm{H}_1$, $\mathrm{H}_2$ and $\mathrm{H}_\infty$, with the unit on the vertical axis being the shannon.
As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.
$\mathrm{H}_0(X)$ is $\log n$, where $n$ is the number of non-zero probabilities.[6] If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet ($\mathcal{A}$) of $X$, sometimes called the Hartley entropy of $X$,
$$\mathrm{H}_0(X) = \log |\mathcal{A}|.$$
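A small illustration (Python with NumPy; the probabilities are made up): $\mathrm{H}_0$ depends only on how many outcomes carry nonzero probability, not on the probabilities themselves.

```python
import numpy as np

p = np.array([0.7, 0.3, 0.0, 0.0])   # only two outcomes carry any probability mass
support_size = np.count_nonzero(p)
hartley = np.log2(support_size)      # H_0 in shannons
print(support_size, hartley)         # 2, 1.0
```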
In the limit as $\alpha \to \infty$, the Rényi entropy $\mathrm{H}_\alpha$ converges to the min-entropy $\mathrm{H}_\infty$:
$$\mathrm{H}_\infty(X) \doteq \min_i\left(-\log p_i\right) = -\log \max_i p_i.$$
Equivalently, the min-entropy $\mathrm{H}_\infty(X)$ is the largest real number $b$ such that all events occur with probability at most $2^{-b}$.
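For instance (an illustrative sketch; the distribution is made up):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
b = -np.log2(p.max())            # min-entropy in shannons: 1.0, since the most likely event has probability 1/2
print(b)
print(np.all(p <= 2 ** -b))      # True: every event has probability at most 2^(-b)
```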
The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.
The min-entropy has important applications for randomness extractors in theoretical computer science: extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.
That $\mathrm{H}_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_1, \ldots, p_n$ can be proven by differentiation,[8] as
$$-\frac{d\mathrm{H}_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2}\sum_{i=1}^n z_i \log\frac{z_i}{p_i},$$
which is proportional to the Kullback–Leibler divergence $D_{\mathrm{KL}}(z \| p)$ (which is always non-negative), where $z_i = p_i^\alpha / \sum_{j=1}^n p_j^\alpha$. In particular, it is strictly positive except when the distribution is uniform.
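A numerical sanity check of this derivative identity (a sketch in Python with NumPy; the distribution, the order $\alpha$, and the step size are illustrative), comparing a finite-difference derivative of $\mathrm{H}_\alpha$ with the Kullback–Leibler term:

```python
import numpy as np

def H(p, alpha):
    """Rényi entropy in nats for alpha != 1."""
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

p = np.array([0.5, 0.3, 0.2])
alpha, eps = 1.7, 1e-6

# central-difference estimate of dH_alpha / d(alpha)
dH = (H(p, alpha + eps) - H(p, alpha - eps)) / (2 * eps)

# the tilted distribution z_i = p_i^alpha / sum_j p_j^alpha from the formula above
z = p ** alpha / np.sum(p ** alpha)
kl = np.sum(z * np.log(z / p))                     # D_KL(z || p)

print(np.isclose(-dH, kl / (1 - alpha) ** 2))      # True
```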
For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have[11][12]
$$\mathrm{H}_2 \le 2\,\mathrm{H}_\infty.$$
On the other hand, the Shannon entropy can be arbitrarily high for a random variable that has a given min-entropy. An example of this is given by the sequence of random variables $X_n$, for $n \geq 1$, taking values in $\{0, 1, \ldots, n\}$ such that $P(X_n = 0) = 1/2$ and $P(X_n = i) = 1/(2n)$ for $i = 1, \ldots, n$, since $\mathrm{H}_\infty(X_n) = \log 2$ but $\mathrm{H}_1(X_n) = (\log 2 + \log 2n)/2$.
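This behaviour is easy to reproduce numerically (a sketch using the same family of distributions as in the example above; the values of $n$ are arbitrary):

```python
import numpy as np

for n in [10, 100, 1000]:
    # P(X_n = 0) = 1/2 and P(X_n = i) = 1/(2n) for i = 1, ..., n
    p = np.concatenate(([0.5], np.full(n, 0.5 / n)))
    shannon = -np.sum(p * np.log2(p))        # grows like (1 + log2(2n)) / 2
    min_entropy = -np.log2(p.max())          # stays at 1 shannon
    print(n, round(shannon, 3), min_entropy)
```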
As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[13]
The Rényi divergence of order $\alpha$, or alpha-divergence, of a distribution P from a distribution Q is defined to be
$$D_\alpha(P \| Q) = \frac{1}{\alpha - 1}\log\left(\sum_{i=1}^n \frac{p_i^\alpha}{q_i^{\alpha-1}}\right)$$
when $0 < \alpha < \infty$ and $\alpha \neq 1$. We can define the Rényi divergence for the special values $\alpha = 0, 1, \infty$ by taking a limit, and in particular the limit $\alpha \to 1$ gives the Kullback–Leibler divergence.
$D_2(P \| Q) = \log\left\langle \frac{p_i}{q_i} \right\rangle = \log \sum_i \frac{p_i^2}{q_i}$: the log of the expected ratio of the probabilities;
$D_\infty(P \| Q) = \log \sup_i \frac{p_i}{q_i}$: the log of the maximum ratio of the probabilities.
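A minimal sketch of the divergence (Python with NumPy; the function name `renyi_divergence` and the two distributions are illustrative), including the Kullback–Leibler and maximum-ratio limits listed above:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) in nats for strictly positive discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if alpha == 1:                            # Kullback-Leibler divergence (limit alpha -> 1)
        return np.sum(p * np.log(p / q))
    if alpha == np.inf:                       # log of the maximum probability ratio
        return np.log(np.max(p / q))
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.4, 0.4, 0.2])

print(renyi_divergence(p, q, 1))             # KL divergence
print(renyi_divergence(p, q, 1 + 1e-6))      # approaches the KL divergence as alpha -> 1
print([round(renyi_divergence(p, q, a), 4) for a in [0.5, 1, 2, np.inf]])  # nondecreasing in alpha
```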
The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P \| Q)$ is greater than or equal to zero, and zero only when P = Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order $\alpha$, and it is continuous on the set of $\alpha$ for which it is finite.[13] The quantity $D_\alpha(P \| Q)$ is also called, for the sake of brevity, the information of order $\alpha$ obtained if the distribution P is replaced by the distribution Q.[1]
A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows[14]
$$\mathrm{ExpectedRate} = \frac{1}{R}\,D_1(b \| m) + \frac{R-1}{R}\,D_{1/R}(b \| m),$$
where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion).
If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure[14]
$$\mathrm{RealizedRate} = \frac{1}{R}\left(D_1(p \| m) - D_1(p \| b)\right) + \frac{R-1}{R}\,D_{1/R}(b \| m).$$
The latter in particular means that if we seek a distributionp(x,a) which minimizes the divergence from some underlying prior measurem(x,a), and we acquire new information which only affects the distribution ofa, then the distribution ofp(x|a) remainsm(x|a), unchanged.
The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when A and X are independent, so that if $p(A, X) = p(A)p(X)$, then
$$\mathrm{H}_\alpha(A, X) = \mathrm{H}_\alpha(A) + \mathrm{H}_\alpha(X)$$
and
$$D_\alpha\big(P(A)P(X) \,\|\, Q(A)Q(X)\big) = D_\alpha\big(P(A) \,\|\, Q(A)\big) + D_\alpha\big(P(X) \,\|\, Q(X)\big).$$
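A numerical check of the additivity of the divergence for independent pairs (a sketch; the marginal distributions are made up, and the helper reuses the general formula from above):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) in nats, for alpha not equal to 1."""
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

alpha = 2.0
pA, qA = np.array([0.7, 0.3]), np.array([0.5, 0.5])
pX, qX = np.array([0.2, 0.5, 0.3]), np.array([0.4, 0.4, 0.2])

# joint distributions of the independent pair (A, X)
p_joint = np.outer(pA, pX).ravel()
q_joint = np.outer(qA, qX).ravel()

lhs = renyi_divergence(p_joint, q_joint, alpha)
rhs = renyi_divergence(pA, qA, alpha) + renyi_divergence(pX, qX, alpha)
print(np.isclose(lhs, rhs))   # True: the divergence combines additively
```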
The Rényi relative entropy (or divergence) can be generalized in two possible ways: the straightforward quantum Rényi relative entropy
$$D_\alpha(\rho \| \sigma) = \frac{1}{\alpha - 1}\log \operatorname{Tr}\!\left[\rho^\alpha \sigma^{1-\alpha}\right]$$
and the sandwiched Rényi relative entropy
$$\widetilde{D}_\alpha(\rho \| \sigma) = \frac{1}{\alpha - 1}\log \operatorname{Tr}\!\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\!\alpha}\right].$$
Both converge to the quantum relative entropy in the limit $\alpha \to 1$. However, the sandwiched Rényi relative entropy has more convenient properties when used to define conditional entropies,[16] and has found applications in quantum key distribution.[17][18]
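A numerical sketch of the two definitions (Python with NumPy; the function names, the helper `mat_power`, and the two qubit states are illustrative), checking that each approaches the quantum relative entropy as $\alpha \to 1$:

```python
import numpy as np

def mat_power(rho, t):
    """Fractional power of a positive definite Hermitian matrix via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return (v * w ** t) @ v.conj().T

def petz_renyi(rho, sigma, alpha):
    """Straightforward quantum Rényi relative entropy."""
    return np.log(np.trace(mat_power(rho, alpha) @ mat_power(sigma, 1 - alpha)).real) / (alpha - 1)

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched quantum Rényi relative entropy."""
    s = mat_power(sigma, (1 - alpha) / (2 * alpha))
    return np.log(np.trace(mat_power(s @ rho @ s, alpha)).real) / (alpha - 1)

# two full-rank qubit density matrices (made-up example)
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.5, 0.1], [0.1, 0.5]])

# quantum relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)]
w_r, v_r = np.linalg.eigh(rho)
w_s, v_s = np.linalg.eigh(sigma)
log_rho = (v_r * np.log(w_r)) @ v_r.conj().T
log_sigma = (v_s * np.log(w_s)) @ v_s.conj().T
d_relative = np.trace(rho @ (log_rho - log_sigma)).real

for a in [0.999, 1.001]:
    # both quantities are close to the quantum relative entropy near alpha = 1
    print(petz_renyi(rho, sigma, a), sandwiched_renyi(rho, sigma, a), d_relative)
```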
^ Devroye, Luc; Györfi, Laszlo; Lugosi, Gabor (1996-04-04). A Probabilistic Theory of Pattern Recognition (Corrected ed.). New York, NY: Springer. ISBN 978-0-387-94618-4.
^ a b Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco (1 December 2013). "On quantum Rényi entropies: A new generalization and some properties". Journal of Mathematical Physics. 54 (12): 122203. arXiv:1306.3142. Bibcode:2013JMP....54l2203M. doi:10.1063/1.4838856.
^ Arqand, Amir; Hahn, Thomas A.; Tan, Ernest Y.-Z. (2024). "Generalized Rényi entropy accumulation theorem and generalized quantum probability estimation". arXiv:2405.05912 [quant-ph].
^ Chung, Rebecca R. B.; Ng, Nelly H. Y.; Cai, Yu (14 July 2025). "Generalized numerical framework for improved finite-sized key rates with Rényi entropy". Physical Review A. 112 (1): 012612. arXiv:2502.02319. Bibcode:2025PhRvA.112a2612C. doi:10.1103/tyts-8v8j.