Universal code (data compression)

From Wikipedia, the free encyclopedia
Type of prefix code
[Figure: codeword lengths of Fibonacci, Elias gamma, and Elias delta coding versus plain binary coding]
[Figure: codeword lengths of Rice coding with k = 2, 3, 4, 5, 8, 16 versus plain binary coding]

In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity.

In general, most prefix codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message. Universal codes are generally not used for precisely known probability distributions, and no universal code is known to be optimal for any distribution used in practice.

A universal code should not be confused with universal source coding, in which the data compression method need not be a fixed prefix code and the ratio between actual and optimal expected lengths must approach one. However, note that an asymptotically optimal universal code can be used on independent identically-distributed sources, by using increasingly large blocks, as a method of universal source coding.

Universal and non-universal codes


These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal:

- Elias gamma coding *
- Elias delta coding *‡
- Elias omega coding *‡
- Exp-Golomb coding *
- Fibonacci coding
- Levenshtein coding ‡

These are non-universal ones:

- unary coding *
- Rice coding
- Golomb coding

Their nonuniversality can be observed by noticing that, if any of these are used to code the Gauss–Kuzmin distribution or the Zeta distribution with parameter s = 2, the expected codeword length is infinite. For example, using unary coding on the Zeta distribution yields an expected length of

E(l) = \frac{6}{\pi^2} \sum_{l=1}^{\infty} \frac{1}{l} = \infty.

On the other hand, using the universal Elias gamma code for the Gauss–Kuzmin distribution gives an expected codeword length of about 3.51 bits, close to the entropy of about 3.43 bits.
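These two figures can be checked numerically. The following sketch (helper names are illustrative; the infinite sums are truncated at a large N, which introduces only a negligible tail error) computes the entropy of the Gauss–Kuzmin distribution and the expected Elias gamma codeword length under it:

```python
import math

def gauss_kuzmin(k):
    # Gauss-Kuzmin distribution: P(k) = -log2(1 - 1/(k+1)^2) for k >= 1
    return -math.log2(1 - 1 / (k + 1) ** 2)

def elias_gamma_len(n):
    # Elias gamma coding uses 2*floor(log2 n) + 1 bits for n >= 1
    return 2 * n.bit_length() - 1

N = 1_000_000  # truncation point for the sketch; tails are O(log N / N)
probs = [(k, gauss_kuzmin(k)) for k in range(1, N)]
entropy = -sum(p * math.log2(p) for _, p in probs)
expected = sum(p * elias_gamma_len(k) for k, p in probs)
print(f"entropy = {entropy:.2f} bits, E[gamma length] = {expected:.2f} bits")
```

Running this reproduces the expected length slightly above the entropy, as the text states.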

Relationship to practical compression


Huffman coding and arithmetic coding (when they can be used) give at least as good, and often better, compression than any universal code.

However, universal codes are useful when Huffman coding cannot be used — for example, when one does not know the exact probability of each message, but only knows the rankings of their probabilities.

Universal codes are also useful when Huffman codes are inconvenient. For example, when the transmitter but not the receiver knows the probabilities of the messages, Huffman coding requires an overhead of transmitting those probabilities to the receiver. Using a universal code does not have that overhead.
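As a concrete example of such a code, here is a minimal sketch of Elias gamma encoding and decoding (function names are illustrative, not from any particular library). Note that neither side needs message probabilities, only the ranking that maps messages to integers:

```python
def elias_gamma_encode(n: int) -> str:
    # For n >= 1: emit floor(log2 n) zeros, then n in binary.
    assert n >= 1
    binary = bin(n)[2:]                       # binary digits of n, no '0b' prefix
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str) -> tuple[int, str]:
    # Count leading zeros, then read that many + 1 bits as the value.
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    value = int(bits[zeros : 2 * zeros + 1], 2)
    return value, bits[2 * zeros + 1 :]       # rest of the stream

# Shorter codewords go to smaller integers, as the ranking scheme requires.
stream = "".join(elias_gamma_encode(n) for n in [1, 5, 12])
print(stream)  # 1 -> "1", 5 -> "00101", 12 -> "0001100"
```

Because the code is a prefix code, the concatenated stream decodes unambiguously without separators.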

Each universal code, like every other self-delimiting (prefix) binary code, has its own "implied probability distribution" given by P(i) = 2^(−l(i)), where l(i) is the length of the ith codeword and P(i) is the corresponding symbol's probability. If the actual message probabilities are Q(i) and the Kullback–Leibler divergence D_KL(Q‖P) is minimized by the code with lengths l(i), then the optimal Huffman code for that set of messages will be equivalent to that code. Likewise, how close a code is to optimal can be measured by this divergence. Since universal codes are simpler and faster to encode and decode than Huffman codes (which are, in turn, simpler and faster than arithmetic encoding), the universal code is preferable in cases where D_KL(Q‖P) is sufficiently small.
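This comparison can be made concrete. The sketch below (truncating both distributions at a large N purely for illustration) computes the implied distribution of the Elias gamma code and its Kullback–Leibler divergence from a Zipf source with s = 2:

```python
import math

def elias_gamma_len(n):
    # Elias gamma codeword length: 2*floor(log2 n) + 1 bits
    return 2 * n.bit_length() - 1

N = 100_000
# Implied probabilities P(i) = 2**-l(i); for Elias gamma these sum to 1.
P = [2.0 ** -elias_gamma_len(n) for n in range(1, N + 1)]
# Zipf source with s = 2, truncated and renormalised for the sketch.
raw = [1.0 / n ** 2 for n in range(1, N + 1)]
Z = sum(raw)
Q = [q / Z for q in raw]
# Redundancy (in bits per symbol) of using the gamma code on this source.
dkl = sum(q * math.log2(q / p) for q, p in zip(Q, P))
print(f"D_KL(Q || P) = {dkl:.3f} bits")
```

The divergence comes out small, which is exactly the regime in which a universal code is an attractive substitute for a custom Huffman code.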

For any geometric distribution (an exponential distribution on integers), a Golomb code is optimal. With universal codes, the implicit distribution is approximately a power law such as 1/n^2 (more precisely, a Zipf distribution). For the Fibonacci code, the implicit distribution is approximately 1/n^q, with

q = 1/\log_2(\varphi) \approx 1.44,

where φ is the golden ratio. For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with q = 1 + \log_3(4/3) \approx 1.26. These distributions thus have near-optimal codes with their respective power laws.
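For illustration, here is a minimal sketch of Fibonacci coding (the function name is illustrative). The codeword for n grows as roughly log_φ(n) bits, which is what gives the implied power-law distribution with q = 1/log_2(φ) above:

```python
def fibonacci_encode(n: int) -> str:
    # Zeckendorf representation of n (no two adjacent Fibonacci numbers),
    # written least-significant digit first, terminated by an extra '1'.
    assert n >= 1
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    if fibs[-1] > n:
        fibs.pop()
    bits = []
    for f in reversed(fibs):              # greedy, largest Fibonacci first
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(reversed(bits)) + "1"  # "11" can only appear at the end

print(fibonacci_encode(4))   # -> 1011
print(fibonacci_encode(11))  # -> 001011
```

Because Zeckendorf representations never contain two adjacent 1s, the trailing "11" unambiguously marks the end of each codeword, making the code self-delimiting.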

Retrieved from "https://en.wikipedia.org/w/index.php?title=Universal_code_(data_compression)&oldid=1305549466"