Context mixing

From Wikipedia, the free encyclopedia
Type of data compression algorithm

Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions. For example, one simple method (not necessarily the best) is to average the probabilities assigned by each model. The random forest is another method: it outputs the prediction that is the mode of the predictions output by individual models. Combining models is an active area of research in machine learning.[citation needed]

The PAQ series of data compression programs use context mixing to assign probabilities to individual bits of the input.

Application to Data Compression


Suppose that we are given two conditional probabilities, P(X|A) and P(X|B), and we wish to estimate P(X|A,B), the probability of event X given both conditions A and B. There is insufficient information for probability theory to give a result. In fact, it is possible to construct scenarios in which the result could be anything at all. But intuitively, we would expect the result to be some kind of average of the two.

The problem is important for data compression. In this application, A and B are contexts, X is the event that the next bit or symbol of the data to be compressed has a particular value, and P(X|A) and P(X|B) are the probability estimates by two independent models. The compression ratio depends on how closely the estimated probability approaches the true but unknown probability of event X. It is often the case that contexts A and B have occurred often enough to accurately estimate P(X|A) and P(X|B) by counting occurrences of X in each context, but the two contexts either have not occurred together frequently, or there are insufficient computing resources (time and memory) to collect statistics for the combined case.

For example, suppose that we are compressing a text file. We wish to predict whether the next character will be a linefeed, given that the previous character was a period (context A) and that the last linefeed occurred 72 characters ago (context B). Suppose that a linefeed previously occurred after 1 of the last 5 periods (P(X|A) = 0.2) and in 5 out of the last 10 lines at column 72 (P(X|B) = 0.5). How should these predictions be combined?

Two general approaches have been used, linear and logistic mixing. Linear mixing uses a weighted average of the predictions, weighted by evidence. In this example, P(X|B) gets more weight than P(X|A) because P(X|B) is based on a greater number of tests. Older versions of PAQ used this approach.[1] Newer versions use logistic (or neural network) mixing by first transforming the predictions into the logistic domain, log(p/(1 - p)), before averaging.[2] This effectively gives greater weight to predictions near 0 or 1, in this case P(X|A). In both cases, additional weights may be given to each of the input models and adapted to favor the models that have given the most accurate predictions in the past. All but the oldest versions of PAQ use adaptive weighting.
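
As an illustration only (not taken from any particular compressor), the following Python sketch combines the two predictions from the example above, once by evidence-weighted linear averaging and once by equal-weight averaging in the logistic domain. The counts (1 of 5 and 5 of 10) come from the example; everything else is an assumption.

    import math

    # Example predictions from the text: P(X|A) = 1/5 = 0.2, P(X|B) = 5/10 = 0.5
    predictions = [(1, 5), (5, 10)]   # (count of X, total observations) per context

    # Linear mixing: weight each prediction in proportion to its evidence (observation count).
    total = sum(n for _, n in predictions)
    p_linear = sum((k / n) * (n / total) for k, n in predictions)   # = (1 + 5) / 15 = 0.4

    # Logistic mixing: average in the logistic (stretched) domain, then squash back.
    def stretch(p):
        return math.log(p / (1 - p))

    def squash(x):
        return 1 / (1 + math.exp(-x))

    p_logistic = squash(sum(stretch(k / n) for k, n in predictions) / len(predictions))
    # stretch(0.2) ~ -1.386, stretch(0.5) = 0, mean ~ -0.693, squash gives ~ 0.333

    print(p_linear, p_logistic)

With these numbers the linear mix gives 0.4, while the equal-weight logistic mix gives about 0.33, illustrating how the logistic domain pulls the result toward the prediction nearer 0.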

Most context mixing compressors predict one bit of input at a time. The output probability is simply the probability that the next bit will be a 1.

Linear Mixing


We are given a set of predictions P_i(1) = n_1i/n_i, where n_i = n_0i + n_1i, and n_0i and n_1i are the counts of 0 and 1 bits respectively for the i'th model. The probabilities are computed by weighted addition of the 0 and 1 counts:

S_0 = ε + Σ_i w_i n_0i
S_1 = ε + Σ_i w_i n_1i
S = S_0 + S_1
P(0) = S_0 / S
P(1) = S_1 / S

where w_i is the weight of the i'th model and ε is a small positive constant that keeps the denominators nonzero.

The weights w_i are initially equal and always sum to 1. Under the initial conditions, each model is weighted in proportion to evidence. The weights are then adjusted to favor the more accurate models. Suppose that the actual bit being predicted is y (0 or 1). Then the weight adjustment is:[3]

w_i ← max[0, w_i + (y - P(1)) (S n_1i - S_1 n_i) / (S_0 S_1)]

Compression can be improved by bounding n_i so that the model weighting is better balanced. In PAQ6, whenever one of the bit counts is incremented, the part of the other count that exceeds 2 is halved. For example, after the sequence 000000001, the counts would go from (n_0, n_1) = (8, 0) to (5, 1).
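
A minimal Python sketch of this linear mixing scheme, assuming the ε smoothing term above, renormalization to keep the weights summing to 1, and the PAQ6-style count bounding; the class and variable names are illustrative and not taken from any PAQ source.

    EPS = 0.01    # assumed small constant so S_0 and S_1 are never zero
    LIMIT = 2     # PAQ6 bound: the part of the opposing count above this is halved

    class LinearMixer:
        def __init__(self, n_models):
            self.w = [1.0 / n_models] * n_models   # weights start equal and sum to 1

        def predict(self, counts):
            """counts: list of (n0, n1) pairs, one per model. Returns P(1)."""
            self.counts = counts
            self.s0 = EPS + sum(w * n0 for w, (n0, n1) in zip(self.w, counts))
            self.s1 = EPS + sum(w * n1 for w, (n0, n1) in zip(self.w, counts))
            self.p1 = self.s1 / (self.s0 + self.s1)
            return self.p1

        def update(self, y):
            """y: the actual bit (0 or 1). Shift weight toward the more accurate models."""
            s = self.s0 + self.s1
            for i, (n0, n1) in enumerate(self.counts):
                n = n0 + n1
                grad = (s * n1 - self.s1 * n) / (self.s0 * self.s1)
                self.w[i] = max(0.0, self.w[i] + (y - self.p1) * grad)
            total = sum(self.w) or 1.0
            self.w = [w / total for w in self.w]   # renormalize so weights sum to 1

    def bounded_increment(n0, n1, bit):
        """Update one model's counts: increment the count for the observed bit and
        halve the part of the other count that exceeds LIMIT (the PAQ6 rule)."""
        if bit:
            n1 += 1
            if n0 > LIMIT:
                n0 = LIMIT + (n0 - LIMIT) // 2
        else:
            n0 += 1
            if n1 > LIMIT:
                n1 = LIMIT + (n1 - LIMIT) // 2
        return n0, n1

For example, bounded_increment(8, 0, 1) returns (5, 1), matching the sequence 000000001 described above.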

Logistic Mixing


Let P_i(1) be the prediction by the i'th model that the next bit will be a 1. Then the final prediction P(1) is calculated:

x_i = stretch(P_i(1))
P(1) = squash(Σ_i w_i x_i)

where P(1) is the probability that the next bit will be a 1, P_i(1) is the probability estimated by the i'th model, and

stretch(x) = ln(x / (1 - x))
squash(x) = 1 / (1 + e^-x)   (the inverse of stretch)

After each prediction, the model is updated by adjusting the weights to minimize coding cost:

w_i ← w_i + η x_i (y - P(1))

where η is the learning rate (typically 0.002 to 0.01), y is the actual bit being predicted, and (y - P(1)) is the prediction error.
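
A minimal Python sketch of this logistic mixing rule (stretch, weighted sum, squash, then a gradient step on the weights); the initial weights, the clamping of extreme probabilities, and the particular learning rate are assumptions for illustration.

    import math

    def stretch(p):
        """Map a probability to the logistic domain: ln(p / (1 - p))."""
        return math.log(p / (1.0 - p))

    def squash(x):
        """Inverse of stretch: 1 / (1 + e^-x)."""
        return 1.0 / (1.0 + math.exp(-x))

    class LogisticMixer:
        def __init__(self, n_models, learning_rate=0.005):  # rate within the 0.002-0.01 range
            self.w = [0.0] * n_models
            self.eta = learning_rate

        def predict(self, probs):
            """probs: P_i(1) from each model. Returns the mixed P(1)."""
            # Clamp inputs away from 0 and 1 so stretch() stays finite.
            self.x = [stretch(min(max(p, 1e-6), 1.0 - 1e-6)) for p in probs]
            self.p1 = squash(sum(w * xi for w, xi in zip(self.w, self.x)))
            return self.p1

        def update(self, y):
            """y: the actual bit (0 or 1). Gradient step that reduces coding cost."""
            err = y - self.p1
            self.w = [w + self.eta * xi * err for w, xi in zip(self.w, self.x)]

For instance, mixer = LogisticMixer(2); mixer.predict([0.2, 0.5]); mixer.update(1) runs one prediction and update cycle for the two-model example given earlier.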

List of Context Mixing Compressors


All versions below use logistic mixing unless otherwise indicated.

  • All PAQ versions (Matt Mahoney, Serge Osnach, Alexander Ratushnyak, Przemysław Skibiński, Jan Ondrus, and others)[1]. PAQAR and versions prior to PAQ7 used linear mixing. Later versions used logistic mixing.
  • All LPAQ versions (Matt Mahoney, Alexander Ratushnyak)[2].
  • ZPAQ (Matt Mahoney)[3].
  • WinRK 3.0.3 (Malcolm Taylor) in maximum compression PWCM mode[4]. Version 3.0.2 was based on linear mixing.
  • NanoZip (Sami Runsas) in maximum compression mode (option -cc)[5].
  • xwrt 3.2 (Przemysław Skibiński) in maximum compression mode (options -i10 through -i14)[6] as a back end to a dictionary encoder.
  • cmm1 through cmm4, M1, and M1X2 (Christopher Mattern) use a small number of contexts for high speed. M1 and M1X2 use a genetic algorithm to select two bit-masked contexts in a separate optimization pass.
  • ccm (Christian Martelock).
  • bit (Osman Turan)[7].
  • pimple, pimple2, tc, and px (Ilia Muraviev)[8].
  • enc (Serge Osnach) tries several methods based on PPM and (linear) context mixing and chooses the best one.[9]
  • fpaq2 (Nania Francesco Antonio) uses fixed-weight averaging for high speed.
  • cmix (Byron Knoll) mixes many models, and is currently ranked first on the Large Text Compression Benchmark[4] as well as the Silesia corpus,[5] and has surpassed the winning entry of the Hutter Prize, although it is not eligible because it uses too much memory.

References

  1. ^ Mahoney, M. (2005). "Adaptive Weighing of Context Models for Lossless Data Compression". Florida Tech. Technical Report CS-2005-16.
  2. ^ Mahoney, M. "PAQ8 Data Compression Program".
  3. ^ Mahoney, M. V. (2005). Adaptive weighing of context models for lossless data compression.
  4. ^ Mahoney, M. (2015-09-25). "Large Text Compression Benchmark". Retrieved 2015-11-04.
  5. ^ Mahoney, M. (2015-09-23). "Silesia Open Source Compression Benchmark". Retrieved 2015-11-04.