Data compression can be viewed as a special case of data differencing.[1] Just as data compression involves two processes that often run on two different machines -- the "data compressor" and the "data expander" -- data differencing involves two processes that often run on two different machines -- the "differ" and the "patcher", respectively.
Some algorithms and programs used in data differencing, such as "large-window compression" (aka "data folding"[2]), are very similar to the algorithms used in "small-window compression" (aka traditional data compression), while others, such as "data de-duplication"[3][4] and the rsync algorithm, are very different.
A few algorithms used in traditional data compression of large files are similar to data differencing algorithms, such as some of the techniques for storing many Huffman frequency tables described in Huffman: storing the frequency table.
Some algorithms and programs developed for data differencing include xdelta (http://en.wikipedia.org/wiki/xdelta) and the rsync algorithm mentioned above.
With other forms of compression, more information generally gives better compression. However, experiments by Factor, Sheinwald, and Yassour 2001 seem to indicate that, when using data differencing with LZ77-like compression, trying to use a large collection of shared files that resemble the target file often gives worse compression than using a single reference file from that collection.[8]
When compressing very large files, it usually doesn't make much difference exactly what data the dictionary is pre-loaded with, so many systems start with an empty dictionary.
However, pre-loading the dictionary can produce significantly smaller files in two cases:
Some implementations of data compression algorithms already have support for pre-loading the dictionary.
One possible approach to data differencing involves tweaking some compression algorithm to "pre-load" its internal data structures (its "window" or "dictionary") with data from the files we already have, to allow the compression algorithm to use that data to (hopefully) give better compression on the next file.[10][11]
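As a concrete sketch of this pre-loading, zlib's DEFLATE implementation supports a "preset dictionary" that is loaded into the compressor before any data is processed. The following minimal Python sketch uses the standard-library zlib module; the file names are hypothetical stand-ins for the shared and target data:

```python
import zlib

# Hypothetical file names: "shared.bin" is data both machines already have;
# "target.bin" is the new file we want to transmit.
shared = open("shared.bin", "rb").read()
target = open("target.bin", "rb").read()

# Differ: pre-load DEFLATE's dictionary with the shared file. DEFLATE's
# window is 32 kByte, so only the last 32 kByte of the preset dictionary
# can actually be referenced (see the "history limit" discussion below).
compressor = zlib.compressobj(level=9, zdict=shared)
patch = compressor.compress(target) + compressor.flush()

# Patcher: must pre-load the *same* dictionary to undo the process.
decompressor = zlib.decompressobj(zdict=shared)
restored = decompressor.decompress(patch) + decompressor.flush()
assert restored == target
```

When the target closely resembles the shared file, the resulting "patch" is typically much smaller than what compressing the target with an empty dictionary would produce.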
Another example is the IPSW algorithm (the in-place sliding window of Shapira and Storer), which initializes the source sliding window of an LZ77-like decompressor to some shared file S,[8] allowing common substrings of the target file T to be copied out of S, as well as (like other LZ77-like decompressors) allowing repeated substrings to be copied from previously-decompressed parts of the target file T, or, if all else fails, to be reproduced from literal values in the compressed file.
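The sketch below illustrates the general idea of such a decoder. It is not the actual IPSW implementation (IPSW works in place within a single buffer), and the patch instruction format is invented for the example:

```python
# An LZ77-style "patcher" whose window is pre-loaded with the shared file S.
# The patch format here is invented for illustration:
#   ("copy", offset, length) -- copy bytes out of the window (initially S)
#   ("literal", data)        -- raw bytes taken from the patch itself
def apply_patch(shared: bytes, patch) -> bytes:
    window = bytearray(shared)  # pre-loaded window: matches against S are free
    start = len(shared)
    for op in patch:
        if op[0] == "copy":
            _, offset, length = op
            for i in range(length):  # byte-at-a-time permits overlapping copies
                window.append(window[offset + i])
        else:
            window.extend(op[1])
    return bytes(window[start:])

# Reconstruct b"the quick red fox" from S with two copies and one literal:
S = b"the quick brown fox"
patch = [("copy", 0, 10), ("literal", b"red"), ("copy", 15, 4)]
assert apply_patch(S, patch) == b"the quick red fox"
```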
Sometimes such "pre-loading" implementationsuse unmodified "small-window compression" algorithm,as inUS Patent RE41152 2010.Many early compression algorithms pre-loaded the dictionary with a "static dictionary"(Huffword, PalmDoc, etc.).The LZW algorithm gives better compression than the very similar LZ78 algorithm.One way of thinking about LZW is to imagine that the 256 literal byte valuesare not a separate "special case", but are, in effect, pre-loaded into the dictionary;while the LZ78 algorithm, in effect, starts with an empty dictionary,and so gives worse compression.
With many compression algorithms, such pre-loading is limited by a "window" or "history limit": when the window or limit is, say, 32 kByte, it doesn't matter what or how much data you try to pre-load; only the last 32 kByte will make any difference.
This leads some people working on data differencing to use a window or history limit much larger than the ones other compression researchers use.
LZ77-style decompressors have a fixed-size "window" of recently-decompressed text, and their "copy references" can only refer to text inside that window.
Many compression algorithms -- such as LZ78-style algorithms and adaptive Huffman algorithms -- in theory have no inherent "history limit". However, in practice, most implementations of compression algorithms periodically reset their dictionary and start fresh, so they effectively have a block size: the maximum length of text between dictionary resets.
With many early compression algorithms, an easy way to improve compression is to increase the window size -- i.e., either literally increase the "window" buffer for LZ77-style algorithms, or simply reset the dictionary less often for LZ78-style algorithms. These early compression algorithms were typically developed on machines that had far less RAM than modern machines, and so the window size, the reset frequency, and the internal dictionary size were constrained by that limit. (Those early machines also ran much slower than modern machines, and many algorithms -- such as the LZRW series of algorithms -- were heavily constrained by those speed limits; it is unclear what effect this had on dictionary size.)
For simplicity, we will consider the effect of doubling the window size on a variety of algorithms. The original "small window" algorithm has some fixed window size W, and the widened "larger window" algorithm has some fixed window size 2*W, which can be thought of as a "near window" (containing all the same stuff available to the small-window algorithm) plus a "far window" (containing stuff that is inaccessible to the small-window algorithm, but that the larger-window algorithm can perhaps exploit to get better compression).
Alas, there are diminishing returns to increasing the size of the window. In fact, there is almost always some data-dependent "optimum" window size; increasing the size of the window beyond that optimum leads to worse compression (larger compressed files).
There are four reasons that windows larger than that optimum size are counter-productive:
1. Some LZ77-style algorithms use a fixed-length linear offset. Doubling the size of the window necessarily lengthens each and every copy item by 1 bit. With typical 16-bit copy items, that makes the compressed file longer (worse compression) by a factor of 17/16, unless the compressor can find matches in the far window that are longer than any in the near window (see the arithmetic sketch after this list).
2. Other LZ77-style algorithms use variable-length offsets, where more distant offsets typically take more bits. Doubling the size of the window leaves the length of "extremely close" copy items exactly the same, increases the size of a few copy items near the outer limit of the near window by 1 bit, and requires even larger copy items to refer to stuff in the far window. So, compared to category (1), there is hardly any penalty for increasing the window size; however, the wins are not quite as good either. In many cases, even when there is an excellent long match in the far window, longer than any match in the near window, it doesn't make the compressed file any smaller: the long copy item needed to refer to that distant matching string may be the same length as, or even longer than, the two short copy items that can re-assemble the same string from two places in the near window.
3. As Jeff J. Kato et al. point out,[12] it can even be counter-productive to have data from one kind of file in the window/dictionary when trying to compress data with somewhat different characteristics, or when trying to compress a file whose characteristics change ("evolve") from one part to the next. Often it's better to reset the dictionary and start from scratch.
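A back-of-the-envelope check of the arithmetic behind reasons 1 and 2 (the bit layouts below are made up purely for illustration):

```python
# Reason 1: fixed-length copy items. Assume a hypothetical 16-bit layout:
# a 12-bit offset (window of 2**12 bytes) plus a 4-bit match length.
old_item_bits = 12 + 4   # 16 bits per copy item
new_item_bits = 13 + 4   # doubling the window adds one offset bit: 17 bits
print(new_item_bits / old_item_bits)   # 1.0625 = 17/16, so ~6% more output,
                                       # unless the far window yields longer matches

# Reason 2: variable-length offsets. A single long copy item reaching into
# the far window can cost as much as two short copy items that re-assemble
# the same string from two places in the near window.
far_copy_bits = 24              # hypothetical cost of one far-window copy item
two_near_copies_bits = 2 * 12   # hypothetical cost of two near-window items
print(far_copy_bits >= two_near_copies_bits)  # True: the far match saves nothing
```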
However, sometimes these "large window" compressors show great gains -- in particular, when the file we are currently trying to compress is not merely the same kind of file, but is bit-for-bit identical to some earlier file. Some large-window techniques also work well when a file is *mostly* identical, with relatively few edits here and there.
The default window size or block size for some compression software is:
(i.e., the compressor sends the "clear code" to reset the dictionary after every (max dictionary size - number of hard-wired entries) = 2048-(256+2) compressed symbols). The GIF LZW encoder becomes non-adaptive after the dictionary is full, while it is "waiting" for the next clear code.
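A sketch of that reset policy, extending the earlier LZW sketch (the 12-bit code space and the reserved clear and end-of-information codes follow GIF's conventions, but the reset-when-full policy shown here is just one possible choice):

```python
CLEAR, EOI = 256, 257   # codes GIF reserves for "clear" and "end of information"
MAX_CODES = 1 << 12     # GIF's 12-bit code space: 4096 dictionary entries

def lzw_compress_with_reset(data: bytes) -> list[int]:
    def fresh_dictionary():
        # 256 literals are hard-wired, and codes 256-257 are reserved,
        # so newly learned strings get codes starting at 258.
        return {bytes([i]): i for i in range(256)}, 258
    dictionary, next_code = fresh_dictionary()
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
            continue
        out.append(dictionary[w])
        if next_code < MAX_CODES:
            dictionary[wc] = next_code  # still room: keep adapting
            next_code += 1
        else:
            # Dictionary full: send a clear code and start fresh, rather
            # than running on non-adaptively while "waiting" to reset.
            out.append(CLEAR)
            dictionary, next_code = fresh_dictionary()
        w = bytes([byte])
    if w:
        out.append(dictionary[w])
    out.append(EOI)
    return out
```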
Wikipedia has related information at data differencing.