
In information technology, lossy compression, or irreversible compression, is the class of data compression methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation create coarser images as more details are removed. This is opposed to lossless data compression (reversible data compression), which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than using lossless techniques.
Well-designed lossy compression technology often reduces file sizes significantly before degradation is noticed by the end-user. Even when noticeable by the user, further data reduction may be desirable (e.g., for real-time communication or to reduce transmission times or storage needs). The most widely used lossy compression algorithm is the discrete cosine transform (DCT), first published by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974.
Lossy compression is most commonly used to compress multimedia data (audio, video, and images), especially in applications such as streaming media and internet telephony. By contrast, lossless compression is typically required for text and data files, such as bank records and text articles. It can be advantageous to make a master lossless file from which additional copies can then be produced. This allows one to avoid basing new compressed copies on a lossy source file, which would yield additional artifacts and further unnecessary information loss.
It is possible to compress many types of digital data in a way that reduces the size of a computer file needed to store it, or the bandwidth needed to transmit it, with no loss of the full information contained in the original file. A picture, for example, is converted to a digital file by considering it to be an array of dots and specifying the color and brightness of each dot. If the picture contains an area of the same color, it can be compressed without loss by saying "200 red dots" instead of "red dot, red dot, ...(197 more times)..., red dot."
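To make the run-length idea concrete, here is a minimal Python sketch; the function names and the toy pixel list are illustrative, not taken from any particular format:

```python
def run_length_encode(pixels):
    """Losslessly collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for value in pixels:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([value, 1])   # start a new run
    return [(v, n) for v, n in encoded]

def run_length_decode(pairs):
    """Exactly reconstruct the original sequence from the (value, count) pairs."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

# "red dot" repeated 200 times collapses to a single (value, count) pair.
row = ["red"] * 200 + ["blue"] * 3
encoded = run_length_encode(row)
assert run_length_decode(encoded) == row
print(encoded)   # [('red', 200), ('blue', 3)]
```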
The original data contains a certain amount of information, and there is a lower bound to the size of a file that can still carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, its entropy increases, and it cannot increase indefinitely. For example, a compressed ZIP file is smaller than its original, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
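A quick way to observe this limit is to compress already-compressed data; a minimal sketch using Python's standard zlib module (the sample data is arbitrary):

```python
import zlib

data = b"red dot " * 1000      # highly redundant input
once = zlib.compress(data)     # first pass removes most of the redundancy
twice = zlib.compress(once)    # second pass has little redundancy left to exploit

print(len(data), len(once), len(twice))
# Typical result: the first pass shrinks the data dramatically, while the
# second pass barely changes (or slightly increases) the size.
```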
In many cases, files or data streams contain more information than is needed. For example, a picture may have more detail than the eye can distinguish when reproduced at the largest size intended; likewise, an audio file does not need a lot of fine detail during a very loud passage. Developing lossy compression techniques as closely matched to human perception as possible is a complex task. Sometimes the ideal is a file that provides exactly the same perception as the original, with as much digital information as possible removed; other times, perceptible loss of quality is considered a valid tradeoff.
The terms "irreversible" and "reversible" are preferred over "lossy" and "lossless" respectively for some applications, such as medical image compression, to circumvent the negative implications of "loss". The type and amount of loss can affect the utility of the images. Artifacts or undesirable effects of compression may be clearly discernible yet the result still useful for the intended purpose. Or lossy compressed images may be 'visually lossless', or in the case of medical images, so-calleddiagnostically acceptable irreversible compression (DAIC)[1] may have been applied.
Some forms of lossy compression can be thought of as an application of transform coding, which is a type of data compression used for digital images, digital audio signals, and digital video. The transformation is typically used to enable better (more targeted) quantization. Knowledge of the application is used to choose information to discard, thereby lowering its bandwidth. The remaining information can then be compressed via a variety of methods. When the output is decoded, the result may not be identical to the original input, but is expected to be close enough for the purpose of the application.
The most common form of lossy compression is a transform coding method, the discrete cosine transform (DCT),[2] which was first published by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974.[3] DCT is the most widely used form of lossy compression, for popular image compression formats (such as JPEG),[4] video coding standards (such as MPEG and H.264/AVC) and audio compression formats (such as MP3 and AAC).
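As a rough, illustrative sketch of DCT-based lossy coding (not the full JPEG pipeline), the following Python code transforms one 8×8 block, quantizes the coefficients with a single arbitrary step size, and reconstructs an approximation; it assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in for an 8x8 pixel block

coeffs = dctn(block, norm="ortho")        # forward 2-D DCT (type II)
step = 20.0                               # coarser step -> more loss, fewer bits needed
quantized = np.round(coeffs / step)       # many small coefficients become zero
reconstructed = idctn(quantized * step, norm="ortho")

print("nonzero coefficients:", np.count_nonzero(quantized), "of", quantized.size)
print("max pixel error:", np.max(np.abs(block - reconstructed)))
```

The zeroed-out coefficients are what a real codec subsequently entropy-codes very cheaply; the reconstruction error is the information that has been discarded.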
In the case of audio data, a popular form of transform coding is perceptual coding, which transforms the raw data to a domain that more accurately reflects the information content. For example, rather than expressing a sound file as the amplitude levels over time, one may express it as the frequency spectrum over time, which corresponds more accurately to human audio perception. While data reduction (compression, be it lossy or lossless) is a main goal of transform coding, it also allows other goals: one may represent data more accurately for the original amount of space[5] – for example, in principle, if one starts with an analog or high-resolution digital master, an MP3 file of a given size should provide a better representation than a raw uncompressed audio file in WAV or AIFF format of the same size. This is because uncompressed audio can only reduce file size by lowering bit rate or depth, whereas compressing audio can reduce size while maintaining bit rate and depth. This compression becomes a selective loss of the least significant data, rather than losing data across the board. Further, transform coding may provide a better domain for manipulating or otherwise editing the data – for example, equalization of audio is most naturally expressed in the frequency domain (boost the bass, for instance) rather than in the raw time domain.
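The following sketch illustrates the frequency-domain idea on a synthetic signal: one short frame is transformed with a real FFT, and its higher-frequency bins are quantized more coarsely than the low-frequency bins. The frame length and step sizes are arbitrary choices for illustration, not parameters of any actual codec:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)  # synthetic tone mix

frame = signal[:256]
spectrum = np.fft.rfft(frame)                   # time-domain frame -> frequency bins

# Coarser quantization steps for the higher-frequency bins: a crude stand-in
# for discarding detail the listener is less likely to notice.
steps = np.where(np.arange(spectrum.size) < 32, 0.05, 0.5)
quantized = np.round(spectrum / steps) * steps

approx = np.fft.irfft(quantized, n=frame.size)  # back to the time domain
print("max sample error:", np.max(np.abs(frame - approx)))
```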
From this point of view, perceptual encoding is not essentially about discarding data, but rather about a better representation of data. Another use is for backward compatibility and graceful degradation: in color television, encoding color via a luminance-chrominance transform domain (such as YUV) means that black-and-white sets display the luminance, while ignoring the color information. Another example is chroma subsampling: the use of color spaces such as YIQ, used in NTSC, allows one to reduce the resolution on the components to accord with human perception – humans have highest resolution for black-and-white (luma), lower resolution for mid-spectrum colors like yellow and green, and lowest for blue and red – thus NTSC displays approximately 350 pixels of luma per scanline, 150 pixels of yellow vs. green, and 50 pixels of blue vs. red, which are proportional to human sensitivity to each component.
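A minimal sketch of chroma subsampling, using the standard ITU-R BT.601 luma/chroma weights and simple 2×2 averaging of the chroma planes (a 4:2:0-style layout); the random image and the helper function are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(4, 4, 3)).astype(float)   # tiny stand-in image

r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma: kept at full resolution
cb = (b - y) * 0.564                     # blue-difference chroma (BT.601 scaling)
cr = (r - y) * 0.713                     # red-difference chroma (BT.601 scaling)

def subsample(plane):
    """Average each 2x2 block, halving the resolution in both directions."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb_sub, cr_sub = subsample(cb), subsample(cr)
print(y.shape, cb_sub.shape, cr_sub.shape)   # (4, 4) (2, 2) (2, 2)
```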
Lossy compression formats suffer from generation loss: repeatedly compressing and decompressing the file will cause it to progressively lose quality. This is in contrast with lossless data compression, where data will not be lost via the use of such a procedure.
Information-theoretical foundations for lossy data compression are provided by rate-distortion theory. Much like the use of probability in optimal coding theory, rate-distortion theory heavily draws on Bayesian estimation and decision theory in order to model perceptual distortion and even aesthetic judgment.
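The central object of rate-distortion theory is the rate-distortion function, which gives the minimum rate (bits per symbol) achievable while keeping the expected distortion at or below a level D; in standard notation, with I the mutual information and d a distortion measure:

$$ R(D) = \min_{p(\hat{x}\mid x)\,:\ \mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X}) $$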
There are two basic lossy compression schemes: in lossy transform codecs, samples of picture or sound are taken, chopped into small segments, transformed into a new basis space, and quantized, with the resulting quantized values then entropy coded; in lossy predictive codecs, previous and/or subsequent decoded data is used to predict the current sound sample or image frame, and the error between the predicted and the actual data, together with any extra information needed to reproduce the prediction, is then quantized and coded.
In some systems the two techniques are combined, with transform codecs being used to compress the error signals generated by the predictive stage.
The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any lossless method, while still meeting the requirements of the application. Lossy methods are most often used for compressing sound, images or videos. This is because these types of data are intended for human interpretation where the mind can easily "fill in the blanks" or see past very minor errors or inconsistencies – ideally lossy compression is transparent (imperceptible), which can be verified via an ABX test. Data files using lossy compression are smaller in size and thus cost less to store and to transmit over the Internet, a crucial consideration for streaming video services such as Netflix and streaming audio services such as Spotify.
When a user acquires a lossily compressed file (for example, to reduce download time), the retrieved file can be quite different from the original at the bit level while being indistinguishable to the human ear or eye for most practical purposes. Many compression methods focus on the idiosyncrasies of human physiology, taking into account, for instance, that the human eye can see only certain wavelengths of light. The psychoacoustic model describes how sound can be highly compressed without degrading perceived quality. Flaws caused by lossy compression that are noticeable to the human eye or ear are known as compression artifacts.
The compression ratio (that is, the size of the compressed file compared to that of the uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and still-image equivalents.
An important caveat about lossy compression (formally transcoding) is that editing lossily compressed files causes digital generation loss from the re-encoding. This can be avoided by only producing lossy files from (lossless) originals and only editing (copies of) original files, such as images in raw image format instead of JPEG. If data which has been compressed lossily is decoded and compressed losslessly, the size of the result can be comparable with the size of the data before lossy compression, but the data already lost cannot be recovered. When deciding to use lossy conversion without keeping the original, format conversion may be needed in the future to achieve compatibility with software or devices (format shifting), or to avoid paying patent royalties for decoding or distribution of compressed files.
By modifying the compressed data directly without decoding and re-encoding, some editing of lossily compressed files without degradation of quality is possible. Editing which reduces the file size as if it had been compressed to a greater degree, but without more loss than this, is sometimes also possible.
The primary programs for lossless editing of JPEGs are jpegtran, the derived exiftran (which also preserves Exif information), and Jpegcrop (which provides a Windows interface).
These allow the image to be cropped, rotated, flipped, and flopped, or even converted to grayscale (by dropping the chrominance channel). While unwanted information is destroyed, the quality of the remaining portion is unchanged.
Some other transforms are possible to some extent, such as joining images with the same encoding (composing side by side, as on a grid) or pasting images such as logos onto existing images (both via Jpegjoin), or scaling.[6]
Some changes can also be made to the compression itself without re-encoding, for example optimizing the entropy coding tables (reducing the file size without changing the decoded image) or converting between sequential and progressive encoding.
The freeware Windows-only IrfanView has some lossless JPEG operations in its JPG_TRANSFORM plugin.
Metadata, such as ID3 tags, Vorbis comments, or Exif information, can usually be modified or removed without modifying the underlying data.
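As a hedged example, ID3 tags can be edited in place with the third-party mutagen library (assuming it is installed; "song.mp3" is a hypothetical file); only the tag frames are rewritten, while the compressed audio data is left untouched:

```python
from mutagen.easyid3 import EasyID3

tags = EasyID3("song.mp3")         # hypothetical file path
tags["title"] = "New Title"        # modify ID3 text frames
tags["artist"] = "Unknown Artist"
tags.save()                        # rewrites the tag block, not the audio frames
```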
One may wish to downsample or otherwise decrease the resolution of the represented source signal and the quantity of data used for its compressed representation without re-encoding, as in bitrate peeling, but this functionality is not supported in all designs, as not all codecs encode data in a form that allows less important detail to simply be dropped. Some well-known designs that have this capability include JPEG 2000 for still images and H.264/MPEG-4 AVC based Scalable Video Coding for video. Such schemes have also been standardized for older designs, such as JPEG images with progressive encoding, and MPEG-2 and MPEG-4 Part 2 video, although those prior schemes had limited success in terms of adoption into real-world common usage. Without this capacity, which is often the case in practice, producing a representation with lower resolution or lower fidelity than a given one requires either starting with the original source signal and encoding it again, or decompressing an existing compressed representation and re-encoding it (transcoding), though the latter tends to cause digital generation loss.
Another approach is to encode the original signal at several different bitrates, and then either choose which to use (as when streaming over the internet – as in RealNetworks' "SureStream" – or offering varying downloads, as at Apple's iTunes Store), or broadcast several, where the best that is successfully received is used, as in various implementations of hierarchical modulation. Similar techniques are used in mipmaps, pyramid representations, and more sophisticated scale space methods. Some audio formats feature a combination of a lossy format and a lossless correction which, when combined, reproduce the original signal; the correction can be stripped, leaving a smaller, lossily compressed file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, OptimFROG DualStream, and DTS-HD Master Audio in lossless (XLL) mode.
Researchers have performed lossy compression on text by either using a thesaurus to substitute short words for long ones, or generative text techniques,[14] although these sometimes fall into the related category of lossy data conversion.
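As a purely illustrative sketch of the thesaurus approach (the word table below is hand-made and hypothetical), long words are replaced by shorter rough synonyms; the substitution is lossy because the exact original wording cannot be recovered:

```python
# Tiny hand-made "thesaurus": long word -> shorter rough synonym (illustrative only).
THESAURUS = {
    "approximately": "about",
    "utilize": "use",
    "demonstrate": "show",
}

def lossy_shorten(text):
    """Replace known long words with shorter synonyms; everything else passes through."""
    return " ".join(THESAURUS.get(word.lower(), word) for word in text.split())

print(lossy_shorten("We utilize approximately ten samples to demonstrate the effect"))
# -> "We use about ten samples to show the effect"
```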
A general kind of lossy compression is to lower the resolution of an image, as in image scaling, particularly decimation. One may also remove lower-information parts of an image, such as by seam carving. Many media transforms, such as Gaussian blur, are, like lossy compression, irreversible: the original signal cannot be reconstructed from the transformed signal. However, in general these will have the same size as the original, and are not a form of compression. Lowering resolution has practical uses, as the NASA New Horizons craft transmitted thumbnails of its encounter with Pluto–Charon before it sent the higher resolution images. Another solution for slow connections is the use of image interlacing, which progressively defines the image. Thus a partial transmission is enough to preview the final image in a lower resolution version, without creating both a scaled-down and a full version.[citation needed]
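A minimal sketch of lowering resolution by decimation: each 4×4 block of a stand-in image is averaged into one pixel, discarding detail that cannot be recovered; the factor of 4 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in grayscale image

factor = 4
h, w = image.shape
# Average each factor x factor block into a single pixel (box filtering + decimation).
thumb = image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

print(image.shape, "->", thumb.shape)   # (64, 64) -> (16, 16)
```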