A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user.
The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as JPEG, MP3, and MPEG video file formats.[1][2][3] These compression artifacts appear when heavy compression is applied,[1] and occur often in common digital media, such as DVDs, common computer file formats such as JPEG, MP3 and MPEG files, and some alternatives to the compact disc, such as Sony's MiniDisc format. Uncompressed media (such as on Laserdiscs, Audio CDs, and WAV files) or losslessly compressed media (such as FLAC or PNG) do not suffer from compression artifacts.
The minimization of perceivable artifacts is a key goal in implementing a lossy compression algorithm. However, artifacts are occasionally intentionally produced for artistic purposes, a style known as glitch art[4] or datamoshing.[5]
Technically speaking, a compression artifact is a particular class of data error that is usually the consequence of quantization in lossy data compression. Where transform coding is used, it typically assumes the form of one of the basis functions of the coder's transform space.

When performing block-based discrete cosine transform (DCT)[1] coding for quantization, as in JPEG-compressed images, several types of artifacts can appear.
Other lossy algorithms, which use pattern matching to deduplicate similar symbols, are prone to introducing hard-to-detect errors in printed text. For example, the numbers "6" and "8" may get replaced. This has been observed to happen with JBIG2 in certain photocopier machines.[6][7]

At low bit rates, any lossy block-based coding scheme introduces visible artifacts in pixel blocks and at block boundaries. These boundaries can be transform block boundaries, prediction block boundaries, or both, and may coincide with macroblock boundaries. The term macroblocking is commonly used regardless of the artifact's cause. Other names include blocking,[8] tiling,[9] mosaicing, pixelating, quilting, and checkerboarding.
Block artifacts are a result of the very principle of block transform coding. The transform (for example the discrete cosine transform) is applied to a block of pixels, and to achieve lossy compression, the transform coefficients of each block are quantized. The lower the bit rate, the more coarsely the coefficients are represented and the more coefficients are quantized to zero. Statistically, images have more low-frequency than high-frequency content, so it is the low-frequency content that remains after quantization, which results in blurry, low-resolution blocks. In the most extreme case only the DC coefficient, that is, the coefficient which represents the average color of a block, is retained, and the transform block is only a single color after reconstruction.
Because this quantization process is applied individually in each block, neighboring blocks quantize coefficients differently. This leads to discontinuities at the block boundaries. These are most visible in flat areas, where there is little detail to mask the effect.
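The effect can be reproduced in a few lines of code. The sketch below, assuming NumPy and SciPy, applies a 2-D DCT to each 8×8 block of a smooth gradient, quantizes the coefficients with a single uniform step (an illustrative simplification of JPEG's per-frequency quantization tables; the function quantize_blocks is likewise hypothetical), and reconstructs the image, which then shows visible jumps at the block boundaries.

```python
# Minimal sketch of block-transform quantization producing blocking artifacts.
# The uniform quantization step "q" stands in for JPEG's quantization tables.
import numpy as np
from scipy.fft import dctn, idctn

def quantize_blocks(image, q=100, block=8):
    """DCT each 8x8 block, quantize the coefficients coarsely, reconstruct."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block].astype(float)
            coeffs = dctn(tile, norm='ortho')      # forward 2-D DCT
            coeffs = np.round(coeffs / q) * q      # coarse uniform quantization
            out[y:y + block, x:x + block] = idctn(coeffs, norm='ortho')
    return out

# A smooth horizontal gradient: after heavy quantization most AC coefficients
# become zero, each block collapses toward its DC (average) value, and the
# seams between blocks become visible.
gradient = np.tile(np.linspace(0, 255, 64), (64, 1))
blocky = quantize_blocks(gradient, q=200)
print(np.abs(np.diff(blocky, axis=1)).max())  # largest jump occurs at a block boundary
```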
Various approaches have been proposed to reduce image compression effects, but to use standardized compression/decompression techniques and retain the benefits of compression (for instance, lower transmission and storage costs), many of these methods focus on "post-processing", that is, processing images when received or viewed. No post-processing technique has been shown to improve image quality in all cases; consequently, none has garnered widespread acceptance, though some have been implemented and are in use in proprietary systems. Many photo editing programs, for instance, have proprietary JPEG artifact reduction algorithms built in. Consumer equipment often calls this post-processing MPEG noise reduction.[10]
Boundary artifacts in JPEG can be turned into more pleasing "grain", not unlike that of high-ISO photographic film. Instead of just multiplying the quantized coefficients by the quantization step Q pertaining to the 2D frequency, intelligent noise in the form of a random number in the interval [−Q/2, Q/2] can be added to the dequantized coefficient. This method can be added as an integral part of JPEG decompressors working on the trillions of existing and future JPEG images; as such, it is not a "post-processing" technique.[11]
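A minimal sketch of this dithering idea, using NumPy, is shown below; the function name dequantize_with_dither, the quantization table, and the quantized coefficient block are illustrative stand-ins rather than data read from a real JPEG bitstream.

```python
# Sketch: dequantize DCT coefficients with added uniform noise in [-Q/2, Q/2],
# turning structured blocking error into film-grain-like noise.
import numpy as np

rng = np.random.default_rng(0)

def dequantize_with_dither(quantized, q_table):
    """Reconstruct DCT coefficients with dithering noise instead of plain quantized * Q."""
    noise = rng.uniform(-0.5, 0.5, size=quantized.shape) * q_table
    return quantized * q_table + noise

quantized = rng.integers(-3, 4, size=(8, 8))   # stand-in for a decoded coefficient block
q_table = np.full((8, 8), 16)                  # stand-in for a JPEG quantization table
print(dequantize_with_dither(quantized, q_table))
```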
Ringing can be reduced at encoding time by overshooting the DCT values so that the rings are clamped away.[12]
Posterization generally happens only at low quality, when the DC values are given too little importance. Tuning the quantization table helps.[13]

When motion prediction is used, as in MPEG-1, MPEG-2 or MPEG-4, compression artifacts tend to remain on several generations of decompressed frames, and move with the optic flow of the image, leading to a peculiar effect, part way between a painting effect and "grime" that moves with objects in the scene.
Data errors in the compressed bit-stream, possibly due to transmission errors, can lead to errors similar to large quantization errors, or can disrupt the parsing of the data stream entirely for a short time, leading to "break-up" of the picture. Where gross errors have occurred in the bit-stream, decoders continue to apply updates to the damaged picture for a short interval, creating a "ghost image" effect, until receiving the next independently compressed frame. In MPEG picture coding, these are known as "I-frames", with the 'I' standing for "intra". Until the next I-frame arrives, the decoder can perform error concealment.
Block boundary discontinuities can occur at edges of motion compensation prediction blocks. In motion compensated video compression, the current picture is predicted by shifting blocks (macroblocks, partitions, or prediction units) of pixels from previously decoded frames. If two neighboring blocks use different motion vectors, there will be a discontinuity at the edge between the blocks.
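The following toy example (NumPy only, not a real codec; the names reference and predict_block are illustrative) shows the effect: two neighboring 8-pixel-wide blocks are copied from a smooth reference frame with different motion vectors, and the reconstructed row jumps abruptly at the block edge.

```python
import numpy as np

reference = np.tile(np.arange(32, dtype=float), (8, 1))   # smooth horizontal ramp

def predict_block(ref, x0, mv_x, width=8):
    """Copy a block from the reference frame, shifted horizontally by mv_x."""
    return ref[:, x0 + mv_x : x0 + mv_x + width]

left  = predict_block(reference, x0=8,  mv_x=0)   # this block is not shifted
right = predict_block(reference, x0=16, mv_x=4)   # its neighbor is shifted by 4 pixels
predicted = np.hstack([left, right])

# Inside each block the ramp increases by 1 per pixel, but at the block edge
# it jumps by 5, producing a visible vertical seam.
print(predicted[0, 6:10])   # [14. 15. 20. 21.]
```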
Video compression artifacts include the cumulative results of compressing the constituent still images; for instance, ringing or other edge busyness in successive still images appears in sequence as a shimmering blur of dots around edges, called mosquito noise, as it resembles mosquitoes swarming around the object.[14][15] Mosquito noise is caused by the block-based discrete cosine transform (DCT) compression algorithm used in most video coding standards, such as the MPEG formats.[3]
The artifacts at block boundaries can be reduced by applying a deblocking filter. As in still image coding, it is possible to apply a deblocking filter to the decoder output as post-processing.
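As an illustration, the sketch below implements a very simple post-processing deblocking filter in NumPy: it smooths pixel pairs across vertical 8-pixel block boundaries, but only when the step between them is small enough to plausibly be a coding artifact rather than a real edge. The function deblock_vertical and its threshold are illustrative; standardized in-loop filters such as the one in H.264/AVC are adaptive and considerably more sophisticated.

```python
import numpy as np

def deblock_vertical(img, block=8, threshold=20):
    """Smooth small steps across vertical block boundaries; leave real edges alone."""
    out = img.astype(float).copy()
    for x in range(block, out.shape[1], block):
        a, b = out[:, x - 1], out[:, x]       # pixel columns on either side of a boundary
        step = b - a
        weak = np.abs(step) < threshold       # small steps are likely blocking artifacts
        out[weak, x - 1] += step[weak] / 4    # pull both sides toward each other
        out[weak, x]     -= step[weak] / 4
    return out

# A synthetic image of constant 8x8 blocks with 16-level jumps between them.
blocky = np.kron(np.arange(16).reshape(4, 4) * 16, np.ones((8, 8)))
smoothed = deblock_vertical(blocky)
print(blocky[0, 7:9], smoothed[0, 7:9])   # boundary step reduced from 16 to 8
```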
In motion-predicted video coding with a closed prediction loop, the encoder uses the decoder output as the prediction reference from which future frames are predicted. To that end, the encoder conceptually integrates a decoder. If this "decoder" performs deblocking, the deblocked picture is then used as a reference picture for motion compensation, which improves coding efficiency by preventing the propagation of block artifacts across frames. This is referred to as an in-loop deblocking filter. Standards which specify an in-loop deblocking filter include VC-1, H.263 Annex J, H.264/AVC, and H.265/HEVC.
Lossy audio compression typically works with a psychoacoustic model, a model of human hearing perception. Lossy audio formats typically involve the use of a time/frequency domain transform, such as a modified discrete cosine transform. With the psychoacoustic model, masking effects such as frequency masking and temporal masking are exploited, so that sounds that should be imperceptible are not recorded. For example, in general, human beings are unable to perceive a quiet tone played simultaneously with a similar but louder tone. A lossy compression technique might identify this quiet tone and attempt to remove it. Also, quantization noise can be "hidden" where it would be masked by more prominent sounds. With low compression, a conservative psychoacoustic model is used with small block sizes.
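The sketch below gives a deliberately crude NumPy illustration of the masking idea: spectral components whose magnitude falls well below that of the strongest nearby component are simply discarded. The function crude_masking and its parameters are hypothetical; real psychoacoustic models, as used in MP3 or AAC encoders, work on MDCT filterbank outputs with Bark-scale spreading functions and tonality estimates, none of which is modeled here.

```python
import numpy as np

def crude_masking(frame, mask_ratio=0.05, neighborhood=8):
    """Zero out spectral components far quieter than their loudest nearby component."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    mags = np.abs(spectrum)
    kept = spectrum.copy()
    for k in range(len(spectrum)):
        lo, hi = max(0, k - neighborhood), k + neighborhood + 1
        if mags[k] < mask_ratio * mags[lo:hi].max():   # "masked" by a louder neighbor
            kept[k] = 0.0                              # discard instead of encoding it
    return np.fft.irfft(kept, n=len(frame))

# A loud 1 kHz tone plus a much quieter tone a few hundred hertz higher:
# the quiet tone falls below the crude masking threshold and is dropped.
sr = 48000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 1300 * t)
reconstructed = crude_masking(frame)
print(np.abs(np.fft.rfft(reconstructed))[27:29])   # the quiet tone's bins are now essentially zero
```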
When the psychoacoustic model is inaccurate, when the transform block size is constrained, or when aggressive compression is used, compression artifacts may result. Compression artifacts in compressed audio typically show up as ringing, pre-echo, "birdie artifacts", drop-outs, rattling, warbling, metallic ringing, an underwater feeling, hissing, or "graininess".
An example of compression artifacts in audio is applause in a relatively highly compressed audio file (e.g. a 96 kbit/s MP3). In general, musical tones have repeating waveforms and more predictable variations in volume, whereas applause is essentially random and therefore hard to compress. A highly compressed track of applause may exhibit "metallic ringing" and other compression artifacts.
Compression artifacts may intentionally be used as a visual style, sometimes known as "glitch art". Rosa Menkman's glitch art makes use of compression artifacts,[16] particularly the discrete cosine transform blocks (DCT blocks) found in most digital media data compression formats such as JPEG digital images and MP3 digital audio.[2] In still images, an example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[17][18]
In video art, one technique used is datamoshing, where two videos are interleaved so intermediate frames are interpolated from two separate sources. Another technique involves simply transcoding from one lossy video format to another, which exploits the difference in how the separate video codecs process motion and color information.[19] The technique was pioneered by artists Bertrand Planes in collaboration with Christian Jacquemin in 2006 with DivXPrime,[20] Sven König, Takeshi Murata, Jacques Perconte, and Paul B. Davis in collaboration with Paperrad, and was more recently used by David OReilly and within music videos for Chairlift and by Nabil Elderkin in the "Welcome to Heartbreak" music video for Kanye West.[21][22]
There is also a genre of internet memes in which often-nonsensical images are purposely compressed heavily, sometimes multiple times, for comedic effect. Images created using this technique are often referred to as "deep fried."[23]