
Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause further reduction in quality on making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data through each generation.
In analog systems (including systems that use digital recording but make the copy over an analog connection), generation loss is mostly due to noise and bandwidth issues in cables, amplifiers, mixers, recording equipment and anything else between the source and the destination. Poorly adjusted distribution amplifiers and mismatched impedances can make these problems even worse. Repeated conversion between analog and digital can also cause loss.
Generation loss was a major consideration in complex analog audio and video editing, where multi-layered edits were often created by making intermediate mixes which were then "bounced down" back onto tape. Careful planning was required to minimize generation loss and the resulting noise and poor frequency response.
One way of minimizing the number of generations needed was to use an audio mixing or video editing suite capable of mixing a large number of channels at once; in the extreme case, for example with a 48-track recording studio, an entire complex mixdown could be done in a single generation, although this was prohibitively expensive for all but the best-funded projects.
Professional analog noise reduction systems such as Dolby A helped reduce the amount of audible generation loss, but these systems were eventually superseded by digital systems, which vastly reduced generation loss.[1]
According to ATIS, "Generation loss is limited to analog recording because digital recording and reproduction may be performed in a manner that is essentially free from generation loss."[1]
When used correctly, digital technology can eliminate generation loss. This implies the exclusive use of lossless compression codecs or uncompressed data from recording or creation until the final lossy encode for distribution through internet streaming or optical discs. Copying a digital file gives an exact copy if the equipment is operating properly, which eliminates generation loss caused by copying; re-encoding digital files with lossy compression codecs, however, can cause generation loss. This trait of digital technology has given rise to awareness of the risk of unauthorized copying. Before digital technology was widespread, a record label, for example, could be confident that unauthorized copies of its music tracks were never as good as the originals.
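That a digital copy is bit-exact can be verified by comparing checksums of the source and the copy. The following is a minimal Python sketch of such a check; the file names master.wav and copy.wav are hypothetical placeholders:

```python
import hashlib
import shutil

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names, for illustration only.
shutil.copyfile("master.wav", "copy.wav")

# A bit-exact copy hashes identically: no generation loss from copying.
assert sha256_of("master.wav") == sha256_of("copy.wav")
```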
Generation loss can still occur when using lossy video or audio compression codecs, as these introduce artifacts into the source material with each encode or re-encode. Lossy compression codecs such as Apple ProRes, Advanced Video Coding and MP3 are very widely used because they allow dramatic reductions in file size while remaining indistinguishable from the uncompressed or losslessly compressed original for viewing purposes. The only way to avoid generation loss is to use uncompressed or losslessly compressed files, which may be expensive from a storage standpoint, as they require far more space in flash memory or on hard drives per second of runtime. Uncompressed video requires a high data rate; for example, 1080p video at 60 frames per second requires approximately 370 megabytes per second.[2] Lossy codecs make Blu-rays and streaming video over the internet feasible, since neither can deliver the amounts of data needed for uncompressed or losslessly compressed video at acceptable frame rates and resolutions. Images can suffer from generation loss in the same way video and audio can.
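The cited data rate can be reproduced from first principles, assuming 8 bits per color sample and three samples per pixel (no chroma subsampling):

```python
# Uncompressed 1080p60 video, 8 bits per sample, 3 samples (R, G, B) per pixel.
width, height, fps = 1920, 1080, 60
bytes_per_pixel = 3

rate = width * height * bytes_per_pixel * fps  # bytes per second
print(rate / 1e6)  # about 373 megabytes per second
```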
Processing a lossily compressed file rather than an original usually results in more loss of quality than generating the same output from an uncompressed original. For example, a low-resolution digital image for a web page is better if generated from an uncompressed raw image than from an already-compressed JPEG file of higher quality.
In digital systems, several techniques used because of other advantages, such as lossy compression codecs and algorithms, may introduce generation loss and must be used with caution. However, copying a digital file itself incurs no generation loss; the copied file is identical to the original, provided a perfect copying channel is used.
Some digital transforms are reversible, while some are not. Lossless compression is, by definition, fully reversible, while lossy compression throws away some data which cannot be restored. Similarly, many DSP processes are not reversible.
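A lossless round trip can be demonstrated with any general-purpose lossless codec. The following sketch uses Python's standard zlib (DEFLATE) module; the input is recovered bit for bit:

```python
import zlib

data = b"the original signal" * 1000

# Lossless compression is fully reversible: the round trip is bit-exact.
compressed = zlib.compress(data)
restored = zlib.decompress(compressed)
assert restored == data  # no information was discarded
```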
Thus, to avoid generation loss when using lossy compression codecs, it is important to plan an audio or video signal chain carefully from beginning to end, rearranging it to minimize the number of conversions. Often, arbitrary choices of pixel counts and sampling rates for source, destination, and intermediates can seriously degrade digital signals, in spite of the potential of digital technology for eliminating generation loss completely.
Similarly, lossy compression should ideally be applied only once, at the end of the workflow involving the file, after all required changes have been made.
Converting between lossy formats – be it decoding and re-encoding to the same format, between different formats, or between different bitrates or parameters of the same format – causes generation loss.
Repeated applications of lossy compression and decompression can cause generation loss, particularly if the parameters used are not consistent across generations. Ideally an algorithm will be both idempotent, meaning that if the signal is decoded and then re-encoded with identical settings, there is no loss, and scalable, meaning that if it is re-encoded with lower quality settings, the result will be the same as if it had been encoded from the original signal – see Scalable Video Coding. More generally, transcoding between different parameters of a particular encoding will ideally yield the greatest common shared quality – for instance, converting from an image with 4 bits of red and 8 bits of green to one with 8 bits of red and 4 bits of green would ideally yield simply an image with 4 bits of red color depth and 4 bits of green color depth, without further degradation.
Some lossy compression algorithms are much worse than others in this regard, being neither idempotent nor scalable, and introducing further degradation if parameters are changed.
For example, with JPEG, changing the quality setting will cause different quantization constants to be used, causing additional loss. Further, as JPEG is divided into 16×16 blocks (or 16×8, or 8×8, depending on chroma subsampling), cropping that does not fall on an 8×8 boundary shifts the encoding blocks, causing substantial degradation – similar problems happen on rotation. This can be avoided by using jpegtran or similar tools for cropping. Similar degradation occurs if video keyframes do not line up from generation to generation.
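The effect of repeated lossy encoding can be sketched with the Pillow imaging library (a third-party Python package); the source file name here is hypothetical. Each pass through the JPEG codec quantizes the DCT coefficients again, and changing the quality setting between passes compounds the loss:

```python
import io
from PIL import Image  # Pillow imaging library

def reencode(img, quality):
    """Encode an image to JPEG in memory and decode it again."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

img = Image.open("photo.png").convert("RGB")  # hypothetical source image

# Ten generations of re-encoding; with identical, block-aligned settings
# the loss tends to settle, but varying the quality keeps degrading it.
for generation in range(10):
    img = reencode(img, quality=75)
```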
Digital resampling, such as image scaling, and other DSP techniques can also introduce artifacts or degrade the signal-to-noise ratio (S/N ratio) each time they are used, even if the underlying storage is lossless. When making a copy of a copy, the quality of the image deteriorates with every generation.
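As an illustration, the following sketch (again using Pillow, with a hypothetical source file) repeatedly downscales and upscales an image; detail removed in one generation cannot be recovered in the next:

```python
from PIL import Image  # Pillow imaging library

img = Image.open("frame.png").convert("RGB")  # hypothetical source image
w, h = img.size

# Each down/up resampling cycle low-pass filters the image a little more,
# so the result grows progressively blurrier with each generation.
for generation in range(5):
    img = img.resize((w // 2, h // 2), Image.BILINEAR)
    img = img.resize((w, h), Image.BILINEAR)
```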
Lossy image formats, such as JPEG, introduce degradation when files are repeatedly edited and re-saved. While directly copying a JPEG file preserves its quality, opening and saving it in an image editor creates a new, re-encoded version, introducing subtle changes. Social media platforms like Facebook and X, formerly known as Twitter, automatically re-encode uploaded images at low-quality settings to optimize storage and bandwidth, further compounding compression artifacts. Over time, repeated re-encoding or processing can significantly degrade the image's quality.
Resampling causes aliasing, both blurring low-frequency components and adding high-frequency noise, causing jaggies, while rounding off computations to fit in finite precision introduces quantization, causing banding; if fixed by dither, this instead becomes noise. In both cases, these at best degrade the signal's S/N ratio and may cause artifacts. Quantization can be reduced by using high precision while editing (notably floating-point numbers), only reducing back to fixed precision at the end.
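A minimal NumPy sketch of this trade-off, under the simplifying assumptions of an 8-bit uniform quantizer and rectangular dither of one least-significant bit, is:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.3 * np.sin(np.linspace(0, 8 * np.pi, 48000))  # float test tone

def quantize(x, bits=8):
    """Round a [-1, 1) float signal to a fixed number of levels."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

# Plain quantization produces signal-correlated error (heard as distortion,
# seen as banding); dithering before rounding trades it for broadband noise.
plain = quantize(signal)
dithered = quantize(signal + rng.uniform(-0.5, 0.5, signal.size) / 2 ** 7)
```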
Often, particular implementations fall short of theoretical ideals.
Successive generations of photocopies result in image distortion and degradation.[3] Repeatedly downloading and then reposting or re-uploading content to platforms such as Instagram or YouTube can result in noticeable quality degradation.[4][5][6] Similar effects have been documented in the copying of VHS tapes.[7] This is because both services apply lossy codecs to all uploaded data, even if the upload duplicates data already hosted on the service, while VHS is an analog medium, where effects such as noise from interference can have a much more noticeable impact on recordings.