In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity.[1] Concatenated codes became widely used in space communications in the 1970s.
The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology.
Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates less than a certain threshold, called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with the block length, so such an optimum decoder rapidly becomes infeasible.
In his doctoral thesis, Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.


Let Cin be an [n, k, d] code, that is, a block code of length n, dimension k, minimum Hamming distance d, and rate r = k/n, over an alphabet A:

Cin : A^k → A^n
Let Cout be an [N, K, D] code over an alphabet B with |B| = |A|^k symbols:

Cout : B^K → B^N
The inner code Cin takes one of |A|^k = |B| possible inputs, encodes into an n-tuple over A, transmits, and decodes into one of |B| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet B. We use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout∘Cin, is thus a code of length Nn over the alphabet A.[1]
It maps each input message m = (m1, m2, ..., mK) to a codeword (Cin(m′1), Cin(m′2), ..., Cin(m′N)), where (m′1, m′2, ..., m′N) = Cout(m1, m2, ..., mK).
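This encoding map can be sketched on a hypothetical toy pair (not taken from the text above): a [3,1,3] binary repetition code as Cin and a [3,2,2] binary parity-check code as Cout. Since k = 1, the outer alphabet B = A^k = {0, 1} matches the inner input alphabet, and the concatenated code has length nN = 9 and dimension kK = 2:

```python
# Toy concatenated encoder (hypothetical example):
# inner code C_in  = [3,1,3] binary repetition code   (n=3, k=1, d=3)
# outer code C_out = [3,2,2] binary parity-check code (N=3, K=2, D=2)

def outer_encode(m):
    # C_out: append a parity bit to the 2-bit message.
    m1, m2 = m
    return [m1, m2, m1 ^ m2]

def inner_encode(symbol):
    # C_in: repeat each outer symbol three times.
    return [symbol] * 3

def concat_encode(m):
    # Encode with C_out, then encode each outer symbol with C_in.
    return [bit for s in outer_encode(m) for bit in inner_encode(s)]

# Codeword length is n*N = 9; dimension is k*K = 2.
print(concat_encode([1, 0]))   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
```

Here the outer codeword of [1, 0] is [1, 0, 1], and each of its three symbols is expanded into a 3-bit inner codeword.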
The key insight in this approach is that if Cin is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and Cout is a code with length N = 2^(nr) that can be decoded in polynomial time in N, then the concatenated code can be decoded in polynomial time of its combined length n·2^(nr) = O(N log N) and shows an exponentially decreasing error probability, even if Cin has exponential decoding complexity.[1] This is discussed in more detail in the section Decoding concatenated codes.
In a generalization of the above concatenation, there are N possible inner codes Cin,i and the i-th symbol in a codeword of Cout is transmitted across the inner channel using the i-th inner code. The Justesen codes are examples of generalized concatenated codes, where the outer code is a Reed–Solomon code.
1. The distance of the concatenated code Cout∘Cin is at least dD, that is, it is an [nN, kK, D′] code with D′ ≥ dD.
Proof: Consider two different messages m1 ≠ m2 ∈ B^K. Let Δ denote the Hamming distance between two codewords. Then

Δ(Cout(m1), Cout(m2)) ≥ D.
Thus, there are at least D positions in which the sequence of N symbols of the codewords Cout(m1) and Cout(m2) differ. For these positions, denoted i, we have

Δ(Cin(Cout(m1)i), Cin(Cout(m2)i)) ≥ d.
Consequently, there are at least d·D positions in the sequence of n·N symbols taken from the alphabet A in which the two codewords differ, and hence

Δ((Cout∘Cin)(m1), (Cout∘Cin)(m2)) ≥ dD.
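The bound can be checked by brute force on a small example. The sketch below uses a hypothetical toy pair, a [3,1,3] binary repetition inner code and a [3,2,2] parity-check outer code (so d = 3 and D = 2), and computes the minimum distance of the concatenation over all pairs of codewords:

```python
from itertools import product

def outer_encode(m):
    m1, m2 = m
    return [m1, m2, m1 ^ m2]          # [3,2,2] parity-check code, D = 2

def inner_encode(s):
    return [s] * 3                     # [3,1,3] repetition code, d = 3

def concat_encode(m):
    return [b for s in outer_encode(m) for b in inner_encode(s)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Enumerate all 2^K codewords and take the minimum pairwise distance.
codewords = [concat_encode(list(m)) for m in product([0, 1], repeat=2)]
dmin = min(hamming(c1, c2)
           for i, c1 in enumerate(codewords)
           for c2 in codewords[i + 1:])
print(dmin)   # 6 = d*D, so the bound D' >= dD is met with equality here
```

For this pair the bound is tight; in general D′ can exceed dD.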
2. If Cout and Cin are linear block codes, then Cout∘Cin is also a linear block code.
This property can be easily shown based on the idea of defining a generator matrix for the concatenated code in terms of the generator matrices of Cout and Cin.
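For a small binary example this is easy to verify directly. The sketch below (using the same hypothetical toy pair: a [3,1,3] repetition inner code and a [3,2,2] parity-check outer code) builds a generator matrix from the encodings of the unit messages and checks that the two-stage encoder coincides with the linear map m ↦ mG over GF(2):

```python
from itertools import product

def outer_encode(m):
    m1, m2 = m
    return [m1, m2, m1 ^ m2]

def inner_encode(s):
    return [s] * 3

def concat_encode(m):
    return [b for s in outer_encode(m) for b in inner_encode(s)]

# Generator matrix rows: encodings of the unit messages e_1, e_2.
G = [concat_encode([1, 0]), concat_encode([0, 1])]

def encode_with_G(m):
    # Linear combination of the rows of G over GF(2).
    out = [0] * len(G[0])
    for coeff, row in zip(m, G):
        if coeff:
            out = [a ^ b for a, b in zip(out, row)]
    return out

# The matrix encoder agrees with the two-stage encoder on every message,
# so the concatenated code is the linear code generated by G.
assert all(encode_with_G(list(m)) == concat_encode(list(m))
           for m in product([0, 1], repeat=2))
```

The same construction works over any finite field once the outer symbols are expressed in terms of the inner code's message space.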
A natural approach to decoding concatenated codes is to first decode the inner code and then the outer code. For the algorithm to be practical, it must be polynomial-time in the final block length. Assume that there is a polynomial-time unique decoding algorithm for the outer code; we then need a decoding algorithm for the inner code whose running time is also polynomial in the final block length. The main idea is that if the inner block length is selected to be logarithmic in the size of the outer code, then the decoding algorithm for the inner code may run in exponential time of the inner block length, and we can thus use an exponential-time but optimal maximum likelihood decoder (MLD) for the inner code.
In detail, let the input to the decoder be the vector y = (y1, ..., yN) ∈ (A^n)^N. Then the decoding algorithm is a two-step process:

1. Use the MLD of the inner code Cin to reconstruct a sequence y′ = (y′1, ..., y′N) of outer symbols, where y′i is the decoding of yi.
2. Run the unique decoding algorithm of the outer code Cout on y′.
Now, the time complexity of the first step is O(N·exp(n)), where n = O(log N) is the inner block length. In other words, it is N^O(1) (i.e., polynomial-time) in terms of the outer block length N. As the outer decoding algorithm in step two is assumed to run in polynomial time, the complexity of the overall decoding algorithm is polynomial-time as well.
The decoding algorithm described above can be used to correct any pattern of fewer than dD/4 errors. Using minimum distance decoding, the outer decoder can correct all inputs y′ with fewer than D/2 symbols y′i in error. Similarly, the inner code can reliably correct an input yi if fewer than d/2 inner symbols are erroneous. Thus, for an outer symbol y′i to be incorrect after inner decoding at least d/2 inner symbols must have been in error, and for the outer code to fail this must have happened for at least D/2 outer symbols. Consequently, the total number of inner symbols that must be received incorrectly for the concatenated code to fail must be at least d/2 · D/2 = dD/4.
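The two-step decoder and this error bound can be illustrated on a hypothetical toy pair: a [3,1,3] repetition inner code, whose MLD is a majority vote, and a [3,2,2] parity-check outer code. Here dD/4 = 1.5, so the decoder should correct any single bit error:

```python
def outer_encode(m):
    m1, m2 = m
    return [m1, m2, m1 ^ m2]

def inner_encode(s):
    return [s] * 3

def concat_encode(m):
    return [b for s in outer_encode(m) for b in inner_encode(s)]

def inner_mld(block):
    # Maximum-likelihood decoding of the repetition code = majority vote.
    return int(sum(block) >= 2)

def concat_decode(y):
    # Step 1: MLD-decode each inner block of n = 3 bits.
    y_prime = [inner_mld(y[i:i + 3]) for i in range(0, len(y), 3)]
    # Step 2: unique-decode the outer code. With D = 2 it corrects no
    # outer errors, so the message is simply the first K = 2 symbols.
    return y_prime[:2]

# dD/4 = 3*2/4 = 1.5, so any single bit error must be corrected:
c = concat_encode([1, 0])
for i in range(len(c)):
    y = c.copy()
    y[i] ^= 1                      # flip one transmitted bit
    assert concat_decode(y) == [1, 0]
```

A single flipped bit leaves every inner block with a clear majority, so all outer symbols arrive error-free at step two.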
The algorithm also works if the inner codes are different, e.g., for Justesen codes. The generalized minimum distance algorithm, developed by Forney, can be used to correct up to dD/2 errors.[2] It uses erasure information from the inner code to improve performance of the outer code, and was the first example of an algorithm using soft-decision decoding.[3][4]
Although a simple concatenation scheme was implemented already for the 1971 Mariner Mars orbiter mission,[5] concatenated codes started to be regularly used for deep space communication with the Voyager program, which launched two space probes in 1977.[6] Since then, concatenated codes became the workhorse for efficient error correction coding, and stayed so at least until the invention of turbo codes and LDPC codes.[5][6]
Typically, the inner code is not a block code but a soft-decision convolutional code with a short constraint length, decoded with the Viterbi algorithm.[7] For the outer code, a longer hard-decision block code, frequently a Reed–Solomon code with eight-bit symbols, is used.[1][5] The larger symbol size makes the outer code more robust to error bursts that can occur due to channel impairments, and also because erroneous output of the convolutional code itself is bursty.[1][5] An interleaving layer is usually added between the two codes to spread error bursts across a wider range.[5]
The combination of an inner Viterbi convolutional code with an outer Reed–Solomon code (known as an RSV code) was first used in Voyager 2,[5][8] and it became a popular construction both within and outside of the space sector. It is still notably used today for satellite communications, such as the DVB-S digital television broadcast standard.[9]
In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the DVB-S2 standard, a highly efficient LDPC code is combined with an algebraic outer code in order to remove any resilient errors left over from the inner LDPC code due to its inherent error floor.[10]
A simple concatenation scheme is also used on the compact disc (CD), where an interleaving layer between two Reed–Solomon codes of different sizes spreads errors across various blocks.
The description above is given for what is now called a serially concatenated code. Turbo codes, as first described in 1993, implemented a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information back and forth between the codes.[6] This design has better performance than any previously conceived concatenated codes.
However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the "Galileo code" of the Galileo space probe.[5]