Turbo code

From Wikipedia, the free encyclopedia

In information theory, turbo codes are a class of high-performance forward error correction (FEC) codes developed around 1990–91, but first published in 1993. They were the first practical codes to closely approach the maximum channel capacity or Shannon limit, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are used in 3G/4G mobile communications (e.g., in UMTS and LTE) and in (deep space) satellite communications, as well as in other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes compete with low-density parity-check (LDPC) codes, which provide similar performance. Until the patent for turbo codes expired,[1] the patent-free status of LDPC codes was an important factor in LDPC's continued relevance.[2]

The name "turbo code" arose from the feedback loop used during normal turbo code decoding, which was analogized to the exhaust feedback used for engineturbocharging.Hagenauer has argued the term turbo code is a misnomer since there is no feedback involved in the encoding process.[3]

History


The fundamental patent application for turbo codes was filed on 23 April 1991. The patent application lists Claude Berrou as the sole inventor of turbo codes. The patent filing resulted in several patents, including US Patent 5,446,747, which expired on 29 August 2013.

The first public paper on turbo codes was "Near Shannon Limit Error-correcting Coding and Decoding: Turbo-codes".[4] This paper was published in 1993 in the Proceedings of the IEEE International Communications Conference. The 1993 paper was formed from three separate submissions that were combined due to space constraints. The merger caused the paper to list three authors: Berrou, Glavieux, and Thitimajshima (from Télécom Bretagne, former ENST Bretagne, France). However, it is clear from the original patent filing that Berrou is the sole inventor of turbo codes and that the other authors of the paper contributed material other than the core concepts.[improper synthesis]

Turbo codes were so revolutionary at the time of their introduction that many experts in the field of coding did not believe the reported results. When the performance was confirmed, a small revolution in the world of coding took place that led to the investigation of many other types of iterative signal processing.[5]

The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since the introduction of the original parallel turbo codes in 1993, many other classes of turbo code have been discovered, including serial concatenated convolutional codes and repeat-accumulate codes. Iterative turbo decoding methods have also been applied to more conventional FEC systems, including Reed–Solomon corrected convolutional codes, although these systems are too complex for practical implementations of iterative decoders. Turbo equalization also flowed from the concept of turbo coding.

In addition to turbo codes, Berrou also invented recursive systematic convolutional (RSC) codes, which are used in the example implementation of turbo codes described in the patent. Turbo codes that use RSC codes seem to perform better than turbo codes that do not use RSC codes.

Prior to turbo codes, the best constructions were serial concatenated codes based on an outer Reed–Solomon error correction code combined with an inner Viterbi-decoded short constraint length convolutional code, also known as RSV codes.

In a later paper, Berrou gave credit to the intuition of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the interest of probabilistic processing." He adds: "R. Gallager and M. Tanner had already imagined coding and decoding techniques whose general principles are closely related," although the necessary calculations were impractical at that time.[6]

An example encoder


There are many different instances of turbo codes, using different component encoders, input/output ratios, interleavers, and puncturing patterns. This example encoder implementation describes a classic turbo encoder and demonstrates the general design of parallel turbo codes.

This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block is n/2 parity bits for the payload data, computed using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity bits for a known permutation of the payload data, again computed using an RSC code. Thus, two redundant but different sub-blocks of parity bits are sent with the payload. The complete block has m + n bits of data with a code rate of m/(m + n). The permutation of the payload data is carried out by a device called an interleaver.
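
For concreteness, the following is a minimal sketch of this structure in Python. The component code, a toy recursive systematic convolutional (RSC) encoder with octal generators (7, 5), the random interleaver, and the block length are illustrative assumptions rather than the parameters of any particular standard, and no puncturing is applied, so each parity sub-block here contains m bits instead of n/2.

    import random

    def rsc_parity(bits):
        """Parity stream of a toy (1, 5/7) recursive systematic convolutional code."""
        r1 = r2 = 0                        # two memory registers (constraint length 3)
        parity = []
        for d in bits:
            a = d ^ r1 ^ r2                # recursive feedback, g0 = 1 + D + D^2 (octal 7)
            parity.append(a ^ r2)          # feed-forward output, g1 = 1 + D^2 (octal 5)
            r2, r1 = r1, a                 # shift the registers
        return parity

    def turbo_encode(payload, interleaver):
        """Parallel concatenation: systematic bits plus two RSC parity sub-blocks."""
        systematic = list(payload)                       # the m payload bits, sent unchanged
        parity1 = rsc_parity(payload)                    # parity of the payload itself
        permuted = [payload[i] for i in interleaver]     # known permutation of the payload
        parity2 = rsc_parity(permuted)                   # parity of the permuted payload
        return systematic, parity1, parity2              # overall rate 1/3 before puncturing

    m = 16
    payload = [random.randint(0, 1) for _ in range(m)]
    interleaver = random.sample(range(m), m)             # a toy random interleaver
    sys_bits, p1, p2 = turbo_encode(payload, interleaver)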

Hardware-wise, this turbo code encoder consists of two identical RSC coders, C_1 and C_2, as depicted in the figure, which are connected to each other using a concatenation scheme called parallel concatenation.

In the figure, M is a memory register. The delay line and interleaver force the input bits d_k to appear in different sequences. At the first iteration, the input sequence d_k appears at both outputs of the encoder, x_k and y_{1k} or y_{2k}, due to the encoder's systematic nature. If the encoders C_1 and C_2 are used in n_1 and n_2 iterations, their rates are respectively equal to

\begin{aligned}
R_1 &= \frac{n_1 + n_2}{2n_1 + n_2} \\
R_2 &= \frac{n_1 + n_2}{n_1 + 2n_2}
\end{aligned}
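
As a quick numerical check of these expressions, the hypothetical helper below evaluates both rates exactly; for instance, with n_1 = n_2 the two rates coincide at 2/3.

    from fractions import Fraction

    def component_rates(n1, n2):
        """Evaluate R1 and R2 from the expressions above (illustrative helper)."""
        return Fraction(n1 + n2, 2 * n1 + n2), Fraction(n1 + n2, n1 + 2 * n2)

    print(component_rates(3, 3))   # (Fraction(2, 3), Fraction(2, 3)) -- equal when n1 == n2
    print(component_rates(4, 2))   # (Fraction(3, 5), Fraction(3, 4))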

The decoder


The decoder is built in a similar way to the above encoder. Two elementary decoders are interconnected, but in series rather than in parallel. The DEC_1 decoder operates at a lower rate (i.e., R_1), and is thus intended for the C_1 encoder; DEC_2 is for C_2 correspondingly. DEC_1 yields a soft decision, which causes a delay L_1. The same delay is caused by the delay line in the encoder. DEC_2's operation causes a delay L_2.

An interleaver installed between the two decoders is used here to scatter error bursts coming from the DEC_1 output. The DI block is a demultiplexing and insertion module. It works as a switch, redirecting input bits to DEC_1 at one moment and to DEC_2 at another. In the OFF state, it feeds both the y_{1k} and y_{2k} inputs with padding bits (zeros).

Consider a memoryless AWGN channel, and assume that at the k-th iteration, the decoder receives a pair of random variables:

\begin{aligned}
x_k &= (2d_k - 1) + a_k \\
y_k &= 2(Y_k - 1) + b_k
\end{aligned}

where a_k and b_k are independent noise components having the same variance σ². Y_k is the k-th bit from the encoder output y_k.
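
The channel model can be reproduced in a few lines. The sketch below follows the two expressions above verbatim; the block length, noise level, and random stand-in parity bits are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.8                               # noise standard deviation (illustrative)
    d = rng.integers(0, 2, size=1000)         # systematic data bits d_k
    Y = rng.integers(0, 2, size=1000)         # stand-in for the encoder parity bits Y_k

    x = (2 * d - 1) + rng.normal(0.0, sigma, d.size)   # x_k = (2 d_k - 1) + a_k
    y = 2 * (Y - 1) + rng.normal(0.0, sigma, Y.size)   # y_k = 2 (Y_k - 1) + b_k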

Redundant information is demultiplexed and sent through DI to DEC_1 (when y_k = y_{1k}) and to DEC_2 (when y_k = y_{2k}).

DEC_1 yields a soft decision; i.e.:

\Lambda(d_k) = \log \frac{p(d_k = 1)}{p(d_k = 0)}

and delivers it to DEC_2. Λ(d_k) is called the logarithm of the likelihood ratio (LLR). p(d_k = i), i ∈ {0, 1}, is the a posteriori probability (APP) of the data bit d_k, which shows the probability of interpreting a received bit d_k as i. Taking the LLR into account, DEC_2 yields a hard decision; i.e., a decoded bit.
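
A minimal illustration of this LLR and of the corresponding hard decision, using made-up probability values:

    import math

    def llr(p1):
        """LLR of a bit whose a posteriori probability of being 1 is p1."""
        return math.log(p1 / (1.0 - p1))

    def hard_decision(l):
        """A positive LLR favours d_k = 1, a negative LLR favours d_k = 0."""
        return 1 if l > 0 else 0

    print(llr(0.9), hard_decision(llr(0.9)))   # about  2.20 -> 1
    print(llr(0.2), hard_decision(llr(0.2)))   # about -1.39 -> 0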

It is known that the Viterbi algorithm is unable to calculate the APP, so it cannot be used in DEC_1. Instead, a modified BCJR algorithm is used. For DEC_2, the Viterbi algorithm is an appropriate one.

However, the depicted structure is not optimal, because DEC_1 uses only a fraction of the available redundant information. In order to improve the structure, a feedback loop is used (see the dotted line in the figure).

Soft decision approach


The decoder front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1, and is also called a soft bit. The integer could be drawn from the range [−127, 127], where:

  • −127 means "certainly 0"
  • −100 means "very likely 0"
  • 0 means "it could be either 0 or 1"
  • 100 means "very likely 1"
  • 127 means "certainly 1"

This introduces a probabilistic aspect to the data stream from the front end, but it conveys more information about each bit than just 0 or 1.

For example, for each bit, the front end of a traditional wireless receiver has to decide if an internal analog voltage is above or below a given threshold voltage level. For a turbo code decoder, the front end would provide an integer measure of how far the internal voltage is from the given threshold.
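
A sketch of this kind of front-end quantization is shown below; the threshold, scaling factor, and sample voltages are illustrative assumptions rather than values from any specific receiver.

    def soft_bit(voltage, threshold=0.0, scale=127.0):
        """Map the distance from the decision threshold to an integer in [-127, 127]."""
        value = round((voltage - threshold) * scale)
        return max(-127, min(127, value))      # clip to the soft-bit range

    print(soft_bit(-1.2))    # -127 : "certainly 0"
    print(soft_bit(0.05))    #    6 : "could be either, slight lean towards 1"
    print(soft_bit(0.9))     #  114 : "very likely 1"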

To decode the m + n-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the n/2-bit parity sub-blocks. Both decoders use the sub-block of m likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.

Solving hypotheses to find bits


The key innovation of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of m bits in the payload sub-block. The hypothesis bit-patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload. Then they compare these new hypotheses. This iterative process continues until the two decoders come up with the same hypothesis for the m-bit pattern of the payload, typically in 15 to 18 cycles.
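
This exchange of likelihood information can be sketched structurally as follows. In this fragment, siso_decode stands in for a soft-in/soft-out component decoder (such as a BCJR decoder) and is not implemented here; all names are hypothetical, and a fixed number of passes replaces the convergence test described above. Only the flow of extrinsic log-likelihood values between the two decoders, through the interleaver and back, is shown.

    import numpy as np

    def turbo_decode(sys_llr, par1_llr, par2_llr, perm, siso_decode, iterations=16):
        """Exchange extrinsic LLRs between two component decoders (structural sketch only).

        sys_llr, par1_llr, par2_llr are float LLR arrays from the front end; perm is the
        interleaver index array; siso_decode is a hypothetical soft-in/soft-out decoder.
        """
        extrinsic = np.zeros_like(sys_llr)              # a priori information for decoder 1
        for _ in range(iterations):                     # the text reports 15 to 18 cycles in practice
            # Decoder 1: natural-order payload likelihoods plus the first parity sub-block.
            app1 = siso_decode(sys_llr + extrinsic, par1_llr)
            extrinsic1 = app1 - sys_llr - extrinsic     # keep only the newly derived information
            # Decoder 2: interleaved payload likelihoods plus the second parity sub-block.
            input2 = (sys_llr + extrinsic1)[perm]
            app2 = siso_decode(input2, par2_llr)
            extrinsic2 = app2 - input2
            extrinsic = np.empty_like(extrinsic2)
            extrinsic[perm] = extrinsic2                # de-interleave for the next pass
        decisions = (sys_llr + extrinsic1 + extrinsic) > 0   # combine all information, hard-decide
        return decisions.astype(int)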

An analogy can be drawn between this process and that of solving cross-reference puzzles like crossword or sudoku. Consider a partially completed, possibly garbled crossword puzzle. Two puzzle solvers (decoders) are trying to solve it: one possessing only the "down" clues (parity bits), and the other possessing only the "across" clues. To start, both solvers guess the answers (hypotheses) to their own clues, noting down how confident they are in each letter (payload bit). Then, they compare notes, by exchanging answers and confidence ratings with each other, noticing where and how they differ. Based on this new knowledge, they both come up with updated answers and confidence ratings, repeating the whole process until they converge to the same solution.

Performance


Turbo codes perform well due to the attractive combination of the code's random appearance on the channel together with the physically realisable decoding structure. Turbo codes are affected by an error floor.

Practical applications using turbo codes


Telecommunications: turbo codes are used in 3G/4G mobile communications (e.g., UMTS and LTE), in deep-space and satellite communications, and in the return channel of DVB-RCS satellite systems.[7]

Bayesian formulation


From an artificial intelligence viewpoint, turbo codes can be considered as an instance of loopy belief propagation in Bayesian networks.[8]


References

  1. ^ US 5446747.
  2. ^ Erico Guizzo (1 March 2004). "CLOSING IN ON THE PERFECT CODE". IEEE Spectrum. Archived from the original on 23 April 2023. "Another advantage, perhaps the biggest of all, is that the LDPC patents have expired, so companies can use them without having to pay for intellectual-property rights."
  3. ^ Hagenauer, Joachim; Offer, Elke; Papke, Luiz (March 1996). "Iterative Decoding of Binary Block and Convolutional Codes" (PDF). IEEE Transactions on Information Theory. 42 (2): 429–445. doi:10.1109/18.485714. Archived from the original (PDF) on 11 June 2013. Retrieved 20 March 2014.
  4. ^ Berrou, Claude; Glavieux, Alain; Thitimajshima, Punya (1993), "Near Shannon Limit Error-correcting Coding and Decoding: Turbo-codes", Proceedings of IEEE International Communications Conference, vol. 2, pp. 1064–70, doi:10.1109/ICC.1993.397441, S2CID 17770377, retrieved 11 February 2010.
  5. ^ Erico Guizzo (1 March 2004). "CLOSING IN ON THE PERFECT CODE". IEEE Spectrum. Archived from the original on 23 April 2023.
  6. ^ Berrou, Claude, The ten-year-old turbo codes are entering into service, Bretagne, France, retrieved 11 February 2010.
  7. ^ Digital Video Broadcasting (DVB); Interaction channel for Satellite Distribution Systems, ETSI EN 301 790, V1.5.1, May 2009.
  8. ^ McEliece, Robert J.; MacKay, David J. C.; Cheng, Jung-Fu (1998), "Turbo decoding as an instance of Pearl's 'belief propagation' algorithm" (PDF), IEEE Journal on Selected Areas in Communications, 16 (2): 140–152, doi:10.1109/49.661103, ISSN 0733-8716.
