Turbo Codes
This module supports encoding and decoding of Turbo codes [Berrou], e.g., as used in the LTE wireless standard. The convolutional component encoders and decoders are composed of the ConvEncoder and BCJRDecoder layers, respectively.
Please note that various notations are used in the literature to represent the generator polynomials for the underlying convolutional codes. For simplicity, TurboEncoder only accepts the binary format, i.e., 10011, for the generator polynomial, which corresponds to the polynomial \(1 + D^3 + D^4\).
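This binary convention can be illustrated with a small helper (hypothetical, not part of Sionna): the i-th character from the left is the coefficient of \(D^i\).

```python
def poly_str(bits):
    """Render a binary generator-polynomial string as a polynomial in D.

    Convention assumed here: the i-th character (from the left) is the
    coefficient of D^i, so "10011" maps to 1 + D^3 + D^4.
    """
    terms = ["1" if i == 0 else f"D^{i}"
             for i, b in enumerate(bits) if b == "1"]
    return " + ".join(terms)

print(poly_str("10011"))  # 1 + D^3 + D^4
```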
The following code snippet shows how to set up a rate-1/3, constraint-length-4 TurboEncoder and the corresponding TurboDecoder. You can find further examples in the Channel Coding Tutorial Notebook.
Setting-up:
encoder = TurboEncoder(constraint_length=4, # Desired constraint length of the polynomials
                       rate=1/3,            # Desired rate of the Turbo code
                       terminate=True)      # Terminate the constituent convolutional encoders to the all-zero state

# or
encoder = TurboEncoder(gen_poly=gen_poly,   # Generator polynomials to use in the underlying convolutional encoders
                       rate=1/3,            # Rate of the desired Turbo code
                       terminate=False)     # Do not terminate the constituent convolutional encoders

# the decoder can be initialized with a reference to the encoder
decoder = TurboDecoder(encoder,
                       num_iter=6,          # Number of iterations between the component BCJR decoders
                       algorithm="map",     # can also be "maxlog"
                       hard_out=True)       # hard-decide the output
Running the encoder / decoder:
# --- encoder ---
# u contains the information bits to be encoded and has shape [...,k].
# c contains the Turbo-encoded codewords and has shape [...,n], where n = k/rate when terminate is False.
c = encoder(u)

# --- decoder ---
# llr contains the log-likelihood ratio values from the demapper and has shape [...,n].
# u_hat contains the estimated information bits and has shape [...,k].
u_hat = decoder(llr)
- class sionna.phy.fec.turbo.TurboEncoder(gen_poly=None, constraint_length=3, rate=0.3333333333333333, terminate=False, interleaver_type='3GPP', precision=None, **kwargs)[source]
Performs encoding of information bits to a Turbo code codeword
Implements the standard Turbo code framework [Berrou]: two identical rate-1/2 convolutional encoders (ConvEncoder) are combined to produce a rate-1/3 Turbo code. Further, puncturing to attain a rate-1/2 Turbo code is supported.
- Parameters:
gen_poly (tuple | None (default)) – Tuple of strings with each string being a 0,1 sequence. If None, constraint_length must be provided.
constraint_length (int, 3...6) – Valid values are between 3 and 6 inclusive. Only required if gen_poly is None.
rate (float, 1/2 | 1/3) – Valid values are 1/3 and 1/2. Note that rate here denotes the design rate of the Turbo code. If terminate is True, a small rate loss occurs.
terminate (bool, (default False)) – If True, the underlying convolutional encoders are terminated to the all-zero state. If terminated, the true rate of the code is slightly lower than rate.
interleaver_type (str, "3GPP" | "random") – Determines the choice of the interleaver that interleaves the message bits before input to the second convolutional encoder. If "3GPP", the Turbo code interleaver from the 3GPP LTE standard [3GPPTS36212_Turbo] is used. If "random", a random interleaver is used.
precision (None (default) | 'single' | 'double') – Precision used for internal calculations and outputs. If set to None, the default precision is used.
- Input:
inputs ([…,k], tf.float32) – Tensor of information bits, where k is the information length
- Output:
[…,k/rate], tf.float32 – Tensor where rate is provided as an input parameter. The output is the encoded codeword for the input information tensor. When terminate is True, the effective rate of the Turbo code is slightly less than rate.
Note
Various notations are used in the literature to represent the generator polynomials for convolutional codes. For simplicity, TurboEncoder only accepts the binary format, i.e., 10011, for the gen_poly argument, which corresponds to the polynomial \(1 + D^3 + D^4\). Note that Turbo codes require the underlying convolutional encoders to be recursive systematic encoders; only then can the channel output from the systematic part of the first encoder be used to decode the second encoder.
Also note that constraint_length and memory are two different terms often used to denote the strength of a convolutional code. In this sub-package we use constraint_length. For example, the polynomial 10011 has a constraint_length of 5, but its memory is only 4.
When terminate is True, the true rate of the Turbo code is slightly lower than rate. It can be computed as \(\frac{k}{\frac{k}{r}+\frac{4\mu}{3r}}\), where r denotes rate and \(\mu\) is constraint_length - 1. For example, in 3GPP, with constraint_length = 4, terminate = True, and rate = 1/3, the true rate equals \(\frac{k}{3k+12}\).
- property coderate
Rate of the code used in the encoder
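The true-rate formula from the note above can be checked numerically with a small sketch (a hypothetical helper, not part of the library):

```python
def true_rate(k, rate, constraint_length):
    """True rate of a terminated Turbo code: k / (k/r + 4*mu/(3r)),
    with mu = constraint_length - 1, as given in the note above."""
    mu = constraint_length - 1
    return k / (k / rate + 4 * mu / (3 * rate))

# 3GPP example: constraint_length=4, rate=1/3  ->  k / (3k + 12)
k = 120
print(true_rate(k, 1/3, 4))  # slightly below the design rate 1/3
```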
- property constraint_length
Constraint length of the encoder
- property gen_poly
Generator polynomial used by the encoder
- property k
Number of information bits per codeword
- property n
Number of codeword bits
- property punct_pattern
Puncturing pattern for the Turbo codeword
- property terminate
Indicates if the convolutional encoders are terminated
- property trellis
Trellis object used during encoding
- class sionna.phy.fec.turbo.TurboDecoder(encoder=None, gen_poly=None, rate=0.3333333333333333, constraint_length=None, interleaver='3GPP', terminate=False, num_iter=6, hard_out=True, algorithm='map', precision=None, **kwargs)[source]
Turbo code decoder based on BCJR component decoders [Berrou].
Takes as input LLRs and returns LLRs or hard-decided bits, i.e., an estimate of the information tensor.
This decoder is based on the BCJRDecoder and, thus, internally instantiates two BCJRDecoder blocks.
- Parameters:
encoder (TurboEncoder) – If encoder is provided as input, the following input parameters are not required and will be ignored: gen_poly, rate, constraint_length, terminate, interleaver. They will be inferred from the encoder object itself. If encoder is None, the above parameters must be provided explicitly.
gen_poly (tuple or None) – Tuple of strings with each string being a 0,1 sequence. If None, rate and constraint_length must be provided.
rate (float, 1/3 | 1/2) – Rate of the Turbo code. Valid values are 1/3 and 1/2. Note that gen_poly, if provided, is used to encode the underlying convolutional code, which traditionally has rate 1/2.
constraint_length (int, 3...6) – Valid values are between 3 and 6 inclusive. Only required if encoder and gen_poly are None.
interleaver (str, None (default) | "3GPP" | "Random") – If "3GPP", the internal interleaver for Turbo codes as specified in [3GPPTS36212_Turbo] will be used. Only required if encoder is None.
terminate (bool, (default False)) – If True, the two underlying convolutional encoders are assumed to have been terminated to the all-zero state.
num_iter (int) – Number of Turbo decoding iterations. Each iteration runs one BCJR decoder for each of the underlying convolutional code components.
hard_out (bool, (default True)) – Indicates whether to output hard or soft decisions on the decoded information vector. True implies a hard-decided information vector of 0/1's is output; False implies decoded LLRs of the information bits are output.
algorithm (str, "map" (default) | "log" | "maxlog") – Indicates the implemented BCJR algorithm, where map denotes the exact MAP algorithm, log indicates the exact MAP implementation in the log-domain, and maxlog indicates the approximate MAP implementation in the log-domain, where \(\log(e^{a}+e^{b}) \approx \max(a,b)\).
precision (None (default) | 'single' | 'double') – Precision used for internal calculations and outputs. If set to None, the default precision is used.
- Input:
llr_ch (tf.float) – Tensor of shape […,n] containing the (noisy) channel output symbols, where n is the codeword length.
- Output:
tf.float – Tensor of shape […, coderate * n] containing the estimates of the information bit tensor.
Note
For decoding, input logits defined as \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the rest of Sionna. Internally, log-likelihood ratios (LLRs) with the definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used.
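Since the decoder expects logits \(\log \frac{p(x=1)}{p(x=0)}\) while classic Turbo literature often works with LLRs \(\log \frac{p(x=0)}{p(x=1)}\), converting between the two conventions is just a sign flip. A minimal illustration (not library code):

```python
def logits_to_llrs(logits):
    """Convert logits log p(x=1)/p(x=0) into LLRs log p(x=0)/p(x=1).

    The two conventions differ only by the sign of each value.
    """
    return [-v for v in logits]

print(logits_to_llrs([2.0, -0.5]))  # [-2.0, 0.5]
```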
- property coderate
Rate of the code used in the encoder
- property constraint_length
Constraint length of the encoder
- depuncture(y)[source]
Given a tensor y of shape [batch, n], depuncture() scatters the elements of y into shape [batch, 3*rate*n], where the extra elements are filled with 0.
For example, if the input is y, rate is 1/2, and punct_pattern is [1, 1, 0, 1, 0, 1], then the output is [y[0], y[1], 0., y[2], 0., y[3], y[4], y[5], 0., …].
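The scatter behavior described above can be sketched in plain Python (an illustrative reimplementation over lists, not the library code); a 1 in the pattern marks a position that was transmitted:

```python
def depuncture_ref(y, punct_pattern):
    """Scatter received values y back into a full-length codeword,
    writing 0.0 at punctured positions. punct_pattern holds 0/1
    keep-flags over one period; a 1 means the position was sent."""
    pattern = list(punct_pattern)
    # each period of the pattern keeps sum(pattern) of len(pattern) values
    n_out = len(y) * len(pattern) // sum(pattern)
    out = [0.0] * n_out
    values = iter(y)
    for pos in range(n_out):
        if pattern[pos % len(pattern)]:
            out[pos] = next(values)
    return out

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(depuncture_ref(y, [1, 1, 0, 1, 0, 1]))
# [1.0, 2.0, 0.0, 3.0, 0.0, 4.0, 5.0, 6.0, 0.0]
```

This reproduces the example from the docstring: y[0], y[1] pass through, a zero is inserted at the first punctured slot, and so on.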
- property gen_poly
Generator polynomial used by the encoder
- property k
Number of information bits per codeword
- property n
Number of codeword bits
- property trellis
Trellis object used during encoding
Utility Functions
- class sionna.phy.fec.turbo.TurboTermination(constraint_length, conv_n=2, num_conv_encs=2, num_bitstreams=3, **kwargs)[source]
Termination object; handles the transformation of termination bits from the convolutional encoders to a Turbo codeword.
Similarly, it handles the transformation of channel symbols corresponding to the termination of a Turbo codeword to the underlying convolutional codewords.
- Parameters:
constraint_length (int) – Constraint length of the convolutional encoder used in the Turbo code. Note that the memory of the encoder is constraint_length - 1.
conv_n (int) – Number of output bits for one state transition in the underlying convolutional encoder
num_conv_encs (int) – Number of parallel convolutional encoders used in the Turbo code
num_bitstreams (int) – Number of output bit streams from the Turbo code
precision (None (default) | 'single' | 'double') – Precision used for internal calculations and outputs. If set to None, the default precision is used.
- get_num_term_syms()[source]
Computes the number of termination symbols for the Turbo code based on the underlying convolutional code parameters, primarily the memory \(\mu\). Note that it is assumed that one Turbo symbol comprises num_bitstreams bits.
- Input:
None
- Output:
turbo_term_syms (float) – Total number of termination symbols for the Turbo code. One symbol equals num_bitstreams bits.
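The symbol count follows from the formula \(\lceil \frac{2 \cdot conv\_n \cdot \mu}{num\_bitstreams} \rceil\) given further below; a short sketch (a hypothetical helper whose defaults mirror the TurboTermination parameters above):

```python
import math

def num_term_syms(constraint_length, conv_n=2, num_conv_encs=2,
                  num_bitstreams=3):
    """Total termination bits from all component encoders, rounded up
    to whole num_bitstreams-bit Turbo symbols."""
    mu = constraint_length - 1               # encoder memory
    term_bits = num_conv_encs * conv_n * mu  # bits produced by termination
    return math.ceil(term_bits / num_bitstreams)

print(num_term_syms(4))  # 4 symbols (12 bits) for mu = 3
```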
- term_bits_turbo2conv(term_bits)[source]
This method splits the termination symbols from a Turbo codewordto the termination symbols corresponding to the two convolutionalencoders, respectively.
Let’s assume\(\mu=4\) and the underlying convolutional encodersare systematic and rate-1/2, for demonstration purposes.
Let
term_bitstensor, corresponding to the termination symbols ofthe Turbo codeword be as following:\(y = [x_1(K), z_1(K), x_1(K+1), z_1(K+1), x_1(K+2), z_1(K+2)\),\(x_1(K+3), z_1(K+3), x_2(K), z_2(K), x_2(K+1), z_2(K+1),\)\(x_2(K+2), z_2(K+2), x_2(K+3), z_2(K+3), 0, 0]\)
The two termination tensors corresponding to the convolutional encodersare:\(y[0,..., 2\mu]\),\(y[2\mu,..., 4\mu]\). The output from thismethod is a tuple of two tensors, each ofsize\(2\mu\) and shape\([\mu,2]\).
\([[x_1(K), z_1(K)]\),
\([x_1(K+1), z_1(K+1)]\),
\([x_1(K+2), z_1(K+2)]\),
\([x_1(K+3), z_1(K+3)]]\)
and
\([[x_2(K), z_2(K)],\)
\([x_2(K+1), z_2(K+1)]\),
\([x_2(K+2), z_2(K+2)]\),
\([x_2(K+3), z_2(K+3)]]\)
- Input:
term_bits (tf.float32) – Channel output of the Turbo codeword, corresponding to thetermination part
- Output:
tf.float32 – Two tensors of channel outputs, corresponding to encoders 1 and 2,respectively
- termbits_conv2turbo(term_bits1, term_bits2)[source]
This method merges term_bits1 and term_bits2, the termination bit streams from the two convolutional encoders, into a bit stream corresponding to the Turbo codeword.
Let term_bits1 and term_bits2 be:
\([x_1(K), z_1(K), x_1(K+1), z_1(K+1),..., x_1(K+\mu-1), z_1(K+\mu-1)]\)
\([x_2(K), z_2(K), x_2(K+1), z_2(K+1),..., x_2(K+\mu-1), z_2(K+\mu-1)]\)
where \(x_i, z_i\) are the systematic and parity bit streams, respectively, for the rate-1/2 convolutional encoder i, i = 1, 2.
In the example output below, we assume \(\mu=4\) to demonstrate the zero-padding at the end. Zero-padding is applied such that the total length is divisible by num_bitstreams (default 3), which is the number of Turbo bit streams.
Assume num_bitstreams = 3. Then the number of termination symbols for the TurboEncoder is \(\lceil \frac{2 \cdot conv\_n \cdot \mu}{3} \rceil\):
\([x_1(K), z_1(K), x_1(K+1)]\)
\([z_1(K+1), x_1(K+2), z_1(K+2)]\)
\([x_1(K+3), z_1(K+3), x_2(K)]\)
\([z_2(K), x_2(K+1), z_2(K+1)]\)
\([x_2(K+2), z_2(K+2), x_2(K+3)]\)
\([z_2(K+3), 0, 0]\)
Therefore, the output from this method is a one-dimensional vector where all Turbo symbols are concatenated together:
\([x_1(K), z_1(K), x_1(K+1), z_1(K+1), x_1(K+2), z_1(K+2), x_1(K+3),\)
\(z_1(K+3), x_2(K),z_2(K), x_2(K+1), z_2(K+1), x_2(K+2), z_2(K+2),\)
\(x_2(K+3), z_2(K+3), 0, 0]\)
- Input:
term_bits1 (tf.int32) – 2+D Tensor containing termination bits from convolutional encoder 1
term_bits2 (tf.int32) – 2+D Tensor containing termination bits from convolutional encoder 2
- Output:
tf.int32 – Tensor of termination bits. The output is obtained byconcatenating the inputs and then adding right zero-padding ifneeded.
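The merge-and-pad behavior described above can be sketched over plain lists (an illustrative reimplementation, not the library code):

```python
def merge_term_bits(term_bits1, term_bits2, num_bitstreams=3):
    """Concatenate the two termination bit streams and zero-pad so the
    total length is divisible by num_bitstreams."""
    merged = list(term_bits1) + list(term_bits2)
    pad = (-len(merged)) % num_bitstreams  # zeros needed on the right
    return merged + [0] * pad

# mu = 4, rate-1/2 encoders: 2*mu = 8 bits per encoder, 16 in total,
# padded with two zeros to 18 bits = 6 Turbo symbols of 3 bits each
out = merge_term_bits(list(range(8)), list(range(8)))
print(len(out), out[-2:])  # 18 [0, 0]
```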
- sionna.phy.fec.turbo.utils.polynomial_selector(constraint_length)[source]
Returns the generator polynomials for rate-1/2 convolutional codes for a given constraint_length
- Input:
constraint_length (int) – An integer defining the desired constraint length of the encoder. The memory of the encoder is constraint_length - 1.
- Output:
gen_poly (tuple) – Tuple of strings with each string being a 0,1 sequence, where each polynomial is represented in binary form.
Note
Please note that the polynomials are optimized for recursive systematic convolutional (RSC) codes and are not necessarily the same as those used in the polynomial_selector of the convolutional codes sub-package.
- sionna.phy.fec.turbo.utils.puncture_pattern(turbo_coderate, conv_coderate)[source]
This method returns the puncturing pattern such that the Turbo code has rate turbo_coderate, given that the underlying convolutional encoder has rate conv_coderate.
- Input:
turbo_coderate (float) – Desired coderate of the Turbo code
conv_coderate (float) – Coderate of the underlying convolutional encoder. Currently, only rate = 0.5 is supported.
- Output:
tf.bool – 2D tensor indicating the positions to be punctured.
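For intuition, a rate-1/2 pattern over the three Turbo bit streams can be sketched as below. This is an assumption-laden illustration (LTE-style alternating parity puncturing with the systematic stream always kept), not necessarily the exact pattern returned by puncture_pattern:

```python
def sketch_puncture_pattern(turbo_coderate):
    """Rows are keep-flags over the three Turbo bit streams
    [systematic, parity1, parity2], one row per information bit
    (assumed LTE-style alternating parity puncturing)."""
    if turbo_coderate == 1/3:
        return [[1, 1, 1]]            # mother code, no puncturing
    if turbo_coderate == 1/2:
        return [[1, 1, 0],            # keep systematic + parity1
                [1, 0, 1]]            # keep systematic + parity2
    raise ValueError("only rates 1/3 and 1/2 are supported")

pattern = sketch_puncture_pattern(1/2)
kept = sum(sum(row) for row in pattern)
# 2 information bits per period, 4 coded bits kept -> rate 1/2
print(len(pattern) / kept)  # 0.5
```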
- References: