Low-Density Parity-Check (LDPC)

The low-density parity-check (LDPC) code module supports 5G-compliant LDPC codes and allows iterative belief propagation (BP) decoding. Further, the module supports rate-matching for 5G and provides a generic linear encoder.

The following code snippets show how to set up and run a rate-matched 5G-compliant LDPC encoder and a corresponding belief propagation (BP) decoder.

First, we need to create instances of LDPC5GEncoder and LDPC5GDecoder:

encoder = LDPC5GEncoder(k=100, # number of information bits (input)
                        n=200) # number of codeword bits (output)

decoder = LDPC5GDecoder(encoder=encoder,
                        num_iter=20, # number of BP iterations
                        return_infobits=True)

Now, the encoder and decoder can be used by:

# --- encoder ---
# u contains the information bits to be encoded and has shape [..., k].
# c contains the encoded codewords and has shape [..., n].
c = encoder(u)

# --- decoder ---
# llr contains the log-likelihood ratios from the demapper and has shape [..., n].
# u_hat contains the estimated information bits and has shape [..., k].
u_hat = decoder(llr)
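The two snippets can be combined into a self-contained sketch. Note that the BPSK mapping and LLR computation below are written out by hand for illustration and are not part of this module; all parameter values are arbitrary.

import tensorflow as tf
from sionna.phy.fec.ldpc.encoding import LDPC5GEncoder
from sionna.phy.fec.ldpc.decoding import LDPC5GDecoder

encoder = LDPC5GEncoder(k=100, n=200)
decoder = LDPC5GDecoder(encoder=encoder, num_iter=20, return_infobits=True)

# random information bits of shape [batch_size, k]
batch_size = 10
u = tf.cast(tf.random.uniform([batch_size, 100], 0, 2, tf.int32), tf.float32)
c = encoder(u)  # codewords of shape [batch_size, n]

# BPSK over AWGN with noise variance no: bit 0 -> +1, bit 1 -> -1
no = 0.5
y = (1.0 - 2.0 * c) + tf.sqrt(no) * tf.random.normal(tf.shape(c))
llr = -2.0 * y / no  # logits log p(x=1)/p(x=0), as expected by the decoder

u_hat = decoder(llr)  # hard-decided information bits
print("BER:", tf.reduce_mean(tf.abs(u - u_hat)).numpy())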
class sionna.phy.fec.ldpc.encoding.LDPC5GEncoder(k, n, num_bits_per_symbol=None, bg=None, precision=None, **kwargs)

5G NR LDPC encoder following 3GPP TS 38.212, including rate-matching.

The implementation follows the 3GPP NR initiative [3GPPTS38212_LDPC].

Parameters:
  • k (int) – Number of information bits per codeword.

  • n (int) – Desired codeword length.

  • num_bits_per_symbol (None (default) | int) – Number of bits per QAM symbol. If this parameter is explicitly provided, the codeword will be interleaved after rate-matching as specified in Sec. 5.4.2.2 of [3GPPTS38212_LDPC].

  • bg (None (default) | "bg1" | "bg2") – Basegraph to be used for the code construction. If None is provided, the encoder automatically selects the basegraph according to [3GPPTS38212_LDPC].

  • precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the default precision is used.

Input:

bits ([…,k], tf.float) – Binary tensor containing the information bits to be encoded.

Output:

[…,n], tf.float – Binary tensor of the same shape as the input, except that the last dimension is changed to n; contains the encoded codeword bits.

Note

As specified in [3GPPTS38212_LDPC], the encoder also performs rate-matching (puncturing and shortening). Thus, the corresponding decoder needs to invert these operations, i.e., it must be compatible with the 5G encoding scheme.

property coderate

Coderate of the LDPC code after rate-matching

generate_out_int(n, num_bits_per_symbol)

Generates the LDPC output interleaver sequence as defined in Sec. 5.4.2.2 of [3GPPTS38212_LDPC].

Parameters:
  • n (int) – Desired output sequence length.

  • num_bits_per_symbol (int) – Number of bits per QAM symbol, i.e., the modulation order.

Output:
  • perm_seq (ndarray of length n) – Containing the permuted indices.

  • perm_seq_inv (ndarray of length n) – Containing the inverse permuted indices.

Note

The interleaver pattern depends on the modulation order and helps to reduce dependencies in bit-interleaved coded modulation (BICM) schemes combined with higher-order modulation.
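A brief usage sketch (values assumed for illustration; encoder is the instance created at the top of this section, and 16-QAM corresponds to num_bits_per_symbol=4):

# interleaver pattern and its inverse for a length-200 output and 16-QAM
perm_seq, perm_seq_inv = encoder.generate_out_int(n=200, num_bits_per_symbol=4)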

property k

Number of input information bits

property k_ldpc

Number of LDPC information bits after rate-matching

property n

Number of output codeword bits

property n_ldpc

Number of LDPC codeword bits before rate-matching

property num_bits_per_symbol

Modulation order used for the rate-matching output interleaver

property out_int

Output interleaver sequence as defined in Sec. 5.4.2.2 of [3GPPTS38212_LDPC]

property out_int_inv

Inverse output interleaver sequence as defined in Sec. 5.4.2.2 of [3GPPTS38212_LDPC]

property pcm

Parity-check matrix for given code parameters

property z

Lifting factor of the basegraph

class sionna.phy.fec.ldpc.decoding.LDPCBPDecoder(pcm, cn_update='boxplus-phi', vn_update='sum', cn_schedule='flooding', hard_out=True, num_iter=20, llr_max=20.0, v2c_callbacks=None, c2v_callbacks=None, return_state=False, precision=None, **kwargs)

Iterative belief propagation decoder for low-density parity-check (LDPC) codes and other codes on graphs.

This class defines a generic belief propagation decoder for decoding with arbitrary parity-check matrices. It can be used to iteratively estimate/recover the transmitted codeword (or information bits) based on the LLR values of the received noisy codeword observation.

By default, the decoder implements the flooding message passing algorithm [Ryan], i.e., all nodes are updated in parallel. The following check node update functions are available:

  1. boxplus

    \[y_{j \to i} = 2 \operatorname{tanh}^{-1} \left( \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{tanh} \left( \frac{x_{i' \to j}}{2} \right) \right)\]
  2. boxplus-phi

    \[y_{j \to i} = \alpha_{j \to i} \cdot \phi \left( \sum_{i' \in \mathcal{N}(j) \setminus i} \phi \left( |x_{i' \to j}|\right) \right)\]

    with \(\phi(x)=-\operatorname{log}\left(\operatorname{tanh}\left(\frac{x}{2}\right)\right)\)

  3. minsum

    \[y_{j \to i} = \alpha_{j \to i} \cdot \min_{i' \in \mathcal{N}(j) \setminus i} \left(|x_{i' \to j}|\right)\]
  4. offset-minsum

    \[y_{j \to i} = \alpha_{j \to i} \cdot \max\left( \min_{i' \in \mathcal{N}(j) \setminus i} \left(|x_{i' \to j}|\right) - \beta, 0\right)\]

where \(\beta=0.5\) and \(y_{j \to i}\) denotes the message from check node (CN) j to variable node (VN) i and \(x_{i \to j}\) the message from VN i to CN j, respectively. Further, \(\mathcal{N}(j)\) denotes all indices of VNs connected to CN j and

\[\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})\]

is the sign of the outgoing message. For further details, we refer to [Ryan] and, for the offset-corrected minsum, to [Chen].

Note that for full 5G 3GPP NR compatibility, the correct puncturing and shortening patterns must be applied (cf. [Richardson] for details); this is done by LDPC5GEncoder and LDPC5GDecoder, respectively.

If required, the decoder can be made trainable and is fully differentiable by following the concept of weighted BP [Nachmani]. For this, custom callbacks can be registered that scale the messages during decoding. Please see the corresponding tutorial notebook for details.

For numerical stability, the decoder applies LLR clipping of +/- llr_max to the input LLRs.
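As a minimal sketch of this generic interface, the following decodes with the parity-check matrix of a (7,4) Hamming code; the matrix and all values are illustrative only.

import numpy as np
import tensorflow as tf
from sionna.phy.fec.ldpc.decoding import LDPCBPDecoder

# any binary parity-check matrix works; here, a (7,4) Hamming code
pcm = np.array([[1, 1, 1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0, 1, 0],
                [1, 0, 1, 1, 0, 0, 1]])
decoder = LDPCBPDecoder(pcm, cn_update="minsum", num_iter=10)

# synthetic channel logits of shape [batch_size, n],
# biased towards the all-zero codeword
llr_ch = tf.random.normal([4, 7]) - 4.0
c_hat = decoder(llr_ch)  # hard-decided codeword bits (hard_out=True)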

Parameters:
  • pcm (ndarray) – An ndarray of shape [n-k, n] defining the parity-check matrix, consisting only of 0 or 1 entries. Can also be of type scipy.sparse.csr_matrix or scipy.sparse.csc_matrix.

  • cn_update (str, "boxplus-phi" (default) | "boxplus" | "minsum" | "offset-minsum" | "identity" | callable) – Check node update rule to be used as described above. If a callable is provided, it will be used instead as CN update. The input of the function is a ragged tensor of v2c messages of shape [num_cns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual CN degree).

  • vn_update (str, "sum" (default) | "identity" | callable) – Variable node update rule to be used. If a callable is provided, it will be used instead as VN update. The input of the function is a ragged tensor of c2v messages of shape [num_vns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual VN degree).

  • cn_schedule ("flooding" | [num_update_steps, num_active_nodes], tf.int) – Defines the CN update scheduling per BP iteration. Can be either "flooding" to update all nodes in parallel (recommended) or a 2D tensor where each row defines the num_active_nodes node indices to be updated per subiteration. In this case, each BP iteration runs num_update_steps subiterations; thus, the decoder's level of parallelization is lower and the decoding throughput usually decreases.

  • hard_out (bool, (default True)) – If True, the decoder provides hard-decided codeword bits instead of soft-values.

  • num_iter (int) – Number of decoder iterations (due to batching, no early stopping is used at the moment!).

  • llr_max (float (default 20) | None) – Internal clipping value for all internal messages. If None, no clipping is applied.

  • v2c_callbacks (None (default) | list of callables) – Each callable will be executed after each VN update with the arguments msg_vn_rag_, it, x_hat, where msg_vn_rag_ are the v2c messages as ragged tensor of shape [num_vns, None, batch_size], x_hat is the current estimate of each VN of shape [num_vns, batch_size], and it is the current iteration counter. It must return an updated version of msg_vn_rag_ of the same shape.

  • c2v_callbacks (None (default) | list of callables) – Each callable will be executed after each CN update with the arguments msg_cn_rag_ and it, where msg_cn_rag_ are the c2v messages as ragged tensor of shape [num_cns, None, batch_size] and it is the current iteration counter. It must return an updated version of msg_cn_rag_ of the same shape.

  • return_state (bool, (default False)) – If True, the internal VN messages msg_vn from the last decoding iteration are returned, and msg_vn or None needs to be given as a second input when calling the decoder. This can be used for iterative demapping and decoding.

  • precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the default precision is used.

Input:
  • llr_ch ([…,n], tf.float) – Tensor containing the channel logits/LLR values.

  • msg_v2c (None | [num_edges, batch_size], tf.float) – Tensor of VN messages representing the internal decoder state. Required only if the decoder shall use its previous internal state, e.g., for iterative detection and decoding (IDD) schemes.

Output:
  • […,n], tf.float – Tensor of same shape as llr_ch containing bit-wise soft-estimates (or hard-decided bit-values) of all codeword bits.

  • [num_edges, batch_size], tf.float – Tensor of VN messages representing the internal decoder state. Returned only if return_state is set to True.

Note

As decoding input, logits \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the learning framework, but internally log-likelihood ratios (LLRs) with the definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used (see the snippet after this note).

The decoder is not (particularly) optimized for quasi-cyclic (QC) LDPC codes and, thus, supports arbitrary parity-check matrices.

The decoder is implemented using ragged tensors [TF_ragged] to account for arbitrary node degrees. To avoid a performance degradation caused by a severe indexing overhead, the batch dimension is shifted to the last dimension during decoding.
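If the available LLRs follow the classical definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\), negating them yields the expected logits. A one-line sketch, reusing decoder and llr_ch from the example above:

c_hat = decoder(-llr_ch)  # negate LLRs given as log p(x=0)/p(x=1)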

property coderate

Coderate assuming independent parity checks

property llr_max

Max LLR value used for internal calculations and rate-matching

property n

Codeword length

property num_cns

Number of check nodes

property num_edges

Number of edges in decoding graph

property num_iter

Number of decoding iterations

property num_vns

Number of variable nodes

property pcm

Parity-check matrix of LDPC code

property return_state

Return internal decoder state for IDD schemes

class sionna.phy.fec.ldpc.decoding.LDPC5GDecoder(encoder, cn_update='boxplus-phi', vn_update='sum', cn_schedule='flooding', hard_out=True, return_infobits=True, num_iter=20, llr_max=20.0, v2c_callbacks=None, c2v_callbacks=None, prune_pcm=True, return_state=False, precision=None, **kwargs)

Iterative belief propagation decoder for 5G NR LDPC codes.

Inherits from LDPCBPDecoder and provides a wrapper for 5G compatibility, i.e., automatically handles rate-matching according to [3GPPTS38212_LDPC].

Note that for full 5G 3GPP NR compatibility, the correct puncturing and shortening patterns must be applied and, thus, the encoder object is required as input.

If required, the decoder can be made trainable and is differentiable (the training of some check node types may not be supported), following the concept of weighted BP [Nachmani].

Parameters:
  • encoder (LDPC5GEncoder) – An instance of LDPC5GEncoder containing the correct code parameters.

  • cn_update (str, "boxplus-phi" (default) | "boxplus" | "minsum" | "offset-minsum" | "identity" | callable) – Check node update rule to be used as described above. If a callable is provided, it will be used instead as CN update. The input of the function is a ragged tensor of v2c messages of shape [num_cns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual CN degree).

  • vn_update (str, "sum" (default) | "identity" | callable) – Variable node update rule to be used. If a callable is provided, it will be used instead as VN update. The input of the function is a ragged tensor of c2v messages of shape [num_vns, None, batch_size] where the second dimension is ragged (i.e., depends on the individual VN degree).

  • cn_schedule ("flooding" | "layered" | [num_update_steps, num_active_nodes], tf.int) – Defines the CN update scheduling per BP iteration. Can be either "flooding" to update all nodes in parallel (recommended), "layered" to sequentially update all CNs in the same lifting group together, or a 2D tensor where each row defines the num_active_nodes node indices to be updated per subiteration. In this case, each BP iteration runs num_update_steps subiterations; thus, the decoder's level of parallelization is lower and the decoding throughput usually decreases.

  • hard_out (bool, (default True)) – If True, the decoder provides hard-decided codeword bits instead of soft-values.

  • return_infobits (bool, (default True)) – If True, only the k info bits (soft or hard-decided) are returned. Otherwise, all n positions are returned.

  • prune_pcm (bool, (default True)) – If True, all punctured degree-1 VNs and connected check nodes are removed from the decoding graph (see [Cammerer] for details). Besides numerical differences, this should yield the same decoding result but improves the decoding throughput and reduces the memory footprint.

  • num_iter (int (default: 20)) – Number of decoder iterations (due to batching, no early stopping is used at the moment!).

  • llr_max (float (default: 20) | None) – Internal clipping value for all internal messages. If None, no clipping is applied.

  • v2c_callbacks (None (default) | list of callables) – Each callable will be executed after each VN update with the arguments msg_vn_rag_, it, x_hat, where msg_vn_rag_ are the v2c messages as ragged tensor of shape [num_vns, None, batch_size], x_hat is the current estimate of each VN of shape [num_vns, batch_size], and it is the current iteration counter. It must return an updated version of msg_vn_rag_ of the same shape.

  • c2v_callbacks (None (default) | list of callables) – Each callable will be executed after each CN update with the arguments msg_cn_rag_ and it, where msg_cn_rag_ are the c2v messages as ragged tensor of shape [num_cns, None, batch_size] and it is the current iteration counter. It must return an updated version of msg_cn_rag_ of the same shape.

  • return_state (bool, (default False)) – If True, the internal VN messages msg_vn from the last decoding iteration are returned, and msg_vn or None needs to be given as a second input when calling the decoder. This can be used for iterative demapping and decoding.

  • precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the default precision is used.

Input:
  • llr_ch ([…,n], tf.float) – Tensor containing the channel logits/LLR values.

  • msg_v2c (None | [num_edges, batch_size], tf.float) – Tensor of VN messages representing the internal decoder state. Required only if the decoder shall use its previous internal state, e.g., for iterative detection and decoding (IDD) schemes.

Output:
  • […,n] or […,k], tf.float – Tensor of same shape as llr_ch containing bit-wise soft-estimates (or hard-decided bit-values) of all n codeword bits, or only the k information bits if return_infobits is True.

  • [num_edges, batch_size], tf.float – Tensor of VN messages representing the internal decoder state. Returned only if return_state is set to True. Remark: the entire decoder state is always returned, even if return_infobits is True.

Note

As decoding input, logits \(\operatorname{log} \frac{p(x=1)}{p(x=0)}\) are assumed for compatibility with the learning framework, but internally LLRs with the definition \(\operatorname{log} \frac{p(x=0)}{p(x=1)}\) are used.

The decoder is not (particularly) optimized for quasi-cyclic (QC) LDPC codes and, thus, supports arbitrary parity-check matrices.

The decoder is implemented using ragged tensors [TF_ragged] to account for arbitrary node degrees. To avoid a performance degradation caused by a severe indexing overhead, the batch dimension is shifted to the last dimension during decoding.
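A minimal sketch combining the layered schedule with return_state for IDD-style warm starts (all parameter values and the synthetic logits are illustrative only):

import tensorflow as tf
from sionna.phy.fec.ldpc.encoding import LDPC5GEncoder
from sionna.phy.fec.ldpc.decoding import LDPC5GDecoder

encoder = LDPC5GEncoder(k=400, n=800)
decoder = LDPC5GDecoder(encoder,
                        cn_update="minsum",
                        cn_schedule="layered",  # update CNs per lifting group
                        num_iter=10,
                        return_state=True)      # expose internal messages

# synthetic channel logits, biased towards the all-zero codeword
llr_ch = tf.random.normal([32, encoder.n]) - 4.0

u_hat, msg_v2c = decoder(llr_ch, None)     # first pass, no previous state
u_hat, msg_v2c = decoder(llr_ch, msg_v2c)  # warm start from previous state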

property encoder

LDPC encoder used for rate-matching/recovery

Node Update Functions

sionna.phy.fec.ldpc.decoding.vn_update_sum(msg_c2v_rag, llr_ch, llr_clipping=None)

Variable node update function implementing the sum update.

This function implements the (extrinsic) variable node update function. It takes the sum over all incoming messages msg_c2v_rag, excluding the intrinsic (= outgoing) message itself.

Additionally, the channel LLR llr_ch is considered in each variable node.

Parameters:
  • msg_c2v_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents c2v messages.

  • llr_ch ([num_nodes, batch_size], tf.float) – Tensor containing the channel LLRs.

  • llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.

Returns:

  • msg_v2c_rag (tf.ragged) – Updated v2c messages. Ragged tensor of same shape as msg_c2v_rag.

  • x_tot (tf.float) – Marginalized LLRs per variable node of shape [num_nodes, batch_size].
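The node update functions can also be called directly. A toy sketch with arbitrary message values, following the documented shapes (2 VNs with degrees 2 and 3, batch size 1):

import tensorflow as tf
from sionna.phy.fec.ldpc.decoding import vn_update_sum

# c2v messages as ragged tensor of shape [num_vns, None, batch_size]
msg_c2v = tf.ragged.constant([[[0.5], [-1.0]],
                              [[0.2], [1.5], [-0.7]]],
                             ragged_rank=1, dtype=tf.float32)
llr_ch = tf.constant([[2.0], [-1.0]])  # shape [num_vns, batch_size]
msg_v2c, x_tot = vn_update_sum(msg_c2v, llr_ch)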

sionna.phy.fec.ldpc.decoding.cn_update_minsum(msg_v2c_rag, llr_clipping=None)

Check node update function implementing the minsum update.

The function implements

\[y_{j \to i} = \alpha_{j \to i} \cdot \min_{i' \in \mathcal{N}(j) \setminus i} \left(|x_{i' \to j}|\right)\]

where \(y_{j \to i}\) denotes the message from check node (CN) j to variable node (VN) i and \(x_{i \to j}\) the message from VN i to CN j, respectively. Further, \(\mathcal{N}(j)\) denotes all indices of VNs connected to CN j and

\[\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})\]

is the sign of the outgoing message. For further details, we refer to [Ryan] and [Chen].

Parameters:
  • msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages.

  • llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.

Returns:

msg_c2v – Ragged tensor of same shape as msg_v2c_rag containing the updated c2v messages.

Return type:

[num_nodes, None, batch_size], tf.ragged
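The same toy setup works for the check node side (2 CNs with degrees 3 and 2, batch size 1; values arbitrary):

import tensorflow as tf
from sionna.phy.fec.ldpc.decoding import cn_update_minsum

# v2c messages as ragged tensor of shape [num_cns, None, batch_size]
msg_v2c = tf.ragged.constant([[[0.5], [-1.2], [2.0]],
                              [[0.8], [-0.3]]],
                             ragged_rank=1, dtype=tf.float32)
msg_c2v = cn_update_minsum(msg_v2c)
# e.g., the message on the first edge of CN 0 is
# sign(-1.2) * sign(2.0) * min(1.2, 2.0) = -1.2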

sionna.phy.fec.ldpc.decoding.cn_update_offset_minsum(msg_v2c_rag, llr_clipping=None, offset=0.5)

Check node update function implementing the offset-corrected minsum update.

The function implements

\[y_{j \to i} = \alpha_{j \to i} \cdot \max\left( \min_{i' \in \mathcal{N}(j) \setminus i} \left(|x_{i' \to j}|\right) - \beta, 0\right)\]

where \(\beta=0.5\) and \(y_{j \to i}\) denotes the message from check node (CN) j to variable node (VN) i and \(x_{i \to j}\) the message from VN i to CN j, respectively. Further, \(\mathcal{N}(j)\) denotes all indices of VNs connected to CN j and

\[\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})\]

is the sign of the outgoing message. For further details, we refer to [Chen].

Parameters:
  • msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages.

  • llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.

  • offset (float (default 0.5)) – Offset value to be subtracted from each outgoing message.

Returns:

msg_c2v – Ragged tensor of same shape as msg_v2c_rag containing the updated c2v messages.

Return type:

[num_nodes, None, batch_size], tf.ragged

sionna.phy.fec.ldpc.decoding.cn_update_phi(msg, llr_clipping=None)

Check node update function implementing the boxplus operation.

This function implements the (extrinsic) check node update function based on the numerically more stable "_phi" function (cf. [Ryan]). It calculates the boxplus function over all incoming messages msg, excluding the intrinsic (= outgoing) message itself. The exact boxplus function is implemented by using the "_phi" function as in [Ryan].

The function implements

\[y_{j \to i} = \alpha_{j \to i} \cdot \phi \left( \sum_{i' \in \mathcal{N}(j) \setminus i} \phi \left( |x_{i' \to j}|\right) \right)\]

where \(\phi(x)=-\operatorname{log}\left(\operatorname{tanh}\left(\frac{x}{2}\right)\right)\) and \(y_{j \to i}\) denotes the message from check node (CN) j to variable node (VN) i and \(x_{i \to j}\) the message from VN i to CN j, respectively. Further, \(\mathcal{N}(j)\) denotes all indices of VNs connected to CN j and

\[\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})\]

is the sign of the outgoing message. For further details, we refer to [Ryan].

Note that for numerical stability clipping can be applied.

Parameters:
  • msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages.

  • llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.

Returns:

msg_c2v – Ragged tensor of same shape as msg_v2c_rag containing the updated c2v messages.

Return type:

[num_nodes, None, batch_size], tf.ragged

sionna.phy.fec.ldpc.decoding.cn_update_tanh(msg, llr_clipping=None)

Check node update function implementing the boxplus operation.

This function implements the (extrinsic) check node update function. It calculates the boxplus function over all incoming messages msg, excluding the intrinsic (= outgoing) message itself. The exact boxplus function is implemented by using the tanh function.

The function implements

\[y_{j \to i} = 2 \operatorname{tanh}^{-1} \left( \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{tanh} \left( \frac{x_{i' \to j}}{2} \right) \right)\]

where \(y_{j \to i}\) denotes the message from check node (CN) j to variable node (VN) i and \(x_{i \to j}\) the message from VN i to CN j, respectively. Further, \(\mathcal{N}(j)\) denotes all indices of VNs connected to CN j and

\[\alpha_{j \to i} = \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{sign}(x_{i' \to j})\]

is the sign of the outgoing message. For further details, we refer to [Ryan].

Note that for numerical stability clipping can be applied.

Parameters:
  • msg_v2c_rag ([num_nodes, None, batch_size], tf.ragged) – Ragged tensor of shape [num_nodes, None, batch_size] where the second axis is ragged (represents individual node degrees). Represents v2c messages.

  • llr_clipping (None (default) | float) – Clipping value used for internal processing. If None, no internal clipping is applied.

Returns:

msg_c2v – Ragged tensor of same shape as msg_v2c_rag containing the updated c2v messages.

Return type:

[num_nodes, None, batch_size], tf.ragged

Decoder Callbacks

The LDPCBPDecoder and LDPC5GDecoder allow registering callbacks that are executed after each iteration. This makes it possible to customize the behavior of the decoder (for example, to implement weighted BP [Nachmani]) or to track the decoding process.

A simple example that tracks decoder statistics is given below:

num_iter = 10

# init decoder stats module
dec_stats = DecoderStatisticsCallback(num_iter)

encoder = LDPC5GEncoder(k=100, # number of information bits (input)
                        n=200) # number of codeword bits (output)

decoder = LDPC5GDecoder(encoder=encoder,
                        num_iter=num_iter, # number of BP iterations
                        return_infobits=True,
                        c2v_callbacks=[dec_stats,]) # register stats callback

source = GaussianPriorSource()

# generate LLRs
noise_var = 0.1
batch_size = 1000
llr_ch = source([batch_size, encoder.n], noise_var)

# and run decoder (this can be also a loop)
decoder(llr_ch)

# and print statistics
print("Avg. iterations:", dec_stats.avg_number_iterations.numpy())
print("Success rate after n iterations:", dec_stats.success_rate.numpy())

>> Avg. iterations: 5.404
>> Success rate after n iterations: [0.258 0.235 0.637 0.638 0.638 0.638 0.638 0.638 0.638 0.638]
sionna.phy.fec.ldpc.utils.DecoderStatisticsCallback(num_iter)

Callback for the LDPCBPDecoder to track decoder statistics.

Can be registered as c2v_callbacks in the LDPCBPDecoder and the LDPC5GDecoder.

Remark: the decoding statistics are based on CN convergence, i.e., successful decoding is assumed if all check nodes are fulfilled. This overestimates the success rate as it includes cases where the decoder converges to the wrong codeword.

Parameters:

num_iter (int) – Maximum number of decoding iterations.

Input:
  • msg ([num_vns, None, batch_size], tf.ragged, tf.float) – v2c messages as ragged tensor

  • it (int) – Current number of decoding iterations

Output:

tf.ragged, tf.float – Same as msg

sionna.phy.fec.ldpc.utils.EXITCallback(num_iter)

Callback for the LDPCBPDecoder to track EXIT statistics.

Can be registered as c2v_callbacks or v2c_callbacks in the LDPCBPDecoder and the LDPC5GDecoder.

This callback requires all-zero codeword simulations.

Parameters:

num_iter (int) – Maximum number of decoding iterations.

Input:
  • msg ([num_vns, None, batch_size], tf.ragged, tf.float) – The v2c or c2v messages as ragged tensor

  • it (int) – Current number of decoding iterations

Output:

tf.ragged, tf.float – Same as msg

sionna.phy.fec.ldpc.utils.WeightedBPCallback(num_edges, precision=None, **kwargs)

Callback for the LDPCBPDecoder to enable weighted BP [Nachmani].

The BP decoder is fully differentiable and can be made trainable by following the concept of weighted BP [Nachmani], as shown in Fig. 1, leading to

\[y_{j \to i} = 2 \operatorname{tanh}^{-1} \left( \prod_{i' \in \mathcal{N}(j) \setminus i} \operatorname{tanh} \left( \frac{\textcolor{red}{w_{i' \to j}} \cdot x_{i' \to j}}{2} \right) \right)\]

where \(w_{i \to j}\) denotes the trainable weight of message \(x_{i \to j}\). Please note that the training of some check node types may not be supported.

Fig. 1: Weighted BP as proposed in [Nachmani].

Can be registered as c2v_callbacks and v2c_callbacks in the LDPCBPDecoder and the LDPC5GDecoder.

Parameters:
  • num_edges (int) – Number of edges in the decoding graph

  • precision (None (default) | "single" | "double") – Precision used for internal calculations and outputs. If set to None, the default precision is used.

Input:

msg ([num_vns, None, batch_size], tf.ragged, tf.float) – v2c messages as ragged tensor

Output:

tf.ragged, tf.float – Same as msg
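A minimal registration sketch (the (7,4) Hamming matrix is illustrative; the training loop itself is omitted):

import numpy as np
from sionna.phy.fec.ldpc.decoding import LDPCBPDecoder
from sionna.phy.fec.ldpc.utils import WeightedBPCallback

pcm = np.array([[1, 1, 1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0, 1, 0],
                [1, 0, 1, 1, 0, 0, 1]])
num_edges = int(pcm.sum())  # one weight per edge (1-entry) of the graph
wbp = WeightedBPCallback(num_edges)

decoder = LDPCBPDecoder(pcm,
                        num_iter=10,
                        hard_out=False,  # soft outputs for training
                        v2c_callbacks=[wbp])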

References:

[3GPPTS38212_LDPC]

ETSI 3GPP TS 38.212, "5G NR Multiplexing and channel coding", v.16.5.0, 2021-03.

[Ryan]

W. Ryan, "An Introduction to LDPC Codes", CRC Handbook for Coding and Signal Processing for Recording Systems, 2004.

[Richardson]

T. Richardson and S. Kudekar, "Design of low-density parity-check codes for 5G new radio," IEEE Communications Magazine, vol. 56, no. 3, 2018.

[Nachmani]

E. Nachmani, Y. Be'ery, and D. Burshtein, "Learning to decode linear codes using deep learning," IEEE Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2016.

[Cammerer]

S. Cammerer, M. Ebada, A. Elkelesh, and S. ten Brink, "Sparse graphs for belief propagation decoding of polar codes," IEEE International Symposium on Information Theory (ISIT), 2018.

[Chen]

J. Chen et al., "Reduced-complexity decoding of LDPC codes," IEEE Transactions on Communications, vol. 53, no. 8, Aug. 2005.

[TF_ragged]

TensorFlow, "Ragged tensors" guide, https://www.tensorflow.org/guide/ragged_tensor