Part 4: Toward Learned Receivers

This tutorial will guide you through Sionna, from its basic principles to the implementation of a point-to-point link with a 5G NR-compliant code and a 3GPP channel model. You will also learn how to write custom trainable layers by implementing a state-of-the-art neural receiver, and how to train and evaluate end-to-end communication systems.

The tutorial is structured in four notebooks:

  • Part I: Getting started with Sionna

  • Part II: Differentiable Communication Systems

  • Part III: Advanced Link-level Simulations

  • Part IV: Toward Learned Receivers

The official documentation provides key material on how to use Sionna and how its components are implemented.

Imports

[1]:
import os

# Configure which GPU
if os.getenv("CUDA_VISIBLE_DEVICES") is None:
    gpu_num = 0 # Use "" to use the CPU
    os.environ["CUDA_VISIBLE_DEVICES"] = f"{gpu_num}"
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# Import Sionna
try:
    import sionna as sn
    import sionna.phy
except ImportError as e:
    import sys
    if 'google.colab' in sys.modules:
        # Install Sionna in Google Colab
        print("Installing Sionna and restarting the runtime. Please run the cell again.")
        os.system("pip install sionna")
        os.kill(os.getpid(), 5)
    else:
        raise e

# Configure the notebook to use only a single GPU and allocate only as much memory as needed
# For more details, see https://www.tensorflow.org/guide/gpu
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_memory_growth(gpus[0], True)
    except RuntimeError as e:
        print(e)

# Avoid warnings from TensorFlow
tf.get_logger().setLevel('ERROR')

import numpy as np

# For saving complex Python data structures efficiently
import pickle

# For plotting
%matplotlib inline
import matplotlib.pyplot as plt

# For the implementation of the neural receiver
from tensorflow.keras import Model
from tensorflow.keras.layers import Layer, Conv2D, LayerNormalization
from tensorflow.nn import relu

# Set seed for reproducible results
sn.phy.config.seed = 42

Simulation Parameters

[2]:
# Bits per channel use
NUM_BITS_PER_SYMBOL = 2 # QPSK

# Minimum value of Eb/N0 [dB] for simulations
EBN0_DB_MIN = -3.0

# Maximum value of Eb/N0 [dB] for simulations
EBN0_DB_MAX = 5.0

# How many examples are processed by Sionna in parallel
BATCH_SIZE = 128

# Coding rate
CODERATE = 0.5

# Define the number of UT and BS antennas
NUM_UT = 1
NUM_BS = 1
NUM_UT_ANT = 1
NUM_BS_ANT = 2

# The number of transmitted streams is equal to the number of UT antennas
# in both uplink and downlink
NUM_STREAMS_PER_TX = NUM_UT_ANT

# Create an RX-TX association matrix.
# RX_TX_ASSOCIATION[i,j]=1 means that receiver i gets at least one stream
# from transmitter j. Depending on the transmission direction (uplink or downlink),
# the role of UT and BS can change.
# For example, considering a system with 2 RX and 4 TX, the RX-TX
# association matrix could be
# [ [1 , 1, 0, 0],
#   [0 , 0, 1, 1] ]
# which indicates that RX 0 receives from TX 0 and 1, and RX 1 receives from
# TX 2 and 3.
#
# In this notebook, as we have only a single transmitter and receiver,
# the RX-TX association matrix is simply:
RX_TX_ASSOCIATION = np.array([[1]])

# Instantiate a StreamManagement object
# This determines which data streams are intended for which receiver.
# In this simple setup, this is fairly easy. However, it can get more involved
# for simulations with many transmitters and receivers.
STREAM_MANAGEMENT = sn.phy.mimo.StreamManagement(RX_TX_ASSOCIATION, NUM_STREAMS_PER_TX)

RESOURCE_GRID = sn.phy.ofdm.ResourceGrid(num_ofdm_symbols=14,
                                         fft_size=76,
                                         subcarrier_spacing=30e3,
                                         num_tx=NUM_UT,
                                         num_streams_per_tx=NUM_STREAMS_PER_TX,
                                         cyclic_prefix_length=6,
                                         pilot_pattern="kronecker",
                                         pilot_ofdm_symbol_indices=[2,11])

# Carrier frequency in Hz.
CARRIER_FREQUENCY = 2.6e9

# Antenna setting
UT_ARRAY = sn.phy.channel.tr38901.Antenna(polarization="single",
                                          polarization_type="V",
                                          antenna_pattern="38.901",
                                          carrier_frequency=CARRIER_FREQUENCY)

BS_ARRAY = sn.phy.channel.tr38901.AntennaArray(num_rows=1,
                                               num_cols=int(NUM_BS_ANT/2),
                                               polarization="dual",
                                               polarization_type="cross",
                                               antenna_pattern="38.901", # Try 'omni'
                                               carrier_frequency=CARRIER_FREQUENCY)

# Nominal delay spread in [s]. Please see the CDL documentation
# about how to choose this value.
DELAY_SPREAD = 100e-9

# The `direction` determines if the UT or BS is transmitting.
# In the `uplink`, the UT is transmitting.
DIRECTION = "uplink"

# Suitable values are ["A", "B", "C", "D", "E"]
CDL_MODEL = "C"

# UT speed [m/s]. BSs are always assumed to be fixed.
# The direction of travel will be chosen randomly within the x-y plane.
SPEED = 10.0

# Configure a channel impulse response (CIR) generator for the CDL model.
CDL = sn.phy.channel.tr38901.CDL(CDL_MODEL,
                                 DELAY_SPREAD,
                                 CARRIER_FREQUENCY,
                                 UT_ARRAY,
                                 BS_ARRAY,
                                 DIRECTION,
                                 min_speed=SPEED)

Implementation of an Advanced Neural Receiver

We will implement a state-of-the-art neural receiver that operates over the entire resource grid of received symbols.

The neural receiver computes LLRs on the coded bits from the received resource grid of frequency-domain baseband symbols.
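As a side note (illustrative sketch, not part of the notebook): an LLR in this logit convention is simply LLR = ln(Pr(b=1)/Pr(b=0)), so the sigmoid maps an LLR back to a bit probability. The helper names below are ours, chosen only for this illustration.

```python
import math

def llr(p1):
    """LLR (logit) for P(b=1) = p1, i.e. ln(p1 / (1 - p1))."""
    return math.log(p1 / (1.0 - p1))

def p1_from_llr(l):
    """Recover P(b=1) from an LLR via the sigmoid."""
    return 1.0 / (1.0 + math.exp(-l))

# An uncertain bit (p=0.5) has LLR 0; the sigmoid inverts the mapping
print(round(llr(0.5), 6))                # 0.0
print(round(p1_from_llr(llr(0.9)), 6))   # 0.9
```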

Neural RX

As shown in the following figure, the neural receiver substitutes for the channel estimator, equalizer, and demapper.

[Figure: block diagrams of the LS-estimation receiver, the perfect-CSI receiver, and the neural receiver]

As in [1] and [2], a neural receiver using residual convolutional layers is implemented.

Convolutional layers are leveraged to efficiently process the 2D resource grid that is fed as an input to the neural receiver.

Residual (skip) connections are used to avoid gradient vanishing [3].
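As a toy illustration of why the identity path helps (a sketch of ours, not part of the notebook): in a plain stack of attenuating layers the gradient shrinks geometrically with depth, while a residual stack y = x + f(x) contributes a factor (1 + f'(x)) per block, so the gradient cannot collapse through the identity path.

```python
import math

def deriv(f, x, h=1e-6):
    # Central finite-difference approximation of df/dx
    return (f(x + h) - f(x - h)) / (2 * h)

def plain(x, depth=20):
    # Each layer attenuates: derivative factor is 0.5*sech^2(x) <= 0.5
    for _ in range(depth):
        x = 0.5 * math.tanh(x)
    return x

def residual(x, depth=20):
    # Skip connection: derivative factor is 1 + 0.5*sech^2(x) > 1
    for _ in range(depth):
        x = x + 0.5 * math.tanh(x)
    return x

print(deriv(plain, 0.1))     # on the order of 1e-6: the gradient has vanished
print(deriv(residual, 0.1))  # larger than 1: the identity path preserves it
```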

For convenience, a Keras layer that implements a residual block is first defined. The Keras layer that implements the neural receiver is built by stacking such blocks. The following figure shows the architecture of the neural receiver.

[Figure: architecture of the neural receiver]
[3]:
class ResidualBlock(Layer):

    def __init__(self):
        super().__init__()
        # Layer normalization is done over the last three dimensions: time, frequency, conv 'channels'
        self._layer_norm_1 = LayerNormalization(axis=(-1, -2, -3))
        self._conv_1 = Conv2D(filters=128,
                              kernel_size=[3,3],
                              padding='same',
                              activation=None)
        # Layer normalization is done over the last three dimensions: time, frequency, conv 'channels'
        self._layer_norm_2 = LayerNormalization(axis=(-1, -2, -3))
        self._conv_2 = Conv2D(filters=128,
                              kernel_size=[3,3],
                              padding='same',
                              activation=None)

    def call(self, inputs):
        z = self._layer_norm_1(inputs)
        z = relu(z)
        z = self._conv_1(z)
        z = self._layer_norm_2(z)
        z = relu(z)
        z = self._conv_2(z) # [batch size, num time samples, num subcarriers, num_channels]
        # Skip connection
        z = z + inputs
        return z

class NeuralReceiver(Layer):

    def __init__(self):
        super().__init__()
        # Input convolution
        self._input_conv = Conv2D(filters=128,
                                  kernel_size=[3,3],
                                  padding='same',
                                  activation=None)
        # Residual blocks
        self._res_block_1 = ResidualBlock()
        self._res_block_2 = ResidualBlock()
        self._res_block_3 = ResidualBlock()
        self._res_block_4 = ResidualBlock()
        # Output conv
        self._output_conv = Conv2D(filters=NUM_BITS_PER_SYMBOL,
                                   kernel_size=[3,3],
                                   padding='same',
                                   activation=None)

    def call(self, y, no):
        # Assuming a single receiver, remove the num_rx dimension
        y = tf.squeeze(y, axis=1)
        # Feeding the noise power in log10 scale helps with the performance
        no = sn.phy.utils.log10(no)
        # Stacking the real and imaginary components of the different antennas along the 'channel' dimension
        y = tf.transpose(y, [0, 2, 3, 1]) # Putting antenna dimension last
        no = sn.phy.utils.insert_dims(no, 3, 1)
        no = tf.tile(no, [1, y.shape[1], y.shape[2], 1])
        # z : [batch size, num ofdm symbols, num subcarriers, 2*num rx antenna + 1]
        z = tf.concat([tf.math.real(y),
                       tf.math.imag(y),
                       no], axis=-1)
        # Input conv
        z = self._input_conv(z)
        # Residual blocks
        z = self._res_block_1(z)
        z = self._res_block_2(z)
        z = self._res_block_3(z)
        z = self._res_block_4(z)
        # Output conv
        z = self._output_conv(z)
        # Reshape the output to the shape that the resource grid demapper expects
        z = sn.phy.utils.insert_dims(z, 2, 1)
        return z

The task of the receiver is to jointly solve, for each resource element, NUM_BITS_PER_SYMBOL binary classification problems in order to reconstruct the transmitted bits. Therefore, a natural choice for the loss function is the binary cross-entropy (BCE) applied to each bit and to each received symbol.

Remark: The LLRs computed by the demapper are logits on the transmitted bits, and can therefore be used as-is to compute the BCE without any additional processing.

Remark 2: The BCE is closely related to an achievable information rate for bit-interleaved coded modulation systems [4,5].
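To make the first remark concrete, here is a minimal pure-Python version of the per-bit BCE computed directly on LLRs (our own illustrative helper, not part of the notebook; `tf.keras.losses.BinaryCrossentropy(from_logits=True)` used below performs the equivalent computation in batched form).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_from_logits(bits, llrs):
    """Binary cross-entropy taking LLRs (logits) as-is, averaged over bits."""
    total = 0.0
    for b, l in zip(bits, llrs):
        p1 = sigmoid(l)  # the sigmoid turns the LLR into P(b=1)
        total += -(b * math.log(p1) + (1 - b) * math.log(1.0 - p1))
    return total / len(bits)

# Two maximally uncertain LLRs: the loss is ln(2) per bit
print(round(bce_from_logits([1, 0], [0.0, 0.0]), 4))  # 0.6931
```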

The next cell defines an end-to-end communication system using the neural receiver layer.

At initialization, the parameter training indicates if the system is instantiated to be trained (True) or evaluated (False).

If the system is instantiated to be trained, the outer encoder and decoder are not used as they are not required for training. Moreover, the estimated BCE is returned. This significantly reduces the computational complexity of training.

If the system is instantiated to be evaluated, the outer encoder and decoder are used, and the transmitted information bits and corresponding LLRs are returned.

[4]:
class OFDMSystemNeuralReceiver(Model): # Inherits from Keras Model

    def __init__(self, training):
        super().__init__() # Must call the Keras model initializer

        self.training = training

        n = int(RESOURCE_GRID.num_data_symbols*NUM_BITS_PER_SYMBOL) # Number of coded bits
        k = int(n*CODERATE) # Number of information bits
        self.k = k
        self.n = n

        # The binary source will create batches of information bits
        self.binary_source = sn.phy.mapping.BinarySource()

        # The encoder maps information bits to coded bits
        self.encoder = sn.phy.fec.ldpc.LDPC5GEncoder(k, n)

        # The mapper maps blocks of information bits to constellation symbols
        self.mapper = sn.phy.mapping.Mapper("qam", NUM_BITS_PER_SYMBOL)

        # The resource grid mapper maps symbols onto an OFDM resource grid
        self.rg_mapper = sn.phy.ofdm.ResourceGridMapper(RESOURCE_GRID)

        # Frequency domain channel
        self.channel = sn.phy.channel.OFDMChannel(CDL, RESOURCE_GRID,
                                                  add_awgn=True,
                                                  normalize_channel=True,
                                                  return_channel=False)

        # Neural receiver
        self.neural_receiver = NeuralReceiver()

        # Used to extract data-carrying resource elements
        self.rg_demapper = sn.phy.ofdm.ResourceGridDemapper(RESOURCE_GRID, STREAM_MANAGEMENT)

        # The decoder provides hard-decisions on the information bits
        self.decoder = sn.phy.fec.ldpc.LDPC5GDecoder(self.encoder, hard_out=True)

        # Loss function
        self.bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    @tf.function # Graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = sn.phy.utils.ebnodb2no(ebno_db,
                                    num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                                    coderate=CODERATE,
                                    resource_grid=RESOURCE_GRID)
        # The neural receiver expects `no` to have shape [batch_size]
        if len(no.shape) == 0:
            no = tf.fill([batch_size], no)

        # Transmitter
        # Outer coding is only performed if not training
        if self.training:
            codewords = self.binary_source([batch_size, NUM_UT, NUM_UT_ANT, self.n])
        else:
            bits = self.binary_source([batch_size, NUM_UT, NUM_UT_ANT, self.k])
            codewords = self.encoder(bits)
        x = self.mapper(codewords)
        x_rg = self.rg_mapper(x)

        # Channel
        y = self.channel(x_rg, no)

        # Receiver
        llr = self.neural_receiver(y, no)
        llr = self.rg_demapper(llr) # Extract data-carrying resource elements. The other LLRs are discarded
        llr = tf.reshape(llr, [batch_size, NUM_UT, NUM_UT_ANT, self.n]) # Reshape the LLRs to the shape that the outer decoder expects
        if self.training:
            loss = self.bce(codewords, llr)
            return loss
        else:
            bits_hat = self.decoder(llr)
            return bits, bits_hat

Training the Neural Receiver

The next cell implements a training loop of NUM_TRAINING_ITERATIONS iterations.

At each iteration:

  • A batch of SNRs \(E_b/N_0\) is sampled

  • A forward pass through the end-to-end system is performed within a gradient tape

  • The gradients are computed using the gradient tape, and applied using the Adam optimizer

  • A progress bar is periodically updated to follow the progress of training

After training, the weights of the model are saved in a file using pickle.

Executing the next cell will take quite a while. If you do not want to train your own neural receiver, you can download the weights here and use them later on.

[5]:
train = False # Change to True to train your own model
if train:
    # Number of iterations used for training
    NUM_TRAINING_ITERATIONS = 100000

    # Instantiating the end-to-end model for training
    model = OFDMSystemNeuralReceiver(training=True)

    # Adam optimizer (SGD variant)
    optimizer = tf.keras.optimizers.Adam()

    # Training loop
    for i in range(NUM_TRAINING_ITERATIONS):
        # Sample a batch of SNRs
        ebno_db = tf.random.uniform(shape=[BATCH_SIZE], minval=EBN0_DB_MIN, maxval=EBN0_DB_MAX)
        # Forward pass
        with tf.GradientTape() as tape:
            loss = model(BATCH_SIZE, ebno_db)
        # Computing and applying gradients
        weights = model.trainable_weights
        grads = tape.gradient(loss, weights)
        optimizer.apply_gradients(zip(grads, weights))
        # Print progress
        if i % 100 == 0:
            print(f"{i}/{NUM_TRAINING_ITERATIONS}  Loss: {loss:.2E}", end="\r")

    # Save the weights in a file
    weights = model.get_weights()
    with open('weights-ofdm-neuralrx', 'wb') as f:
        pickle.dump(weights, f)

Benchmarking the Neural Receiver

We evaluate the trained model and benchmark it against the previously introduced baselines.

We first define and evaluate the baselines.

[6]:
class OFDMSystem(Model): # Inherits from Keras Model

    def __init__(self, perfect_csi):
        super().__init__() # Must call the Keras model initializer

        self.perfect_csi = perfect_csi

        n = int(RESOURCE_GRID.num_data_symbols*NUM_BITS_PER_SYMBOL) # Number of coded bits
        k = int(n*CODERATE) # Number of information bits
        self.k = k

        # The binary source will create batches of information bits
        self.binary_source = sn.phy.mapping.BinarySource()

        # The encoder maps information bits to coded bits
        self.encoder = sn.phy.fec.ldpc.LDPC5GEncoder(k, n)

        # The mapper maps blocks of information bits to constellation symbols
        self.mapper = sn.phy.mapping.Mapper("qam", NUM_BITS_PER_SYMBOL)

        # The resource grid mapper maps symbols onto an OFDM resource grid
        self.rg_mapper = sn.phy.ofdm.ResourceGridMapper(RESOURCE_GRID)

        # Frequency domain channel
        self.channel = sn.phy.channel.OFDMChannel(CDL, RESOURCE_GRID,
                                                  add_awgn=True,
                                                  normalize_channel=True,
                                                  return_channel=True)

        # The LS channel estimator will provide channel estimates and error variances
        self.ls_est = sn.phy.ofdm.LSChannelEstimator(RESOURCE_GRID, interpolation_type="nn")

        # The LMMSE equalizer will provide soft symbols together with noise variance estimates
        self.lmmse_equ = sn.phy.ofdm.LMMSEEqualizer(RESOURCE_GRID, STREAM_MANAGEMENT)

        # The demapper produces LLRs for all coded bits
        self.demapper = sn.phy.mapping.Demapper("app", "qam", NUM_BITS_PER_SYMBOL)

        # The decoder provides hard-decisions on the information bits
        self.decoder = sn.phy.fec.ldpc.LDPC5GDecoder(self.encoder, hard_out=True)

    @tf.function # Graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = sn.phy.utils.ebnodb2no(ebno_db,
                                    num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                                    coderate=CODERATE,
                                    resource_grid=RESOURCE_GRID)

        # Transmitter
        bits = self.binary_source([batch_size, NUM_UT, RESOURCE_GRID.num_streams_per_tx, self.k])
        codewords = self.encoder(bits)
        x = self.mapper(codewords)
        x_rg = self.rg_mapper(x)

        # Channel
        y, h_freq = self.channel(x_rg, no)

        # Receiver
        if self.perfect_csi:
            h_hat, err_var = h_freq, 0.
        else:
            h_hat, err_var = self.ls_est(y, no)
        x_hat, no_eff = self.lmmse_equ(y, h_hat, err_var, no)
        llr = self.demapper(x_hat, no_eff)
        bits_hat = self.decoder(llr)
        return bits, bits_hat
[7]:
ber_plots = sn.phy.utils.PlotBER("Advanced neural receiver")

baseline_ls = OFDMSystem(False)
ber_plots.simulate(baseline_ls,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=100, # simulate until 100 block errors occurred
                   legend="Baseline: LS Estimation",
                   soft_estimates=True,
                   max_mc_iter=100, # run 100 Monte-Carlo iterations (each with batch_size samples)
                   show_fig=False);

baseline_pcsi = OFDMSystem(True)
ber_plots.simulate(baseline_pcsi,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=100, # simulate until 100 block errors occurred
                   legend="Baseline: Perfect CSI",
                   soft_estimates=True,
                   max_mc_iter=100, # run 100 Monte-Carlo iterations (each with batch_size samples)
                   show_fig=False);
EbNo [dB] |        BER |       BLER |  bit errors |    num bits | block errors |  num blocks | runtime [s] |    status
---------------------------------------------------------------------------------------------------------------------------------------
     -3.0 | 3.6858e-01 | 1.0000e+00 |       43026 |      116736 |          128 |         128 |        10.8 |reached target block errors
   -2.579 | 3.5468e-01 | 1.0000e+00 |       41404 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -2.158 | 3.4774e-01 | 1.0000e+00 |       40594 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -1.737 | 3.3398e-01 | 1.0000e+00 |       38987 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -1.316 | 3.2021e-01 | 1.0000e+00 |       37380 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -0.895 | 3.0953e-01 | 1.0000e+00 |       36133 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -0.474 | 2.9626e-01 | 1.0000e+00 |       34584 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -0.053 | 2.7653e-01 | 1.0000e+00 |       32281 |      116736 |          128 |         128 |         0.1 |reached target block errors
    0.368 | 2.5855e-01 | 1.0000e+00 |       30182 |      116736 |          128 |         128 |         0.1 |reached target block errors
    0.789 | 2.4379e-01 | 1.0000e+00 |       28459 |      116736 |          128 |         128 |         0.1 |reached target block errors
    1.211 | 2.2006e-01 | 1.0000e+00 |       25689 |      116736 |          128 |         128 |         0.1 |reached target block errors
    1.632 | 1.9551e-01 | 1.0000e+00 |       22823 |      116736 |          128 |         128 |         0.1 |reached target block errors
    2.053 | 1.6506e-01 | 1.0000e+00 |       19269 |      116736 |          128 |         128 |         0.1 |reached target block errors
    2.474 | 9.2636e-02 | 9.0625e-01 |       10814 |      116736 |          116 |         128 |         0.1 |reached target block errors
    2.895 | 2.3466e-02 | 3.5677e-01 |        8218 |      350208 |          137 |         384 |         0.2 |reached target block errors
    3.316 | 1.3559e-03 | 3.7202e-02 |        3324 |     2451456 |          100 |        2688 |         1.5 |reached target block errors
    3.737 | 1.4032e-04 | 1.8750e-03 |        1638 |    11673600 |           24 |       12800 |         7.2 |reached max iterations
    4.158 | 0.0000e+00 | 0.0000e+00 |           0 |    11673600 |            0 |       12800 |         7.3 |reached max iterations
Simulation stopped as no error occurred @ EbNo = 4.2 dB.
EbNo [dB] |        BER |       BLER |  bit errors |    num bits | block errors |  num blocks | runtime [s] |    status
---------------------------------------------------------------------------------------------------------------------------------------
     -3.0 | 2.1829e-01 | 1.0000e+00 |       25482 |      116736 |          128 |         128 |         2.7 |reached target block errors
   -2.579 | 1.9817e-01 | 1.0000e+00 |       23134 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -2.158 | 1.7092e-01 | 1.0000e+00 |       19952 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -1.737 | 1.3443e-01 | 9.9219e-01 |       15693 |      116736 |          127 |         128 |         0.1 |reached target block errors
   -1.316 | 7.5178e-02 | 8.8281e-01 |        8776 |      116736 |          113 |         128 |         0.1 |reached target block errors
   -0.895 | 1.4149e-02 | 3.6198e-01 |        4955 |      350208 |          139 |         384 |         0.2 |reached target block errors
   -0.474 | 8.1262e-04 | 2.7209e-02 |        2751 |     3385344 |          101 |        3712 |         2.1 |reached target block errors
   -0.053 | 4.4631e-05 | 9.3750e-04 |         521 |    11673600 |           12 |       12800 |         7.4 |reached max iterations
    0.368 | 2.0559e-06 | 7.8125e-05 |          24 |    11673600 |            1 |       12800 |         7.4 |reached max iterations
    0.789 | 0.0000e+00 | 0.0000e+00 |           0 |    11673600 |            0 |       12800 |         7.4 |reached max iterations
Simulation stopped as no error occurred @ EbNo = 0.8 dB.

We then instantiate and evaluate the end-to-end system equipped with the neural receiver.

[8]:
# Instantiating the end-to-end model for evaluation
model_neuralrx = OFDMSystemNeuralReceiver(training=False)

# Run one inference to build the layers before loading the weights
model_neuralrx(tf.constant(1, tf.int32), tf.constant(10.0, tf.float32))
with open('weights-ofdm-neuralrx', 'rb') as f:
    weights = pickle.load(f)
model_neuralrx.set_weights(weights)
[9]:
# Computing and plotting the BER
ber_plots.simulate(model_neuralrx,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=100,
                   legend="Neural Receiver",
                   soft_estimates=True,
                   max_mc_iter=100,
                   show_fig=True);
EbNo [dB] |        BER |       BLER |  bit errors |    num bits | block errors |  num blocks | runtime [s] |    status
---------------------------------------------------------------------------------------------------------------------------------------
     -3.0 | 2.2353e-01 | 1.0000e+00 |       26094 |      116736 |          128 |         128 |         0.3 |reached target block errors
   -2.579 | 2.0426e-01 | 1.0000e+00 |       23845 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -2.158 | 1.8083e-01 | 1.0000e+00 |       21109 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -1.737 | 1.5117e-01 | 1.0000e+00 |       17647 |      116736 |          128 |         128 |         0.1 |reached target block errors
   -1.316 | 7.8176e-02 | 9.2188e-01 |        9126 |      116736 |          118 |         128 |         0.1 |reached target block errors
   -0.895 | 2.7661e-02 | 5.3125e-01 |        6458 |      233472 |          136 |         256 |         0.2 |reached target block errors
   -0.474 | 2.0445e-03 | 6.5104e-02 |        2864 |     1400832 |          100 |        1536 |         1.1 |reached target block errors
   -0.053 | 1.1290e-04 | 2.5781e-03 |        1318 |    11673600 |           33 |       12800 |         8.8 |reached max iterations
    0.368 | 8.5749e-05 | 7.8125e-04 |        1001 |    11673600 |           10 |       12800 |         8.8 |reached max iterations
    0.789 | 1.6276e-05 | 2.3437e-04 |         190 |    11673600 |            3 |       12800 |         8.8 |reached max iterations
    1.211 | 2.3729e-05 | 1.5625e-04 |         277 |    11673600 |            2 |       12800 |         8.8 |reached max iterations
    1.632 | 1.1136e-05 | 7.8125e-05 |         130 |    11673600 |            1 |       12800 |         8.9 |reached max iterations
    2.053 | 0.0000e+00 | 0.0000e+00 |           0 |    11673600 |            0 |       12800 |         8.8 |reached max iterations
Simulation stopped as no error occurred @ EbNo = 2.1 dB.
[Figure: BER curves for the LS-estimation baseline, the perfect-CSI baseline, and the neural receiver]

Conclusion

We hope you are excited about Sionna - there is much more to be discovered:

  • TensorBoard debugging available

  • Scaling to multi-GPU simulation is simple

  • See the available tutorials for more examples

And if something is still missing - the project is open-source: you can modify, add, and extend any component at any time.

References

[1] M. Honkala, D. Korpi and J. M. J. Huttunen, “DeepRx: Fully Convolutional Deep Learning Receiver,” IEEE Transactions on Wireless Communications, vol. 20, no. 6, pp. 3925-3940, June 2021, doi: 10.1109/TWC.2021.3054520.

[2] F. Ait Aoudia and J. Hoydis, “End-to-end Learning for OFDM: From Neural Receivers to Pilotless Communication,” IEEE Transactions on Wireless Communications, doi: 10.1109/TWC.2021.3101364.

[3] K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.