Utility Functions

Linear Algebra

sionna.phy.utils.inv_cholesky(tensor)[source]

Inverse of the Cholesky decomposition of a matrix

Given a batch of \(M \times M\) Hermitian positive definite matrices \(\mathbf{A}\), this function computes \(\mathbf{L}^{-1}\), where \(\mathbf{L}\) is the Cholesky decomposition, such that \(\mathbf{A}=\mathbf{L}\mathbf{L}^{\textsf{H}}\).

Input:

tensor ([…, M, M], tf.float | tf.complex) – Input tensor of rank greater than one

Output:

[…, M, M], tf.float | tf.complex – A tensor of the same shape and type as tensor containing the inverse of the Cholesky decomposition of its last two dimensions
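
Example

A minimal usage sketch (not from the official documentation; the batch of Hermitian positive definite matrices below is constructed for illustration):

import tensorflow as tf
from sionna.phy.utils import inv_cholesky

# Build a batch of Hermitian positive definite matrices A = B B^H + I
b = tf.complex(tf.random.normal([4, 3, 3], dtype=tf.float64),
               tf.random.normal([4, 3, 3], dtype=tf.float64))
a = tf.matmul(b, b, adjoint_b=True) + tf.eye(3, dtype=b.dtype)

l_inv = inv_cholesky(a)

# L^{-1} A L^{-H} should be (numerically) the identity
i_hat = tf.matmul(tf.matmul(l_inv, a), l_inv, adjoint_b=True)
print(tf.reduce_max(tf.abs(i_hat - tf.eye(3, dtype=a.dtype))).numpy())
# close to 0 (up to numerical precision)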

sionna.phy.utils.matrix_pinv(tensor)[source]

Computes the Moore–Penrose (or pseudo) inverse of a matrix

Given a batch of \(M \times K\) matrices \(\mathbf{A}\) with rank \(K\) (i.e., linearly independent columns), the function returns \(\mathbf{A}^+\), such that \(\mathbf{A}^{+}\mathbf{A}=\mathbf{I}_K\).

The two inner dimensions are assumed to correspond to the matrix rows and columns, respectively.

Input:

tensor ([…, M, K], tf.Tensor) – Input tensor of rank greater than or equal to two

Output:

[…, K, M], tf.Tensor – A tensor of the same type as tensor containing the matrix pseudo inverse of its last two dimensions
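
Example

A short sketch (the matrix below is illustrative) verifying that \(\mathbf{A}^{+}\mathbf{A}=\mathbf{I}_K\):

import tensorflow as tf
from sionna.phy.utils import matrix_pinv

# Tall matrix with linearly independent columns (M=4, K=2)
a = tf.constant([[1., 0.],
                 [0., 1.],
                 [1., 1.],
                 [1., -1.]])
a_pinv = matrix_pinv(a)
print(a_pinv.shape)
# (2, 4)
print(tf.matmul(a_pinv, a).numpy())
# approximately the 2 x 2 identity matrix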

Metrics

sionna.phy.utils.compute_ber(b,b_hat,precision='double')[source]

Computes the bit error rate (BER) between two binary tensors

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

  • precision (str, “single” | “double” (default)) – Precision used for internal calculations and outputs

Output:

tf.float – BER
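
Example

A minimal usage sketch (values are illustrative):

import tensorflow as tf
from sionna.phy.utils import compute_ber

b = tf.constant([[1, 0, 1, 1], [0, 0, 1, 0]])
b_hat = tf.constant([[1, 1, 1, 1], [0, 0, 0, 0]])
print(compute_ber(b, b_hat).numpy())
# 0.25 (2 bit errors out of 8 bits)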

sionna.phy.utils.compute_bler(b,b_hat,precision='double')[source]

Computes the block error rate (BLER) between two binary tensors

A block error happens if at least one element of b and b_hat differ within a block. The BLER is evaluated over the last dimension of the input, i.e., all elements of the last dimension are considered to define a block.

This is also sometimes referred to as word error rate or frame error rate.

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

  • precision (str, “single” | “double” (default)) – Precision used for internal calculations and outputs

Output:

tf.float – BLER
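
Example

A minimal usage sketch (values are illustrative); each row is one block:

import tensorflow as tf
from sionna.phy.utils import compute_bler

b = tf.constant([[1, 0, 1, 1], [0, 0, 1, 0]])
b_hat = tf.constant([[1, 1, 1, 0], [0, 0, 1, 0]])
print(compute_bler(b, b_hat).numpy())
# 0.5 (only the first of the two blocks contains errors)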

sionna.phy.utils.compute_ser(s,s_hat,precision='double')[source]

Computes the symbol error rate (SER) between two integer tensors

Input:
  • s (tf.float or tf.int) – A tensor of arbitrary shape filled with integers

  • s_hat (tf.float or tf.int) – A tensor like s

  • precision (str, “single” | “double” (default)) – Precision used for internal calculations and outputs

Output:

tf.float – SER

sionna.phy.utils.count_block_errors(b,b_hat)[source]

Counts the number of block errors between two binary tensors

A block error happens if at least one element of b and b_hat differ within a block. Block errors are counted over the last dimension of the input, i.e., all elements of the last dimension are considered to define a block.

This is also sometimes referred to as word error rate or frame error rate.

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

Output:

tf.int64 – Number of block errors

sionna.phy.utils.count_errors(b,b_hat)[source]

Counts the number of bit errors between two binary tensors

Input:
  • b (tf.float or tf.int) – A tensor of arbitrary shape filled with ones and zeros

  • b_hat (tf.float or tf.int) – A tensor like b

Output:

tf.int64 – Number of bit errors
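
Example

A minimal usage sketch (values are illustrative) contrasting bit- and block-error counting:

import tensorflow as tf
from sionna.phy.utils import count_errors, count_block_errors

b = tf.constant([[1, 0, 1, 1], [0, 0, 1, 0]])
b_hat = tf.constant([[1, 1, 1, 1], [0, 0, 1, 0]])
print(count_errors(b, b_hat).numpy())
# 1 (a single flipped bit)
print(count_block_errors(b, b_hat).numpy())
# 1 (the flipped bit makes the first block erroneous)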

Miscellaneous

sionna.phy.utils.dbm_to_watt(x_dbm,precision=None)[source]

Converts the input [dBm] to Watt

Input:
  • x_dbm (tf.float) – Input value [dBm]

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:

tf.float – Input value converted to Watt
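
Example

A quick sanity check (input values are illustrative): 30 dBm corresponds to 1 W, and 0 dBm to 1 mW.

import tensorflow as tf
from sionna.phy.utils import dbm_to_watt

print(dbm_to_watt(tf.constant([30., 0.])).numpy())
# [1.e+00 1.e-03]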

class sionna.phy.utils.db_to_lin(x,precision=None)[source]

Converts the input [dB] to linear scale

Input:
  • x (tf.float) – Input value [dB]

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:

tf.float – Input value converted to linear scale

class sionna.phy.utils.DeepUpdateDict[source]

Class inheriting from dict enabling nested merging of the dictionary with a new one

deep_update(delta,stop_at_keys=())[source]

Merges self with the input delta in a nested fashion. In case of conflict, the values of the new dictionary prevail. The two dictionaries are merged at the intermediate keys stop_at_keys, if provided.

Input:
  • delta (dict) – Dictionary to be merged with self

  • stop_at_keys (tuple) – Tuple of keys at which the subtree of delta replaces the corresponding subtree of self

Example

from sionna.phy.utils import DeepUpdateDict

# Merge without conflicts
dict1 = DeepUpdateDict({'a': 1, 'b': {'b1': 10, 'b2': 20}})
dict_delta1 = {'c': -2, 'b': {'b3': 30}}
dict1.deep_update(dict_delta1)
print(dict1)
# {'a': 1, 'b': {'b1': 10, 'b2': 20, 'b3': 30}, 'c': -2}

# Compare against the classic "update" method, which is not nested
dict1 = DeepUpdateDict({'a': 1, 'b': {'b1': 10, 'b2': 20}})
dict1.update(dict_delta1)
print(dict1)
# {'a': 1, 'b': {'b3': 30}, 'c': -2}

# Handle key conflicts
dict2 = DeepUpdateDict({'a': 1, 'b': {'b1': 10, 'b2': 20}})
dict_delta2 = {'a': -2, 'b': {'b1': {'f': 3, 'g': 4}}}
dict2.deep_update(dict_delta2)
print(dict2)
# {'a': -2, 'b': {'b1': {'f': 3, 'g': 4}, 'b2': 20}}

# Merge at intermediate keys
dict2 = DeepUpdateDict({'a': 1, 'b': {'b1': 10, 'b2': 20}})
dict2.deep_update(dict_delta2, stop_at_keys='b')
print(dict2)
# {'a': -2, 'b': {'b1': {'f': 3, 'g': 4}}}

sionna.phy.utils.dict_keys_to_int(x)[source]

Converts the string keys of an input dictionary to integers whenever possible

Input:

x (dict) – Input dictionary

Output:

dict – Dictionary with integer keys

Example

from sionna.phy.utils import dict_keys_to_int

dict_in = {'1': {'2': [45, '3']}, '4.3': 6, 'd': [5, '87']}
print(dict_keys_to_int(dict_in))
# {1: {'2': [45, '3']}, '4.3': 6, 'd': [5, '87']}

sionna.phy.utils.ebnodb2no(ebno_db,num_bits_per_symbol,coderate,resource_grid=None,precision=None)[source]

Computes the noise variance No for a given Eb/No in dB

The function takes into account the number of coded bits per constellation symbol, the coderate, as well as possible additional overheads related to OFDM transmissions, such as the cyclic prefix and pilots.

The value of No is computed according to the following expression

\[N_o = \left(\frac{E_b}{N_o} \frac{r M}{E_s}\right)^{-1}\]

where \(2^M\) is the constellation size, i.e., \(M\) is the average number of coded bits per constellation symbol, \(E_s=1\) is the average energy per constellation symbol, \(r\in(0,1]\) is the coderate, \(E_b\) is the energy per information bit, and \(N_o\) is the noise power spectral density. For OFDM transmissions, \(E_s\) is scaled according to the ratio between the total number of resource elements in a resource grid with non-zero energy and the number of resource elements used for data transmission. The energy additionally transmitted during the cyclic prefix is also taken into account, as well as the number of transmitted streams per transmitter.

Input:
  • ebno_db (float) – Eb/No value in dB

  • num_bits_per_symbol (int) – Number of bits per symbol

  • coderate (float) – Coderate

  • resource_grid (None (default) | ResourceGrid) – An (optional) resource grid for OFDM transmissions

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:

tf.float – Value of \(N_o\) in linear scale
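
Example

A worked sanity check of the expression above (no resource grid): for QPSK (\(M=2\)), coderate \(r=1/2\), and \(E_b/N_o = 10\) dB, one obtains \(N_o = (10 \cdot 0.5 \cdot 2)^{-1} = 0.1\).

from sionna.phy.utils import ebnodb2no

no = ebnodb2no(ebno_db=10.0, num_bits_per_symbol=2, coderate=0.5)
print(no.numpy())
# 0.1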

sionna.phy.utils.complex_normal(shape,var=1.0,precision=None)[source]

Generates a tensor of complex normal random variables

Input:
  • shape (tf.shape or list) – Desired shape

  • var (float) – Total variance, i.e., each complex dimension has variance var/2

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:

shape, tf.complex – Tensor of complex normal random variables
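
Example

A short sketch checking the variance convention (the sample size is illustrative):

import tensorflow as tf
from sionna.phy.utils import complex_normal

x = complex_normal([100000], var=2.0)
# Real and imaginary parts each carry var/2 = 1.0
print(tf.math.reduce_variance(tf.math.real(x)).numpy())
# approximately 1.0
print(tf.math.reduce_variance(tf.math.imag(x)).numpy())
# approximately 1.0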

sionna.phy.utils.hard_decisions(llr)[source]

Transforms LLRs into hard decisions

Positive values are mapped to \(1\). Nonpositive values are mapped to \(0\).

Input:

llr (any non-complex tf.DType) – Tensor of LLRs

Output:

Same shape and dtype as llr – Hard decisions
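
Example

A minimal usage sketch (LLR values are illustrative); note that the value 0 is mapped to 0:

import tensorflow as tf
from sionna.phy.utils import hard_decisions

llr = tf.constant([-1.3, 0.0, 2.7, -0.1])
print(hard_decisions(llr).numpy())
# [0. 0. 1. 0.]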

class sionna.phy.utils.Interpolate[source]

Class template for interpolating data defined on unstructured or rectangular grids. Used in PHYAbstraction for BLER and SNR interpolation.

abstract struct(z,x,y,x_interp,y_interp,**kwargs)[source]

Interpolates data structured in rectangular grids

Input:
  • z ([N, M], array) – Co-domain sample values. Informally, z = f(x, y)

  • x ([N], array) – First coordinate of the domain sample values

  • y ([M], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, \(L\gg N\)

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, \(J\gg M\)

  • kwargs – Additional interpolation parameters

Output:

z_interp ([L, J], np.array) – Interpolated data

abstract unstruct(z,x,y,x_interp,y_interp,**kwargs)[source]

Interpolates unstructured data

Input:
  • z ([N], array) – Co-domain sample values. Informally, z = f(x, y)

  • x ([N], array) – First coordinate of the domain sample values

  • y ([N], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, \(L\gg N\)

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, \(J\gg N\)

  • griddata_method (“linear” | “nearest” | “cubic”) – Interpolation method. See Scipy’s interpolate.griddata for more details

Output:

z_interp ([L, J], np.array) – Interpolated data

class sionna.phy.utils.lin_to_db(x,precision=None)[source]

Converts the input in linear scale to dB scale

Input:
  • x (tf.float) – Input value in linear scale

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:

tf.float – Input value converted to [dB]
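
Example

A quick round-trip sketch together with db_to_lin (values are illustrative):

import tensorflow as tf
from sionna.phy.utils import db_to_lin, lin_to_db

x_db = lin_to_db(tf.constant(2.0))
print(x_db.numpy())
# approximately 3.0103
print(db_to_lin(x_db).numpy())
# approximately 2.0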

sionna.phy.utils.log2(x)[source]

TensorFlow implementation of NumPy’s log2 function

Simple extension to tf.experimental.numpy.log2 which casts the result to the dtype of the input. For more details, see the TensorFlow and NumPy documentation.

sionna.phy.utils.log10(x)[source]

TensorFlow implementation of NumPy’s log10 function

Simple extension to tf.experimental.numpy.log10 which casts the result to the dtype of the input. For more details, see the TensorFlow and NumPy documentation.

class sionna.phy.utils.MCSDecoder(*args,precision=None,**kwargs)[source]

Class template for mapping a Modulation and Coding Scheme (MCS) index to the corresponding modulation order, i.e., number of bits per symbol, and coderate.

Input:
  • mcs_index ([…], tf.int32) – MCS index

  • mcs_table_index ([…], tf.int32) – MCS table index. Different tables contain different mappings.

  • mcs_category ([…], tf.int32) – Table category which may correspond, e.g., to uplink or downlink transmission

  • check_index_validity (bool (default: True)) – If True, a ValueError is thrown if the input MCS indices are not valid for the given configuration

Output:
  • modulation_order ([…], tf.int32) – Modulation order corresponding to the input MCS index

  • coderate ([…], tf.float) – Coderate corresponding to the input MCS index

sionna.phy.utils.scalar_to_shaped_tensor(inp,dtype,shape)[source]

Converts a scalar input to a tensor of specified shape, or validates and casts an existing input tensor. If the input is a scalar, creates a tensor of the specified shape filled with that value. Otherwise, verifies that the input tensor matches the required shape and casts it to the specified dtype.

Input:
  • inp (int | float | bool | tf.Tensor) – Input value. If scalar (int, float, bool, or shapeless tensor), it will be used to fill a new tensor. If a shaped tensor, its shape must match the specified shape.

  • dtype (tf.dtype) – Desired data type of the output tensor

  • shape (list) – Required shape of the output tensor

Output:

tf.Tensor – A tensor of shape shape and type dtype, either filled with the scalar input value or the input tensor cast to the specified dtype
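
Example

A minimal usage sketch (shapes and values are illustrative):

import tensorflow as tf
from sionna.phy.utils import scalar_to_shaped_tensor

# Scalar input: a new tensor of the requested shape is filled
x = scalar_to_shaped_tensor(0.5, tf.float32, [2, 3])
print(x.numpy())
# [[0.5 0.5 0.5]
#  [0.5 0.5 0.5]]

# Shaped input: validated against the shape and cast to the dtype
y = scalar_to_shaped_tensor(tf.ones([2, 3], tf.int32), tf.float32, [2, 3])
print(y.dtype)
# <dtype: 'float32'>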

sionna.phy.utils.sim_ber(mc_fun,ebno_dbs,batch_size,max_mc_iter,soft_estimates=False,num_target_bit_errors=None,num_target_block_errors=None,target_ber=None,target_bler=None,early_stop=True,graph_mode=None,distribute=None,verbose=True,forward_keyboard_interrupt=True,callback=None,precision=None)[source]

Simulates until target number of errors is reached and returns BER/BLER

The simulation continues with the next SNR point if either num_target_bit_errors bit errors or num_target_block_errors block errors is achieved. Further, it continues with the next SNR point after max_mc_iter batches of size batch_size have been simulated. Early stopping allows the simulation to stop after the first error-free SNR point or after reaching a certain target_ber or target_bler.

Input:
  • mc_fun (callable) – Callable that yields the transmitted bits b and the receiver’s estimate b_hat for a given batch_size and ebno_db. If soft_estimates is True, b_hat is interpreted as logits.

  • ebno_dbs ([n], tf.float) – A tensor containing the SNR points to be evaluated

  • batch_size (tf.int) – Batch-size for evaluation

  • max_mc_iter (tf.int) – Maximum number of Monte-Carlo iterations per SNR point

  • soft_estimates (bool, (default False)) – If True, b_hat is interpreted as logits and a hard decision is applied internally.

  • num_target_bit_errors (None (default) | tf.int32) – Target number of bit errors per SNR point until the simulation continues to the next SNR point

  • num_target_block_errors (None (default) | tf.int32) – Target number of block errors per SNR point until the simulation continues

  • target_ber (None (default) | tf.float32) – The simulation stops after the first SNR point which achieves a bit error rate lower than target_ber. This requires early_stop to be True.

  • target_bler (None (default) | tf.float32) – The simulation stops after the first SNR point which achieves a block error rate lower than target_bler. This requires early_stop to be True.

  • early_stop (bool, (default True)) – If True, the simulation stops after the first error-free SNR point (i.e., no error occurred after max_mc_iter Monte-Carlo iterations).

  • graph_mode (None (default) | “graph” | “xla”) – A string describing the execution mode of mc_fun. If None, mc_fun is executed as is.

  • distribute (None (default) | “all” | list of indices | tf.distribute.strategy) – Distributes the simulation on multiple parallel devices. If None, multi-device simulations are deactivated. If “all”, the workload will be automatically distributed across all available GPUs via the tf.distribute.MirroredStrategy. If an explicit list of indices is provided, only the GPUs with the given indices will be used. Alternatively, a custom tf.distribute.strategy can be provided. Note that the same batch_size will be used for all GPUs in parallel, but the number of Monte-Carlo iterations max_mc_iter will be scaled by the number of devices such that the same number of total samples is simulated. However, all stopping conditions are still in place, which can cause slight differences in the total number of simulated samples.

  • verbose (bool, (default True)) – If True, the current progress will be printed.

  • forward_keyboard_interrupt (bool, (default True)) – If False, KeyboardInterrupts will be caught internally and not forwarded (e.g., will not stop outer loops). If True, the simulation ends and returns the intermediate simulation results.

  • callback (None (default) | callable) – If specified, callback will be called after each Monte-Carlo step. Can be used for logging or advanced early stopping. The input signature of callback must match callback(mc_iter, snr_idx, ebno_dbs, bit_errors, block_errors, nb_bits, nb_blocks), where mc_iter denotes the number of processed batches for the current SNR point, snr_idx is the index of the current SNR point, ebno_dbs is the vector of all SNR points to be evaluated, bit_errors the vector of numbers of bit errors for each SNR point, block_errors the vector of numbers of block errors, nb_bits the vector of numbers of simulated bits, and nb_blocks the vector of numbers of simulated blocks, respectively. If callback returns sim_ber.CALLBACK_NEXT_SNR, early stopping is detected and the simulation will continue with the next SNR point. If callback returns sim_ber.CALLBACK_STOP, the simulation is stopped immediately. For sim_ber.CALLBACK_CONTINUE, the simulation continues.

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:
  • ber ([n], tf.float) – Bit-error rate

  • bler ([n], tf.float) – Block-error rate

Note

This function is implemented based on tensors to allow full compatibility with tf.function(). However, to run simulations in graph mode, the provided mc_fun must use the @tf.function() decorator.
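
Example

A hedged sketch of how mc_fun can be wired up; the uncoded BPSK link over AWGN below is purely illustrative and not part of the API:

import tensorflow as tf
from sionna.phy.utils import sim_ber, ebnodb2no

@tf.function  # enables simulation in graph mode
def mc_fun(batch_size, ebno_db):
    # Illustrative uncoded BPSK transmission over an AWGN channel
    no = tf.cast(ebnodb2no(ebno_db, num_bits_per_symbol=1, coderate=1.0),
                 tf.float32)
    b = tf.cast(tf.random.uniform([batch_size, 1024], 0, 2, tf.int32),
                tf.float32)
    x = 2. * b - 1.                       # BPSK mapping
    y = x + tf.sqrt(no) * tf.random.normal(tf.shape(x))
    b_hat = tf.cast(y > 0., tf.float32)   # hard decisions
    return b, b_hat

ber, bler = sim_ber(mc_fun,
                    ebno_dbs=tf.range(0., 6., 2.),
                    batch_size=1000,
                    max_mc_iter=10,
                    num_target_bit_errors=1000)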

class sionna.phy.utils.SingleLinkChannel(num_bits_per_symbol,num_info_bits,target_coderate,precision=None)[source]

Class template for simulating single-link, i.e., single-carrier and single-stream, channels. Used for generating BLER tables in new_bler_table().

Parameters:
  • num_bits_per_symbol (int) – Number of bits per symbol, i.e., modulation order

  • num_info_bits (int) – Number of information bits per code block

  • target_coderate (float) – Target code rate, i.e., the target ratio between the information and the coded bits within a block

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Input:
  • batch_size (int) – Size of the simulation batches

  • ebno_db (float) – Eb/No value in dB

Output:
  • bits ([batch_size, num_info_bits], int) – Transmitted bits

  • bits_hat ([batch_size, num_info_bits], int) – Decoded bits

property num_bits_per_symbol

Get/set the modulation order

Type:

int

property num_coded_bits

Number of coded bits in a code block

Type:

int (read-only)

property num_info_bits

Get/set the number of information bits per code block

Type:

int

set_num_coded_bits()[source]

Computes the number of coded bits per code block

property target_coderate

Get/set the target coderate

Type:

float

class sionna.phy.utils.SplineGriddataInterpolation[source]

Interpolates data defined on rectangular or unstructured grids via Scipy’s interpolate.RectBivariateSpline and interpolate.griddata, respectively. It inherits from Interpolate.

struct(z,x,y,x_interp,y_interp,spline_degree=1,**kwargs)[source]

Performs spline interpolation via Scipy’s interpolate.RectBivariateSpline

Input:
  • z ([N, M], array) – Co-domain sample values. Informally, z = f(x, y).

  • x ([N], array) – First coordinate of the domain sample values

  • y ([M], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, \(L\gg N\).

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, \(J\gg M\).

  • spline_degree (int (default: 1)) – Spline interpolation degree

Output:

z_interp ([L, J], np.array) – Interpolated data

unstruct(z,x,y,x_interp,y_interp,griddata_method='linear',**kwargs)[source]

Interpolates unstructured data via Scipy’s interpolate.griddata

Input:
  • z ([N], array) – Co-domain sample values. Informally, z = f(x, y).

  • x ([N], array) – First coordinate of the domain sample values

  • y ([N], array) – Second coordinate of the domain sample values

  • x_interp ([L], array) – Interpolation grid for the first (x) coordinate. Typically, \(L\gg N\).

  • y_interp ([J], array) – Interpolation grid for the second (y) coordinate. Typically, \(J\gg N\).

  • griddata_method (“linear” | “nearest” | “cubic”) – Interpolation method. See Scipy’s interpolate.griddata for more details.

Output:

z_interp ([L, J], np.array) – Interpolated data
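
Example

A minimal usage sketch of struct under the signature documented above (the coarse grid and the function f(x, y) = x + 10y are illustrative):

import numpy as np
from sionna.phy.utils import SplineGriddataInterpolation

# Coarse rectangular grid of samples
x = np.array([0., 1., 2.])
y = np.array([0., 1.])
z = x[:, None] + 10. * y[None, :]   # [N, M] = [3, 2]

# Finer interpolation grids
x_interp = np.linspace(0., 2., 5)
y_interp = np.linspace(0., 1., 3)

interp = SplineGriddataInterpolation()
z_interp = interp.struct(z, x, y, x_interp, y_interp, spline_degree=1)
print(z_interp.shape)
# (5, 3)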

sionna.phy.utils.to_list(x)[source]

Converts the input to a list

Input:

x (list | float | int | str | None) – Input to be converted to a list

Output:

list – Input converted to a list

class sionna.phy.utils.TransportBlock(*args,precision=None,**kwargs)[source]

Class template for computing the number and size (in bits) of code blocks within a transport block, given the modulation order, coderate, and total number of coded bits of the transport block. Used in PHYAbstraction.

Input:
  • modulation_order ([…], tf.int32) – Modulation order, i.e., number of bits per symbol

  • target_rate ([…], tf.float32) – Target coderate

  • num_coded_bits ([…], tf.float32) – Total number of coded bits across all codewords

Output:
  • cb_size ([…], tf.int32) – Code block (CB) size, i.e., number of information bits per code block

  • num_cb ([…], tf.int32) – Number of code blocks that the transport block is segmented into

sionna.phy.utils.watt_to_dbm(x_w,precision=None)[source]

Converts the input [Watt] to dBm

Input:
  • x_w (tf.float) – Input value [Watt]

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

Output:

tf.float – Input value converted to dBm

Numerics

sionna.phy.utils.bisection_method(f,left,right,regula_falsi=False,expand_to_left=True,expand_to_right=True,step_expand=2.0,eps_x=1e-05,eps_y=0.0001,max_n_iter=100,return_brackets=False,precision=None,**kwargs)[source]

Implements the classic bisection method for estimating the roots of batches of decreasing univariate functions

Input:
  • f (callable) – Generic function handle that takes batched inputs and returns batched outputs. Applies a different decreasing univariate function to each of its inputs. Must accept input batches of the same shape as left and right.

  • left ([…], tf.float) – Left end point of the initial search interval, for each batch. The root is guessed to be contained within [left, right].

  • right ([…], tf.float) – Right end point of the initial search interval, for each batch

  • regula_falsi (bool (default: False)) – If True, the regula falsi method is employed to determine the next root guess. This guess is computed as the x-intercept of the line passing through the two points formed by the function evaluated at the current search interval end points. Else, the next root guess is computed as the middle point of the current search interval.

  • expand_to_left (bool (default: True)) – If True and f(left) is negative, then left is decreased by a geometric progression of step_expand until f becomes positive, for each batch. If False, then left is not decreased.

  • expand_to_right (bool (default: True)) – If True and f(right) is positive, then right is increased by a geometric progression of step_expand until f becomes negative, for each batch. If False, then right is not increased.

  • step_expand (float (default: 2.)) – See expand_to_left and expand_to_right

  • eps_x (float (default: 1e-5)) – Convergence criterion. The search terminates after max_n_iter iterations or if, for each batch, either the search interval length is smaller than eps_x or the function absolute value is smaller than eps_y.

  • eps_y (float (default: 1e-4)) – Convergence criterion. See eps_x.

  • max_n_iter (int (default: 100)) – Maximum number of iterations

  • return_brackets (bool (default: False)) – If True, the final values of the search interval end points left and right are returned

  • precision (None (default) | “single” | “double”) – Precision used for internal calculations and outputs. If set to None, the global precision is used.

  • kwargs (dict) – Additional arguments for function f

Output:
  • x_opt ([…], tf.float) – Estimated roots of the input batch of functions f

  • f_opt ([…], tf.float) – Value of function f evaluated at x_opt

  • left ([…], tf.float) – Final value of the left end points of the search intervals. Only returned if return_brackets is True.

  • right ([…], tf.float) – Final value of the right end points of the search intervals. Only returned if return_brackets is True.

Example

import tensorflow as tf
from sionna.phy.utils import bisection_method

# Define a decreasing univariate function of x
def f(x, a):
    return -tf.math.pow(x - a, 3)

# Initial search interval
left, right = 0., 2.

# Input parameter a for function f
a = 3

# Perform the bisection method
x_opt, _ = bisection_method(f, left, right, eps_x=1e-4, eps_y=0, a=a)
print(x_opt.numpy())
# 2.9999084

Plotting

sionna.phy.utils.plotting.plot_ber(snr_db,ber,legend='',ylabel='BER',title='Bit Error Rate',ebno=True,is_bler=None,xlim=None,ylim=None,save_fig=False,path='')[source]

Plots error rates

Input:
  • snr_db (numpy.ndarray or list of numpy.ndarray) – Array defining the simulated SNR points

  • ber (numpy.ndarray or list of numpy.ndarray) – Array defining the BER/BLER per SNR point

  • legend (str, (default “”), or list of str) – Legend entries

  • ylabel (str, (default “BER”)) – y-label

  • title (str, (default “Bit Error Rate”)) – Figure title

  • ebno (bool, (default True)) – If True, the x-label is set to “EbNo [dB]” instead of “EsNo [dB]”.

  • is_bler (bool, (default False)) – If True, the corresponding curve is dashed.

  • xlim (None (default) | (float,float)) – x-axis limits

  • ylim (None (default) | (float,float)) – y-axis limits

  • save_fig (bool, (default False)) – If True, the figure is saved as .png.

  • path (str, (default “”)) – Path to save the figure (if save_fig is True)

Output:
  • fig (matplotlib.figure.Figure) – Figure handle

  • ax (matplotlib.axes.Axes) – Axes object

class sionna.phy.utils.plotting.PlotBER(title='Bit/Block Error Rate')[source]

Provides a plotting object to simulate and store BER/BLER curves

Parameters:

title (str, (default “Bit/Block Error Rate”)) – Figure title

Input:
  • snr_db (numpy.ndarray or list of numpy.ndarray, float) – SNR values

  • ber (numpy.ndarray or list of numpy.ndarray, float) – BER values corresponding to snr_db

  • legend (str or list of str) – Legend entries

  • is_bler (bool or list of bool, (default [])) – If True, ber will be interpreted as BLER.

  • show_ber (bool, (default True)) – If True, BER curves will be plotted.

  • show_bler (bool, (default True)) – If True, BLER curves will be plotted.

  • xlim (None (default) | (float,float)) – x-axis limits

  • ylim (None (default) | (float,float)) – y-axis limits

  • save_fig (bool, (default False)) – If True, the figure is saved as .png.

  • path (str, (default “”)) – Path to save the figure (if save_fig is True)

Tensors

sionna.phy.utils.expand_to_rank(tensor,target_rank,axis=-1)[source]

Inserts as many axes into a tensor as needed to achieve a desired rank

This operation inserts additional dimensions into a tensor starting at axis, so that the resulting tensor has rank target_rank. The dimension index follows Python indexing rules, i.e., zero-based, where a negative index is counted backward from the end.

Input:
  • tensor (tf.Tensor) – Input tensor

  • target_rank (int) – Rank of the output tensor. If target_rank is smaller than the rank of tensor, the function does nothing.

  • axis (int) – Dimension index at which to expand the shape of tensor. Given a tensor of D dimensions, axis must be within the range [-(D+1), D] (inclusive).

Output:

tf.Tensor – A tensor with the same data as tensor, with target_rank - rank(tensor) additional dimensions inserted at the index specified by axis. If target_rank <= rank(tensor), tensor is returned.
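
Example

A minimal usage sketch (shapes are illustrative):

import tensorflow as tf
from sionna.phy.utils import expand_to_rank

x = tf.zeros([5, 4])
print(expand_to_rank(x, 4, axis=-1).shape)
# (5, 4, 1, 1)
print(expand_to_rank(x, 4, axis=0).shape)
# (1, 1, 5, 4)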

sionna.phy.utils.flatten_dims(tensor,num_dims,axis)[source]

Flattens a specified set of dimensions of a tensor

This operation flattens num_dims dimensions of a tensor starting at a given axis.

Input:
  • tensor (tf.Tensor) – Input tensor

  • num_dims (int) – Number of dimensions to combine. Must be greater than or equal to two and less than or equal to the rank of tensor.

  • axis (int) – Index of the dimension from which to start

Output:

tf.Tensor – A tensor of the same type as tensor with num_dims - 1 fewer dimensions, but the same number of elements

sionna.phy.utils.flatten_last_dims(tensor,num_dims=2)[source]

Flattens the last n dimensions of a tensor

This operation flattens the last num_dims dimensions of a tensor. It is a simplified version of the function flatten_dims.

Input:
  • tensor (tf.Tensor) – Input tensor

  • num_dims (int) – Number of dimensions to combine. Must be greater than or equal to two and less than or equal to the rank of tensor.

Output:

tf.Tensor – A tensor of the same type as tensor with num_dims - 1 fewer dimensions, but the same number of elements
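
Example

A minimal usage sketch (shapes are illustrative):

import tensorflow as tf
from sionna.phy.utils import flatten_last_dims

x = tf.zeros([8, 4, 3])
print(flatten_last_dims(x, 2).shape)
# (8, 12)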

sionna.phy.utils.insert_dims(tensor,num_dims,axis=-1)[source]

Adds multiple length-one dimensions to a tensor

This operation is an extension to TensorFlow’s expand_dims function. It inserts num_dims dimensions of length one starting from the dimension axis of a tensor. The dimension index follows Python indexing rules, i.e., zero-based, where a negative index is counted backward from the end.

Input:
  • tensor (tf.Tensor) – Input tensor

  • num_dims (int) – Number of dimensions to add

  • axis (int) – Dimension index at which to expand the shape of tensor. Given a tensor of D dimensions, axis must be within the range [-(D+1), D] (inclusive).

Output:

tf.Tensor – A tensor with the same data as tensor, with num_dims additional dimensions inserted at the index specified by axis
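
Example

A minimal usage sketch (shapes are illustrative):

import tensorflow as tf
from sionna.phy.utils import insert_dims

x = tf.zeros([5, 4])
print(insert_dims(x, 2, axis=1).shape)
# (5, 1, 1, 4)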

sionna.phy.utils.split_dim(tensor,shape,axis)[source]

Reshapes a dimension of a tensor into multiple dimensions

This operation splits the dimension axis of a tensor into multiple dimensions according to shape.

Input:
  • tensor (tf.Tensor) – Input tensor

  • shape (list | TensorShape) – Shape to which the dimension should be reshaped

  • axis (int) – Index of the axis to be reshaped

Output:

tf.Tensor – A tensor of the same type as tensor with len(shape) - 1 additional dimensions, but the same number of elements
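
Example

A minimal usage sketch (shapes are illustrative):

import tensorflow as tf
from sionna.phy.utils import split_dim

x = tf.zeros([8, 12])
print(split_dim(x, [3, 4], axis=1).shape)
# (8, 3, 4)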

sionna.phy.utils.diag_part_axis(tensor,axis,**kwargs)[source]

Extracts the batched diagonal part of a batched tensor over the specified axis

This is an extension of TensorFlow’s tf.linalg.diag_part function, which extracts the diagonal over the last two dimensions. This behavior can be reproduced by setting axis = -2.

Input:
  • tensor ([s(1), …, s(N)], any) – A tensor of rank greater than or equal to two (\(N\ge 2\))

  • axis (int) – Axis index starting from which the diagonal part is extracted

  • kwargs (dict) – Optional inputs for TensorFlow’s linalg.diag_part, such as the diagonal offset k or the padding value padding_value. See TensorFlow’s linalg.diag_part for more details.

Output:

[s(1), …, min(s(axis), s(axis+1)), s(axis+2), …, s(N)], any – Tensor containing the diagonal part of the input tensor over axes (axis, axis+1)

Example

import tensorflow as tf
from sionna.phy.utils import diag_part_axis

a = tf.reshape(tf.range(27), [3, 3, 3])
print(a.numpy())
# [[[ 0  1  2]
#   [ 3  4  5]
#   [ 6  7  8]]
#
#  [[ 9 10 11]
#   [12 13 14]
#   [15 16 17]]
#
#  [[18 19 20]
#   [21 22 23]
#   [24 25 26]]]

dp_0 = diag_part_axis(a, axis=0)
print(dp_0.numpy())
# [[ 0  1  2]
#  [12 13 14]
#  [24 25 26]]

dp_1 = diag_part_axis(a, axis=1)
print(dp_1.numpy())
# [[ 0  4  8]
#  [ 9 13 17]
#  [18 22 26]]

sionna.phy.utils.flatten_multi_index(indices,shape)[source]

Converts a tensor of index arrays into a tensor of flat indices

Input:
  • indices ([…, N], tf.int32) – Indices to flatten

  • shape ([N], tf.int32) – Shape of each index dimension. Note that indices[..., n] < shape[n] must hold for all n and every batch dimension

Output:

flat_indices ([…], tf.int32) – Flattened indices

Example

import tensorflow as tf
from sionna.phy.utils import flatten_multi_index

indices = tf.constant([2, 3])
shape = [5, 6]
print(flatten_multi_index(indices, shape).numpy())
# 15 = 2*6 + 3

sionna.phy.utils.gather_from_batched_indices(params,indices)[source]

Gathers the values of a tensor params according to batch-specific indices

Input:
  • params ([s(1), …, s(N)], any) – Tensor containing the values to gather

  • indices ([…, N], tf.int32) – Tensor containing, for each batch […], the indices at which params is gathered. Note that 0 \(\le\) indices[..., n] \(<\) s(n) must hold for all n = 1,…,N

Output:

[…], any – Tensor containing the gathered values

Example

import tensorflow as tf
from sionna.phy.utils import gather_from_batched_indices

params = tf.constant([[10, 20, 30],
                      [40, 50, 60],
                      [70, 80, 90]])
print(params.shape)
# TensorShape([3, 3])

indices = tf.constant([[[0, 1], [1, 2], [2, 0], [0, 0]],
                       [[0, 0], [2, 2], [2, 1], [0, 1]]])
print(indices.shape)
# TensorShape([2, 4, 2])
# Note that the batch shape is [2, 4]. Each batch contains a list of 2 indices

print(gather_from_batched_indices(params, indices).numpy())
# [[20, 60, 70, 10],
#  [10, 90, 80, 20]]
# Note that the output shape coincides with the batch shape.
# Element [i,j] coincides with params[indices[i,j,:]]

sionna.phy.utils.tensor_values_are_in_set(tensor,admissible_set)[source]

Checks if the input tensor values are contained in the specified admissible_set

Input:
  • tensor (tf.Tensor | list) – Tensor to validate

  • admissible_set (tf.Tensor | list) – Set of valid values that the input tensor must be composed of

Output:

bool – Returns True if and only if the tensor values are contained in admissible_set

Example

import tensorflow as tf
from sionna.phy.utils import tensor_values_are_in_set

tensor = tf.Variable([[1, 0], [0, 1]])
print(tensor_values_are_in_set(tensor, [0, 1, 2]).numpy())
# True
print(tensor_values_are_in_set(tensor, [0, 2]).numpy())
# False

sionna.phy.utils.enumerate_indices(bounds)[source]

Enumerates all indices between 0 (included) and bounds (excluded) in lexicographic order

Input:

bounds (list | tf.Tensor | np.array, int) – Collection of index bounds

Output:

[prod(bounds), len(bounds)] – Collection of all indices, in lexicographic order

Example

from sionna.phy.utils import enumerate_indices

print(enumerate_indices([2, 3]).numpy())
# [[0 0]
#  [0 1]
#  [0 2]
#  [1 0]
#  [1 1]
#  [1 2]]