
xlogy

scipy.special.xlogy(x, y, out=None)

Compute x*log(y) so that the result is 0 if x = 0.

Parameters:
x : array_like

Multiplier

y : array_like

Argument

out : ndarray, optional

Optional output array for the function results

Returns:
z : scalar or ndarray

Computed x*log(y)
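As a minimal sketch of the special-case behavior described above: a naive `x * np.log(y)` produces `nan` when both arguments are 0, while `xlogy` returns 0 there. It also illustrates that the natural log is used.

```python
import numpy as np
from scipy.special import xlogy

# Naively, log(0) = -inf, so 0 * log(0) evaluates to nan
# (NumPy also emits a RuntimeWarning for the division by zero).
print(0 * np.log(0))       # nan

# xlogy handles the x = 0 case by returning 0 instead.
print(xlogy(0, 0))         # 0.0

# The log is the natural log, so xlogy(2, e) = 2 * ln(e) = 2.
print(xlogy(2.0, np.e))    # 2.0
```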

Notes

The log function used in the computation is the natural log.

Added in version 0.13.0.

Array API Standard Support

xlogy has experimental support for Python Array API Standard compatible backends in addition to NumPy. Please consider testing these features by setting an environment variable SCIPY_ARRAY_API=1 and providing CuPy, PyTorch, JAX, or Dask arrays as array arguments. The following combinations of backend and device (or other capability) are supported.

Library    CPU    GPU

NumPy      ✓      n/a

CuPy       n/a    ✓

PyTorch    ✓      ✓

JAX        ✓      ✓

Dask       ✓      n/a

See Support for the array API standard for more information.
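As a sketch of how the opt-in works (assuming at least one alternative backend such as CuPy, PyTorch, JAX, or Dask is installed), the flag is set in the environment before the interpreter starts:

```shell
# Opt in to SciPy's experimental array API support.
# With this set, passing e.g. a PyTorch tensor to xlogy returns
# a tensor of the same library rather than a NumPy array.
export SCIPY_ARRAY_API=1
```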

Examples

We can use this function to calculate the binary logistic loss, also known as the binary cross entropy. This loss function is used for binary classification problems and is defined as:

\[L = -\frac{1}{n} \sum_{i=1}^{n} \left[y_i \log({y_{pred}}_i) + (1 - y_i) \log(1 - {y_{pred}}_i)\right]\]

We can define the parameters x and y as y and y_pred respectively. y is the array of the actual labels, which here can be either 0 or 1. y_pred is the array of the predicted probabilities with respect to the positive class (1).

>>> import numpy as np
>>> from scipy.special import xlogy
>>> y = np.array([0, 1, 0, 1, 1, 0])
>>> y_pred = np.array([0.3, 0.8, 0.4, 0.7, 0.9, 0.2])
>>> n = len(y)
>>> loss = -(xlogy(y, y_pred) + xlogy(1 - y, 1 - y_pred)).sum()
>>> loss /= n
>>> loss
0.29597052165495025

A lower loss is usually better, as it indicates that the predictions are similar to the actual labels. In this example, since our predicted probabilities are close to the actual labels, we get an overall loss that is reasonably low and appropriate.
