
BinaryCrossentropy

Description

Computes the cross-entropy loss between true labels and predicted labels. Type : polymorphic.


Input parameters


BinaryCrossentropy in : class
Parameters : cluster containing the following elements:

from_logits : boolean, whether to interpret y_pred as a tensor of logit values. By default, y_pred is assumed to contain probabilities.
axis : integer, the axis along which to compute cross-entropy (the features axis).
label_smoothing : float in the range [0, 1]. When 0, no smoothing occurs. When > 0, the loss is computed between the predicted labels and a smoothed version of the true labels, where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to heavier smoothing.
reduction : enum, type of reduction to apply to the loss. In almost all cases this should be "Sum over Batch". These parameters are illustrated in the sketch after this list.
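As a point of reference, here is a minimal Python sketch of how these parameters behave, assuming the node follows the same semantics as tf.keras.losses.BinaryCrossentropy (an assumption on our part; the HAIBAL node itself is used from the LabVIEW block diagram):

import tensorflow as tf

y_true = [[0.0], [1.0], [1.0], [0.0]]
y_pred = [[0.1], [0.8], [0.6], [0.3]]  # probabilities in [0, 1]

# Defaults: y_pred read as probabilities, "Sum over Batch" reduction
# (the summed loss divided by the batch size, i.e. the batch mean).
bce = tf.keras.losses.BinaryCrossentropy(
    from_logits=False,
    label_smoothing=0.0,
    axis=-1,
    reduction="sum_over_batch_size",
)
print(bce(y_true, y_pred).numpy())  # ~0.30

# With from_logits = True, the same input is read as raw logits in [-inf, inf].
logits = [[-2.2], [1.4], [0.4], [-0.8]]
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
print(bce_logits(y_true, logits).numpy())

# label_smoothing = 0.2 squeezes targets towards 0.5:
# y := y * (1 - 0.2) + 0.1, so 0 becomes 0.1 and 1 becomes 0.9.
bce_smooth = tf.keras.losses.BinaryCrossentropy(label_smoothing=0.2)
print(bce_smooth(y_true, y_pred).numpy())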


Output parameters


BinaryCrossentropy out : class

Required data

y_pred : array, predicted value. This is the model's prediction, i.e., a single floating-point value which either represents a logit (i.e., a value in [-inf, inf] when from_logits = True) or a probability (i.e., a value in [0, 1] when from_logits = False).
y_true : array, true label. This is either 0 or 1.
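To make the relationship between the two concrete, here is a small hand-check in plain Python, a sketch assuming from_logits = False (y_pred already holds a probability) and the natural logarithm:

import math

def bce_single(y_true, y_pred):
    """Per-sample loss: -[y*log(p) + (1 - y)*log(1 - p)]."""
    return -(y_true * math.log(y_pred) + (1.0 - y_true) * math.log(1.0 - y_pred))

print(bce_single(1.0, 0.8))  # ~0.223: confident and correct, small loss
print(bce_single(1.0, 0.1))  # ~2.303: confident but wrong, large loss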

Use cases

Binary crossentropy loss is a loss function commonly used in binary classification problems. It measures the difference between the predicted probabilities and the actual labels, which are typically encoded as 0 or 1.
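Written out, the per-sample loss behind this description is the following, with y the true label and p the predicted probability; the "Sum over Batch" reduction then averages it over the N samples of a batch:

\mathcal{L}(y, p) = -\bigl[\, y \log(p) + (1 - y)\,\log(1 - p) \,\bigr],
\qquad
\mathcal{L}_{\text{batch}} = \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}\bigl(y_i, p_i\bigr)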

This function is particularly effective for evaluating model performance when the goal is to predict accurate probabilities for two mutually exclusive classes. It is often used to train neural networks on tasks such as spam detection, binary text classification, or deciding whether an input contains a given object or not.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram to get the depicted code added to your VI (do not forget to install the HAIBAL library to run it).
