
CategoricalCrossentropy

Description

Computes the crossentropy loss between the labels and predictions. Type : polymorphic.
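For reference (this formula is not stated on the original page, but it is the standard definition of the loss), the per-sample categorical crossentropy over K classes is

$$\mathcal{L}(y^{\mathrm{true}}, y^{\mathrm{pred}}) = -\sum_{k=1}^{K} y^{\mathrm{true}}_{k} \,\log\!\left(y^{\mathrm{pred}}_{k}\right)$$

where y_true is the one-hot target vector and y_pred the predicted probability distribution.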


Input parameters


CategoricalCrossentropy in : class
Parameters : cluster, containing :

from_logits : boolean, whether to interpret y_pred as a tensor of logit values. By default, y_pred is assumed to contain probabilities.
axis : integer, the axis along which to compute crossentropy (the features axis).
label_smoothing : float in range [0, 1]. When 0, no smoothing occurs. When > 0, the loss is computed between the predicted labels and a smoothed version of the true labels, where the smoothing squeezes each one-hot label towards the uniform value 1/num_classes (see the sketch after this list). Larger values of label_smoothing correspond to heavier smoothing.
reduction : enum, type of reduction to apply to the loss. In almost all cases this should be "Sum over Batch".
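Since HAIBAL nodes are LabVIEW VIs, the actual block-diagram wiring cannot be shown as text. The following NumPy sketch is our own illustration, not the library's code: it shows how the four parameters above typically combine, assuming Keras-style semantics.

import numpy as np

def categorical_crossentropy(y_true, y_pred, from_logits=False,
                             axis=-1, label_smoothing=0.0,
                             reduction="sum over batch"):
    # Illustrative sketch only -- NOT the HAIBAL implementation.
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)

    # from_logits: map raw scores to probabilities with a softmax.
    if from_logits:
        z = y_pred - y_pred.max(axis=axis, keepdims=True)  # numerical stability
        y_pred = np.exp(z) / np.exp(z).sum(axis=axis, keepdims=True)

    # label_smoothing: squeeze one-hot targets towards uniform 1/num_classes.
    if label_smoothing > 0.0:
        k = y_true.shape[axis]
        y_true = y_true * (1.0 - label_smoothing) + label_smoothing / k

    # Per-sample crossentropy along the features axis.
    eps = 1e-7  # avoid log(0)
    per_sample = -(y_true * np.log(np.clip(y_pred, eps, 1.0))).sum(axis=axis)

    # reduction: "Sum over Batch" averages the per-sample losses.
    return per_sample.mean() if reduction == "sum over batch" else per_sample

y_true = [[0, 0, 1], [0, 1, 0]]              # 2 samples, 3 classes, one-hot
y_pred = [[0.1, 0.3, 0.6], [0.2, 0.7, 0.1]]  # predicted probabilities
print(categorical_crossentropy(y_true, y_pred))  # about 0.434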


Output parameters


CategoricalCrossentropy out : class

Required data

y_pred : array, predicted values. If from_logits = true, these are raw logits (for example, [0.1, 0.8, 0.9]); otherwise they are probabilities (for example, [0.1, 0.3, 0.6] for a 3-class problem).
y_true : array, true values, one-hot encoded (for example, [0, 0, 1] for a 3-class problem).
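With these example values, only the true-class term contributes to the sum: for y_true = [0, 0, 1] and y_pred = [0.1, 0.3, 0.6], the per-sample loss is −log(0.6) ≈ 0.511 (natural logarithm).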

Use cases

Categorical crossentropy loss is a loss function used in multiclass classification problems. It measures the difference between the predicted probabilities for each class and the actual labels, which are usually presented as one-hot vectors (a vector where only one value is 1 and the others are 0).

This function is ideal for cases where you have more than two classes to predict, such as classifying images into different categories, speech recognition, or classifying documents into multiple categories.
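The parameter set above matches the Keras CategoricalCrossentropy loss; assuming the node follows those semantics, an equivalent configuration in Python/Keras would look like this (the values shown are ours, chosen to match the 3-class example above).

import tensorflow as tf

loss_fn = tf.keras.losses.CategoricalCrossentropy(
    from_logits=False,                # y_pred already contains probabilities
    label_smoothing=0.0,              # no smoothing
    axis=-1,                          # features axis
    reduction="sum_over_batch_size",  # Keras name for "Sum over Batch"
)

y_true = [[0, 0, 1], [0, 1, 0]]
y_pred = [[0.1, 0.3, 0.6], [0.2, 0.7, 0.1]]
print(float(loss_fn(y_true, y_pred)))  # about 0.434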

Example

All these examples are PNG snippets: you can drop a snippet onto your block diagram to get the depicted code added to your VI (do not forget to install the HAIBAL library to run it).
