
KLDivergence

Description

Computes the Kullback-Leibler divergence metric between y_true and y_pred. Type : polymorphic.

 

 

Input parameters

 

 y_pred : array, predicted values.
 y_true : array, true values.

 

Output parameters

 

kl_divergence : float, the computed Kullback-Leibler divergence.

Use cases

KL divergence, or Kullback-Leibler divergence, is a measure used in information theory and machine learning to quantify the difference between two probability distributions. It is often used when modeling probabilistic problems and is particularly useful in the field of deep learning.

Here are some specific areas of application :

  • Deep generative models : in generative models such as variational autoencoders (VAE), KL divergence is used to measure the difference between the probability distribution generated by the model and the actual probability distribution of the data.
  • Unsupervised learning : KL divergence is used in clustering algorithms to measure the difference between the distribution of clusters formed by the algorithm and the actual distribution of the data.
  • Model optimization : in model selection, KL divergence is sometimes used as a measure of model complexity, helping to prevent overfitting.
  • Neural networks : in neural networks, KL divergence is used as a loss function when training probabilistic models (see the sketch after this list).
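
As an illustration of this last point, here is a minimal sketch of KL divergence used as a training loss. It is plain Python/Keras, not HAIBAL code, and the model architecture and input size are arbitrary assumptions chosen for the example.

import tensorflow as tf

# Toy model whose softmax output is a probability distribution.
# It is trained against target distributions with a KL divergence loss
# and monitored with the matching KL divergence metric.
inputs = tf.keras.Input(shape=(10,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(4, activation="softmax")(hidden)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss=tf.keras.losses.KLDivergence(),
              metrics=[tf.keras.metrics.KLDivergence()])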

 

Calculation

The KL divergence measures the amount of information ‘lost’ when y_pred is used to approximate y_true. In other words, it indicates how different the y_pred distribution is from the y_true distribution.

N : total number of elements in the array
N’ : number of elements in the array excluding the last axis (product of all dimensions except the last)

Example : y_pred shape [3,4,5] ⇒ N = 3*4*5 = 60 and N’ = 3*4 = 12
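
Assuming the standard Keras-style KL divergence metric (element-wise terms summed over the last axis, then averaged over the N’ remaining elements), the calculation can be written as:

kl\_divergence = \frac{1}{N'} \sum_{i=1}^{N} y\_true_i \cdot \ln\!\left(\frac{y\_true_i}{y\_pred_i}\right)

A minimal NumPy sketch of this calculation follows. It is an illustration under the assumption above, not HAIBAL code; the clipping constant eps is an assumption used to avoid log(0).

import numpy as np

def kl_divergence(y_true, y_pred, eps=1e-7):
    # Clip to avoid division by zero and log(0); eps is an assumed constant.
    y_true = np.clip(y_true, eps, 1.0)
    y_pred = np.clip(y_pred, eps, 1.0)
    # Element-wise y_true * log(y_true / y_pred), summed over the last axis...
    per_sample = np.sum(y_true * np.log(y_true / y_pred), axis=-1)
    # ...then averaged over the N' remaining elements.
    return float(np.mean(per_sample))

# Shape [3, 4, 5]: N = 60 element-wise terms, N' = 12 per-sample sums.
rng = np.random.default_rng(0)
y_true = rng.dirichlet(np.ones(5), size=(3, 4))
y_pred = rng.dirichlet(np.ones(5), size=(3, 4))
print(kl_divergence(y_true, y_pred))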

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the HAIBAL library to run it).

Easy to use
