CosineSimilarity

Description

Computes the cosine similarity between true labels and predicted labels. Type: polymorphic.
Input parameters

Parameters : cluster, containing the following configuration elements:

axis : integer, the axis along which the cosine similarity is computed (the features axis).
reduction : enum, type of reduction to apply to the loss. In almost all cases this should be "Sum over Batch".
sample weights : boolean, if enabled, adds an input for weighting each sample individually.

 

Output parameters


Loss : cluster, this cluster defines the loss function used for model training.

enum : enum, an enumeration indicating the loss type (e.g., MSE, CrossEntropy, etc.). If enum is set to CustomLoss, the custom class on the right will be used as the loss function. Otherwise, the selected loss will be applied with its default configuration.
Class : object, a custom loss class instance.

Required data

y_pred : array, predicted vector. This is the model's output, typically a dense vector of floating-point values representing a direction in feature space. It does not need to be normalized, as the cosine similarity function internally handles normalization.
y_true : array, true label vector. This is the target vector, indicating the desired direction the model should learn to match. It must have the same shape as y_pred.

Use cases

Cosine similarity loss is commonly used to measure the directional alignment between two vectors, regardless of their magnitude. The cosine similarity value ranges from -1 (completely opposite) to 1 (identical), and in the context of loss computation, it is usually negated so that the loss decreases as similarity increases.
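The negated formulation described above can be illustrated with a short sketch. This is not the library's LabVIEW implementation, only a NumPy illustration of the underlying math: both vectors are normalized along the feature axis, then the loss is the negated dot product, so perfectly aligned vectors give a loss of -1 and opposite vectors give +1.

```python
import numpy as np

def cosine_similarity_loss(y_true, y_pred, axis=-1):
    # Normalize both vectors along the feature axis; inputs therefore
    # do not need to be pre-normalized, as noted for y_pred above.
    y_true_n = y_true / np.linalg.norm(y_true, axis=axis, keepdims=True)
    y_pred_n = y_pred / np.linalg.norm(y_pred, axis=axis, keepdims=True)
    # Negated cosine similarity: loss decreases as alignment increases.
    return -np.sum(y_true_n * y_pred_n, axis=axis)

# Vectors pointing in the same direction give the minimum loss of -1,
# regardless of magnitude.
print(cosine_similarity_loss(np.array([0.0, 1.0]), np.array([0.0, 2.0])))  # -1.0
```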

This loss function is particularly effective in tasks where the angle between vectors matters more than their absolute values, such as :

  • learning meaningful vector representations (embeddings),
  • image or text retrieval based on similarity,
  • recommendation systems,
  • training Siamese networks or contrastive models to detect similarity or matching pairs.


By minimizing the cosine similarity loss, the model learns to produce output vectors that align closely with the target direction in the embedding space.
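To connect this with the reduction and sample weights inputs described earlier, the following hedged sketch shows one common convention for a "Sum over Batch"-style reduction (the weighted per-sample losses are summed and divided by the batch size, as in Keras). The library's exact reduction behavior may differ; `reduce_loss` is an illustrative helper, not an API of this library.

```python
import numpy as np

def reduce_loss(per_sample_loss, sample_weights=None):
    # Optionally weight each sample's loss individually, mirroring the
    # "sample weights" input described in the parameters above.
    per_sample_loss = np.asarray(per_sample_loss, dtype=float)
    if sample_weights is not None:
        per_sample_loss = per_sample_loss * np.asarray(sample_weights, dtype=float)
    # "Sum over Batch"-style reduction: sum divided by batch size.
    return per_sample_loss.sum() / per_sample_loss.size

# Two samples with per-sample cosine losses of -1.0 and -0.5,
# the second counted twice as heavily.
print(reduce_loss([-1.0, -0.5], sample_weights=[1.0, 2.0]))  # -1.0
```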

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).