
SquaredHinge

Description

Computes the squared hinge loss between y_true and y_pred. Type : polymorphic.

Input parameters

Parameters : cluster

reduction : enum, type of reduction to apply to the loss. In almost all cases this should be “Sum over Batch”.
sample weights : boolean, if enabled, adds an input for weighting each sample individually.
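To make the two inputs concrete, here is a minimal NumPy sketch of how a reduction and optional per-sample weights could combine. This is an illustration under stated assumptions, not the library's implementation; the function and option names are hypothetical:

```python
import numpy as np

def reduce_loss(per_sample, reduction="sum_over_batch", sample_weights=None):
    """Hypothetical helper mirroring the node's reduction options."""
    per_sample = np.asarray(per_sample, dtype=float)
    if sample_weights is not None:
        # Sample weights scale each sample's loss before reduction
        per_sample = per_sample * np.asarray(sample_weights, dtype=float)
    if reduction == "sum_over_batch":
        # "Sum over Batch": sum of the (weighted) losses divided by batch size
        return per_sample.sum() / per_sample.shape[0]
    if reduction == "sum":
        return per_sample.sum()
    return per_sample  # no reduction: return per-sample losses
```

For example, per-sample losses `[1.0, 3.0]` reduce to `2.0` under “Sum over Batch”, and weighting the second sample by 0 reduces them to `0.5`.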

Output parameters


Loss : cluster, this cluster defines the loss function used for model training.

enum : enum, an enumeration indicating the loss type (e.g., MSE, CrossEntropy, etc.). If enum is set to CustomLoss, the custom class on the right will be used as the loss function. Otherwise, the selected loss will be applied with its default configuration.
 Class : object, a custom loss class instance.

Required data

 y_pred : array, predicted values.
 y_true : array, true values, expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1.
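As a reference for what the node computes, here is a minimal NumPy sketch of squared hinge loss, including the binary-label conversion described above. The function name and conversion check are illustrative assumptions, not the library's API:

```python
import numpy as np

def squared_hinge(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Binary {0, 1} labels are remapped to {-1, 1}, as described above
    if np.all((y_true == 0.0) | (y_true == 1.0)):
        y_true = 2.0 * y_true - 1.0
    # Squared hinge: mean of max(0, 1 - y_true * y_pred)^2
    margins = np.maximum(0.0, 1.0 - y_true * y_pred)
    return float(np.mean(margins ** 2))

# Example: binary labels {0, 1} are converted to {-1, 1} internally
loss = squared_hinge([0, 1, 1], [-0.8, 0.9, 0.4])
```

Note that a correctly classified sample with margin at least 1 (e.g. y_true = 1, y_pred = 2.0) contributes zero loss.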

Use cases

Squared hinge loss is a variation of hinge loss that is primarily used in classification tasks, especially with Support Vector Machines (SVMs). Unlike standard hinge loss, which penalizes margin violations linearly, squared hinge loss squares the margin term, punishing violations more severely.

It is often preferred in cases where a greater penalty for incorrect classifications is desired to encourage a clearer margin between classes.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).
