GroupNorm

Description

Carries out instance normalization as described in the paper https://arxiv.org/abs/1607.08022.

y = scale * (x - mean) / sqrt(variance + epsilon) + B, where mean and variance are computed per instance per channel.
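The formula above can be sketched as a reference implementation in Python with NumPy (a hypothetical helper for illustration only; the actual node runs as LabVIEW G code). Assuming an input of shape (N x C x H x W), mean and variance are reduced over the spatial axes so each instance and each channel is normalized independently:

```python
import numpy as np

def instance_norm(x, scale, B, epsilon=1e-5):
    """Reference sketch: y = scale * (x - mean) / sqrt(variance + epsilon) + B,
    with mean and variance computed per instance (N) and per channel (C)."""
    # Reduce over the spatial axes only: (H, W) for images, (D1, ..., Dn) otherwise.
    axes = tuple(range(2, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    variance = x.var(axis=axes, keepdims=True)
    # scale and B are 1-D tensors of size C; reshape them so they
    # broadcast across the batch and spatial dimensions.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    return scale.reshape(shape) * (x - mean) / np.sqrt(variance + epsilon) + B.reshape(shape)
```

With scale = 1 and B = 0, the output of each (instance, channel) slice has approximately zero mean and unit variance, which is the defining property of instance normalization.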

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

input (heterogeneous) – T : object, input data tensor from the previous operator; dimensions for the image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For the non-image case, the dimensions are in the form (N x C x D1 x D2 … Dn), where N is the batch size.
scale (heterogeneous) – T : object, the input 1-dimensional scale tensor of size C.
B (heterogeneous) – T : object, the input 1-dimensional bias tensor of size C.

 Parameters : cluster,

epsilon : float, the epsilon value used to avoid division by zero.
Default value “1E-5”.
 training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value “True”.
 lda coeff : float, defines the coefficient by which the loss derivative will be multiplied before being sent to the previous layer (during the backward pass, layers are traversed in reverse order).
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

 

 output (heterogeneous) – T : object, the output tensor of the same shape as input.

Type Constraints

T in (tensor(double), tensor(float), tensor(float16)) : Constrain input and output types to float tensors.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).