
GroupNorm

Description

Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization (https://arxiv.org/abs/1803.08494).

This operator transforms the input according to: y = gamma * (x - mean) / sqrt(variance + epsilon) + beta

The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. num_channels must be divisible by num_groups. The mean and standard deviation are calculated separately over each group. The weight and bias are per-channel affine transform parameter vectors of size num_channels.
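The grouping and normalization described above can be sketched in NumPy for the NCHW layout (channels_last = false). This is a minimal illustration, not the node's actual implementation; the function name group_norm is an assumption for this example.

```python
import numpy as np

def group_norm(x, gamma, beta, num_groups, epsilon=1e-7):
    """Group Normalization sketch for an NCHW input (illustrative only).

    x:     (N, C, H, W) input tensor
    gamma: (C,) per-channel scale
    beta:  (C,) per-channel shift
    """
    n, c, h, w = x.shape
    # num_channels must be divisible by num_groups.
    assert c % num_groups == 0
    # Split the C channels into num_groups groups.
    xg = x.reshape(n, num_groups, c // num_groups, h, w)
    # Mean and variance are computed separately over each group.
    mean = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    # y = gamma * (x - mean) / sqrt(variance + epsilon) + beta
    y = ((xg - mean) / np.sqrt(var + epsilon)).reshape(n, c, h, w)
    return gamma.reshape(1, c, 1, 1) * y + beta.reshape(1, c, 1, 1)
```

With gamma = 1 and beta = 0, each group of the output has zero mean and unit variance.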

The activation attribute can be used to enable activation after group normalization.

 

Input parameters

 

specified_outputs_name : array, lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

X (heterogeneous) – T : object, input data tensor. Dimensions are (N x H x W x C) when channels_last is 1 or (N x C x H x W) otherwise, where N is the batch size, C is the number of channels, and H and W are the height and width of the data.
gamma (heterogeneous) – M : object, 1D gamma tensor for normalization with shape (C), where C is number of channels.
beta (heterogeneous) – M : object, 1D beta tensor for normalization with shape (C), where C is number of channels.

 Parameters : cluster,

activation : enum, activation after group normalization.
Default value “None”.
channels_last : boolean, true if the input and output are in the NHWC layout, false if they are in the NCHW layout.
Default value “True”.
epsilon : float, the epsilon value to use to avoid division by zero.
Default value “1E-7”.
groups : integer, the number of groups of channels. It should be a divisor of the number of channels C.
Default value “0”.
 training? : boolean, whether the layer is in training mode (it can store data for the backward pass).
Default value “True”.
 lda coeff : float, the coefficient by which the loss derivative is multiplied before being propagated to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.
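When channels_last is true, the same computation applies to NHWC data; conceptually the node can transpose to NCHW, normalize per group, and transpose back. A hedged NumPy sketch of that layout handling (the function name group_norm_nhwc is illustrative, not the node's API):

```python
import numpy as np

def group_norm_nhwc(x, gamma, beta, num_groups, epsilon=1e-7):
    """Group Normalization sketch for an NHWC input (channels_last = true)."""
    x_nchw = np.transpose(x, (0, 3, 1, 2))          # NHWC -> NCHW
    n, c, h, w = x_nchw.shape
    # Per-group statistics, as in the NCHW case.
    xg = x_nchw.reshape(n, num_groups, c // num_groups, h, w)
    mean = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    y = ((xg - mean) / np.sqrt(var + epsilon)).reshape(n, c, h, w)
    y = gamma.reshape(1, c, 1, 1) * y + beta.reshape(1, c, 1, 1)
    return np.transpose(y, (0, 2, 3, 1))            # NCHW -> back to NHWC
```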

Output parameters

 

 Y (heterogeneous) – T : object, the output tensor of the same shape as X.

Type Constraints

T in (tensor(float), tensor(float16)) : Constrain input X and output Y types to float tensors.

M in (tensor(float), tensor(float16)) : Constrain gamma and beta to float tensors.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).