
Dropout

Description

Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar), and an optional input training_mode (boolean scalar). It produces two outputs: output (floating-point tensor) and mask (optional Tensor<bool>). If training_mode is true, the output will be a random dropout of the input. Note that this Dropout scales the masked input data by the following equation, so to convert the trained model into inference mode the user can simply not pass the training_mode input or set it to false.

output = scale * data * mask,

where

scale = 1. / (1. - ratio).
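
To make the equation concrete, here is a minimal NumPy sketch of the computation (a Python reference for illustration only, not the LabVIEW node itself); the function name, the seed handling, and the use of NumPy's default random generator are assumptions made for this example.

import numpy as np

def dropout_reference(data, ratio=0.5, training_mode=False, seed=0):
    # Inference mode (or ratio == 0): the output is a copy of the input
    # and the mask, if requested, is all ones.
    if not training_mode or ratio == 0.0:
        return data.copy(), np.ones_like(data, dtype=bool)
    # Training mode: keep each element with probability (1 - ratio) ...
    rng = np.random.default_rng(seed)
    mask = rng.random(data.shape) >= ratio
    # ... and rescale the kept elements by scale = 1 / (1 - ratio).
    scale = 1.0 / (1.0 - ratio)
    return scale * data * mask, mask

x = np.arange(6, dtype=np.float32).reshape(2, 3)
y, m = dropout_reference(x, ratio=0.5, training_mode=True)

The rescaling keeps the expected value of the output equal to that of the input, which is why no extra scaling is needed at inference time.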

This operator has optional inputs/outputs. See ONNX IR for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument’s name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted.

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

data (heterogeneous) – T : object, the input data as a tensor.
ratio (optional, heterogeneous) – T1 : object, the ratio of random dropout, with value in [0, 1). If this input is not set, or is set to 0, the output is a simple copy of the input. If it is non-zero, the output is a random dropout of the scaled input, which is typically the case during training. It is optional and defaults to 0.5 when not specified.
training_mode (optional, heterogeneous) – T2 : object, if set to true, indicates that dropout is being used for training. It is optional and defaults to false unless specified explicitly. If it is false, ratio is ignored and the operation mimics inference mode: nothing is dropped from the input data, and if mask is requested as an output it will contain all ones (see the sketch after this section).

 Parameters : cluster,

seed : integer, seed for the random generator; if not specified, one is generated automatically.
Default value “0”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, defines the coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass runs from the last layer to the first).
Default value “1”.

 name (optional) : string, name of the node.
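
The effect of ratio and training_mode described above can be reproduced outside the LabVIEW environment. The sketch below builds a standalone ONNX Dropout node with the onnx helper API and runs it with onnxruntime; it is an illustration only (not the library's snippet), it assumes both Python packages are installed, and the tensor names, shapes, seed, and opset version are arbitrary choices for the example.

import numpy as np
import onnx
from onnx import helper, TensorProto
import onnxruntime as ort

# One-node graph wrapping the ONNX Dropout operator (opset 13 here).
node = helper.make_node(
    "Dropout",
    inputs=["data", "ratio", "training_mode"],
    outputs=["output", "mask"],
    seed=0,  # fixed seed so the mask is reproducible
)
graph = helper.make_graph(
    [node],
    "dropout_demo",
    inputs=[
        helper.make_tensor_value_info("data", TensorProto.FLOAT, [2, 3]),
        helper.make_tensor_value_info("ratio", TensorProto.FLOAT, []),         # scalar
        helper.make_tensor_value_info("training_mode", TensorProto.BOOL, []),  # scalar
    ],
    outputs=[
        helper.make_tensor_value_info("output", TensorProto.FLOAT, [2, 3]),
        helper.make_tensor_value_info("mask", TensorProto.BOOL, [2, 3]),
    ],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)

sess = ort.InferenceSession(model.SerializeToString(),
                            providers=["CPUExecutionProvider"])
data = np.ones((2, 3), dtype=np.float32)

# Training mode: roughly half the elements are zeroed, the rest scaled by 1 / (1 - 0.5) = 2.
out_train, mask_train = sess.run(None, {
    "data": data,
    "ratio": np.array(0.5, dtype=np.float32),
    "training_mode": np.array(True),
})

# Inference mode: the output is a plain copy and the mask is all ones (True).
out_infer, mask_infer = sess.run(None, {
    "data": data,
    "ratio": np.array(0.5, dtype=np.float32),
    "training_mode": np.array(False),
})
assert np.array_equal(out_infer, data) and mask_infer.all()

The same contract applies to the node in this library: leaving training_mode unwired (or false) gives the inference behavior, regardless of ratio.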

Output parameters

 Graphs out : cluster, ONNX model architecture.

output (heterogeneous) – T : object, the output.
mask (optional, heterogeneous) – T2 : object, the output mask.

Type Constraints

T in (tensor(bfloat16), tensor(double), tensor(float), tensor(float16)) : Constrain input and output types to float tensors.

T1 in (tensor(double), tensor(float), tensor(float16)) : Constrain input ‘ratio’ types to float tensors.

T2 in (tensor(bool)) : Constrain output ‘mask’ types to boolean tensors.
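
When the surrounding graph is assembled by hand, these constraints can be checked programmatically before running the model. The helper below is hypothetical (it is not part of this library or of ONNX); it simply restates the three constraints in terms of ONNX element types.

from onnx import TensorProto

# Element types allowed by each constraint above.
ALLOWED_T  = {TensorProto.BFLOAT16, TensorProto.DOUBLE, TensorProto.FLOAT, TensorProto.FLOAT16}
ALLOWED_T1 = {TensorProto.DOUBLE, TensorProto.FLOAT, TensorProto.FLOAT16}
ALLOWED_T2 = {TensorProto.BOOL}

def dropout_types_ok(data_type, ratio_type=None, mask_type=None):
    # Hypothetical check: True only if every provided element type satisfies
    # its constraint (T for data/output, T1 for ratio, T2 for mask).
    if data_type not in ALLOWED_T:
        return False
    if ratio_type is not None and ratio_type not in ALLOWED_T1:
        return False
    if mask_type is not None and mask_type not in ALLOWED_T2:
        return False
    return True

# Example: float16 data with a float ratio and a boolean mask is valid.
assert dropout_types_ok(TensorProto.FLOAT16, TensorProto.FLOAT, TensorProto.BOOL)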

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).