Flatten
Description
Sets up and adds the Flatten layer to the model during the graph definition step. Type: polymorphic.
Input parameters
Graph in : model architecture.
parameters : layer parameters.
data_format : enum, one of channels_last or channels_first (default). The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps).
Default value "channels_first".
training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value "True".
lda_coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (since the backward run goes through the graph in reverse).
Default value "1".
in/out param :
input_shape : integer array, shape of the input (not including the batch axis). NB: to be used only if it is the first layer of the model.
output_behavior : enum, sets whether the layer is an output layer.
Default value "Not Output".
name (optional) : string, name of the layer.
Output parameters
Graph out : model architecture.
Dimension
Input shape
2D tensor with shape :
- If data_format = "channels_last" : (batch_size, input_dim).
- If data_format = "channels_first" : (batch_size, input_dim).
3D tensor with shape :
- If data_format = "channels_last" : (batch_size, steps, features).
- If data_format = "channels_first" : (batch_size, features, steps).
4D tensor with shape :
- If data_format = "channels_last" : (batch_size, rows, cols, channels).
- If data_format = "channels_first" : (batch_size, channels, rows, cols).
5D tensor with shape :
- If data_format = "channels_last" : (batch_size, input_dim1, input_dim2, input_dim3, channels).
- If data_format = "channels_first" : (batch_size, channels, input_dim1, input_dim2, input_dim3).
Output shape
2D tensor with shape (with a 2D tensor as input) :
- If data_format = "channels_last" : (batch_size, input_dim).
- If data_format = "channels_first" : (batch_size, input_dim).
2D tensor with shape (with a 3D tensor as input) :
- If data_format = "channels_last" : (batch_size, steps * features).
- If data_format = "channels_first" : (batch_size, features * steps).
2D tensor with shape (with a 4D tensor as input) :
- If data_format = "channels_last" : (batch_size, rows * cols * channels).
- If data_format = "channels_first" : (batch_size, channels * rows * cols).
2D tensor with shape (with a 5D tensor as input) :
- If data_format = "channels_last" : (batch_size, input_dim1 * input_dim2 * input_dim3 * channels).
- If data_format = "channels_first" : (batch_size, channels * input_dim1 * input_dim2 * input_dim3).
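HAIBAL examples are LabVIEW block diagrams, so the shape rule above can only be illustrated in text with another language. The following minimal NumPy sketch shows how a 4D channels_first input collapses to 2D; the dimension names and values are illustrative assumptions, not part of the HAIBAL API.

import numpy as np

# Illustrative dimensions only (not HAIBAL names): a 4D channels_first input.
batch_size, channels, rows, cols = 10, 7, 5, 3
x = np.zeros((batch_size, channels, rows, cols), dtype=np.float32)

# Flatten keeps the batch axis and collapses all remaining axes into one.
y = x.reshape(batch_size, -1)

print(x.shape)  # (10, 7, 5, 3)
print(y.shape)  # (10, 105), i.e. channels * rows * cols = 7 * 5 * 3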
Example
All these examples are PNG snippets: you can drop a snippet onto your block diagram and the depicted code is added to your VI (do not forget to install the HAIBAL library to run it).
Flatten layer

1 – Generate a set of data
We generate an array of data of type single with shape [batch_size, channels, rows, cols] (channels_first is the default layer configuration).
With the channels_last configuration, the shape is [batch_size, rows, cols, channels].
2 – Define graph
First, we define the first layer of the graph, an Input layer (explicit input layer method), set up as an input array shaped [channels = 7, rows = 5, cols = 3].
Then we add the Flatten layer to the graph.
3 – Run graph
We call the forward method and retrieve the result with the "Prediction 2D" method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the output shape of the layer), and the second is the prediction, with shape [batch_size, channels * rows * cols].
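Since the snippet above is a LabVIEW block diagram, it cannot be reproduced as text. As a rough analogue, here is a minimal Keras sketch of the same three steps; the Keras API (keras.Input, keras.layers.Flatten) stands in for the HAIBAL methods and is an assumption of this sketch, not the HAIBAL API itself.

import numpy as np
from tensorflow import keras

# 1 - Generate a set of data: single-precision array shaped
#     [batch_size, channels, rows, cols] (channels_first).
batch_size = 10
data = np.random.rand(batch_size, 7, 5, 3).astype(np.float32)

# 2 - Define graph: an explicit Input layer shaped [7, 5, 3],
#     followed by a Flatten layer configured for channels_first data.
inputs = keras.Input(shape=(7, 5, 3))
outputs = keras.layers.Flatten(data_format="channels_first")(inputs)
model = keras.Model(inputs, outputs)

# 3 - Run graph: forward pass; the prediction is shaped
#     [batch_size, channels * rows * cols] = (10, 105).
prediction = model.predict(data)
print(prediction.shape)  # (10, 105)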
Flatten layer, batch and dimension

1 – Generate a set of data
We generate an array of data of type single with shape [number of batches, batch_size, channels, rows, cols] (channels_first is the default layer configuration).
With the channels_last configuration, the shape is [number of batches, batch_size, rows, cols, channels].
2 – Define graph
First, we define the first layer of the graph, an Input layer (explicit input layer method), set up as an input array shaped [channels = 7, rows = 5, cols = 3].
Then we add the Flatten layer to the graph.
3 – Run graph
We call the forward method and retrieve the result with the "Prediction 2D" method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the output shape of the layer), and the second is the prediction, with shape [batch_size, channels * rows * cols].
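The same hedged Keras sketch extends to the batched case by iterating over the leading "number of batches" axis (again, all names are illustrative assumptions, not the HAIBAL API).

import numpy as np
from tensorflow import keras

# 1 - Data shaped [number of batches, batch_size, channels, rows, cols].
number_of_batches, batch_size = 4, 10
data = np.random.rand(number_of_batches, batch_size, 7, 5, 3).astype(np.float32)

# 2 - Define graph, identical to the single-batch example.
inputs = keras.Input(shape=(7, 5, 3))
outputs = keras.layers.Flatten(data_format="channels_first")(inputs)
model = keras.Model(inputs, outputs)

# 3 - Run graph once per batch; each prediction is shaped (10, 105).
for batch in data:
    print(model.predict(batch).shape)  # (10, 105)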