ConvLSTM1D
Description
Sets up and adds a ConvLSTM1D (1D convolutional LSTM) layer to the model during the graph definition step. Type : polymorphic.
Input parameters
Model in : model architecture.
Parameters : layer parameters.
filters : integer, dimensionality of the output space. Default value "3".
size : integer, length of the 1D convolution window. Default value "3".
stride : integer, stride length of the convolution. Default value "1".
explicit padding : array, number of pixels to pad at the beginning and end of each spatial axis. Batch and channel axes are not padded. Only used when padding = EXPLICIT. Default value "empty".
padding : enum, type of padding to apply. Default value "VALID".
Activation : cluster, activation function to use.
Recurrent Activation : cluster, activation function to use for the recurrent step.
Output Activation : cluster, activation function to use for the output.
use bias? : boolean, whether the layer uses a bias vector. Default value "True".
Kernel Initializer : cluster, initializer for the kernel weights matrix, used for the linear transformation of the inputs.
Recurrent Initializer : cluster, initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
Bias Initializer : cluster, initializer for the bias vector.
unit forget bias? : boolean, if True, adds 1 to the bias of the forget gate at initialization. Use in combination with Bias Initializer = "Zeros". Default value "True".
dropout : float between 0 and 1, fraction of the units to drop for the linear transformation of the inputs.
recurrent dropout : float between 0 and 1, fraction of the units to drop for the linear transformation of the recurrent state.
return sequences? : boolean, whether to return the full output sequence (True) or only the last output in the sequence (False). Default value "False".
stateful? : boolean, if True, the last state for each sample at index i in a batch will be used as the initial state for the sample of index i in the following batch. Default value "False".
Kernel Regularizer : cluster, regularizer function applied to the kernel weights matrix.
Recurrent Regularizer : cluster, regularizer function applied to the recurrent_kernel weights matrix.
Bias Regularizer : cluster, regularizer function applied to the bias vector.
training? : boolean, whether the layer is in training mode (can store data for the backward pass). Default value "True".
store? : boolean, whether the layer stores the last iteration gradient (accessible via the "get_gradients" function). Default value "False".
update? : boolean, whether the layer's variables should be updated during the backward pass; setting it to False is equivalent to freezing the layer. Default value "True".
lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer during the backward pass. Default value "1".
name (optional) : string, name of the layer.
Output parameters
Model out : model architecture.
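The parameter names above closely mirror the Keras ConvLSTM1D API. As a non-authoritative point of reference (an assumption, not part of this library's documentation), the following Python sketch shows roughly how the documented defaults would map onto tf.keras.layers.ConvLSTM1D; the LabVIEW layer itself is configured through clusters on the block diagram, not through code like this.

```python
import tensorflow as tf  # assumes TF >= 2.6, where ConvLSTM1D was added

# Rough Keras equivalent of the documented defaults (assumption:
# this layer mirrors tf.keras.layers.ConvLSTM1D parameter-for-parameter).
layer = tf.keras.layers.ConvLSTM1D(
    filters=3,              # "filters", default 3
    kernel_size=3,          # "size", default 3
    strides=1,              # "stride", default 1
    padding="valid",        # "padding", default VALID
    use_bias=True,          # "use bias?", default True
    unit_forget_bias=True,  # "unit forget bias?", default True
    dropout=0.0,            # "dropout"
    recurrent_dropout=0.0,  # "recurrent dropout"
    return_sequences=False, # "return sequences?", default False
    stateful=False,         # "stateful?", default False
)
```

Parameters such as training?, store?, update? and lda coeff control this library's own backward pass and have no Keras counterpart.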
Dimension
Input shape
4D tensor with shape:
- If data_format = "channels_last" : (samples, time, rows, channels).
- If data_format = "channels_first" : (samples, time, channels, rows).
Output shape
- If "return sequences?" = True :
  - If data_format = "channels_last" : 4D tensor with shape (samples, timesteps, new_rows, filters).
  - If data_format = "channels_first" : 4D tensor with shape (samples, timesteps, filters, new_rows).
- If "return sequences?" = False :
  - If data_format = "channels_last" : 3D tensor with shape (samples, new_rows, filters).
  - If data_format = "channels_first" : 3D tensor with shape (samples, filters, new_rows).
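As a minimal illustration of these shape rules, here is a Python sketch (again assuming the layer behaves like Keras ConvLSTM1D, channels_last layout):

```python
import numpy as np
import tensorflow as tf

# channels_last input: (samples, time, rows, channels)
x = np.random.rand(10, 7, 5, 6).astype(np.float32)

full = tf.keras.layers.ConvLSTM1D(filters=3, kernel_size=3,
                                  return_sequences=True)(x)
last = tf.keras.layers.ConvLSTM1D(filters=3, kernel_size=3,
                                  return_sequences=False)(x)

# With VALID padding: new_rows = rows - size + 1 = 5 - 3 + 1 = 3
print(full.shape)  # (10, 7, 3, 3): (samples, timesteps, new_rows, filters)
print(last.shape)  # (10, 3, 3):    (samples, new_rows, filters)
```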
Example
All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).
ConvLSTM1D layer

1 – Generate a set of data
We generate an array of data of type single and shape [samples = 10, time = 7, channels = 6, rows = 5].
2 – Define graph
First, we define the first layer of the graph, which is an Input layer (explicit input layer method). This layer is set up as an input array shaped [time = 7, channels = 6, rows = 5].
Then we add the ConvLSTM1D layer to the graph.
3 – Run graph
We call the forward method and retrieve the result with the "Prediction 3D" method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the shape of the output layer), and the second is the prediction, shaped [samples, filters, new_rows].
The output dimension depends on the "return sequences?" parameter; refer to the "Dimension" chapter of this documentation.
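The snippet itself is LabVIEW block-diagram code. As a hedged cross-check only, the same three steps look roughly like this in Python/Keras, matching the [samples, time, channels, rows] layout above (note that channels_first convolutions may require a GPU build of TensorFlow):

```python
import numpy as np
import tensorflow as tf

# 1 - Generate a set of data: [samples=10, time=7, channels=6, rows=5]
x = np.random.rand(10, 7, 6, 5).astype(np.float32)

# 2 - Define graph: Input layer [time, channels, rows], then ConvLSTM1D
model = tf.keras.Sequential([
    tf.keras.Input(shape=(7, 6, 5)),
    tf.keras.layers.ConvLSTM1D(filters=3, kernel_size=3,
                               data_format="channels_first"),
])

# 3 - Run graph: prediction shaped (samples, filters, new_rows) = (10, 3, 3)
y = model.predict(x)
print(y.shape)
```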
ConvLSTM1D layer, batch and dimension

1 – Generate a set of data
We generate an array of data of type single and shape [number of batch = 9, samples = 10, time = 7, channels = 6, rows = 5].
2 – Define graph
First, we define the first layer of the graph, which is an Input layer (explicit input layer method). This layer is set up as an input array shaped [time = 7, channels = 6, rows = 5].
Then we add the ConvLSTM1D layer to the graph.
3 – Run graph
We call the forward method and retrieve the result with the "Prediction 3D" method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the shape of the output layer), and the second is the prediction, shaped [samples, filters, new_rows].
The output dimension depends on the "return sequences?" parameter; refer to the "Dimension" chapter of this documentation.
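Again as a non-authoritative sketch, the batched variant corresponds to looping over the leading batch dimension; the shapes below are taken from the walkthrough above, and the per-batch forward call is an assumption about how the batch dimension is consumed.

```python
import numpy as np
import tensorflow as tf

# [number of batch = 9, samples = 10, time = 7, channels = 6, rows = 5]
data = np.random.rand(9, 10, 7, 6, 5).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(7, 6, 5)),
    tf.keras.layers.ConvLSTM1D(filters=3, kernel_size=3,
                               data_format="channels_first"),
])

# Forward one batch at a time; every prediction is shaped (10, 3, 3)
for batch in data:
    y = model.predict(batch, verbose=0)
    print(y.shape)  # (samples, filters, new_rows)
```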