GRU
Description
Returns the typedef of the GRU layer weights. Type: polymorphic.
Output parameters
typedef : cluster
input_weights : 2D array. input_weights = [features, 3*units].
hidden_weights : 2D array. hidden_weights = [units, 3*units].
input_biases : 1D array. input_biases = [3*units].
hidden_biases : 1D array. hidden_biases = [3*units].
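For reference outside LabVIEW, the same cluster layout can be mirrored with plain NumPy arrays. This is a minimal sketch only, assuming the field layout above; the dictionary keys simply reuse the cluster field names and are not part of the HAIBAL API.

```python
import numpy as np

# Sketch (plain Python/NumPy, not HAIBAL G code): arrays laid out like
# the fields of the GRU weights cluster. features = 5 and units = 3 are
# arbitrary example values.
features, units = 5, 3
gru_weights = {
    "input_weights":  np.zeros((features, 3 * units)),  # shape (5, 9)
    "hidden_weights": np.zeros((units, 3 * units)),     # shape (3, 9)
    "input_biases":   np.zeros(3 * units),              # shape (9,)
    "hidden_biases":  np.zeros(3 * units),              # shape (9,)
}
for name, arr in gru_weights.items():
    print(name, arr.shape)
```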

Dimension
- input_weights = [features, 3*units]
The size depends on the GRU layer input and the units parameter.
For example, if the input has a size of [batch = 10, timesteps = 8, features = 5] and units has a value of 3, then input_weights will have a size of [features = 5, 3*units = 9].
Another example: if the input has a size of [batch = 15, timesteps = 8, features = 6] and units has a value of 2, then input_weights will have a size of [features = 6, 3*units = 6].
- hidden_weights = [units, 3*units]
The size depends on the units parameter of the GRU layer.
For example, if units has a value of 6, then hidden_weights will have a size of [units = 6, 3*units = 18].
Another example: if units has a value of 4, then hidden_weights will have a size of [units = 4, 3*units = 12].
- input_biases = [3*units]
The size depends on the units parameter of the GRU layer.
For example, if units has a value of 6, then input_biases will have a size of [3*units = 18].
Another example: if units has a value of 4, then input_biases will have a size of [3*units = 12].
- hidden_biases = [3*units]
The size of hidden_biases is based on the same principle as the size of input_biases.
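To double-check these dimension rules outside LabVIEW, here is a small Python helper that computes the expected shape of each field from features and units. The function name and dictionary layout are illustrative assumptions, not part of the HAIBAL API; the factor 3 reflects the three GRU gates (update, reset, candidate), whose parameters are concatenated along the last axis.

```python
def gru_weight_shapes(features: int, units: int) -> dict:
    """Expected shape of each field of the GRU weights cluster.

    The factor 3 comes from the three GRU gates (update, reset,
    candidate), whose parameters are concatenated on the last axis.
    """
    return {
        "input_weights":  (features, 3 * units),
        "hidden_weights": (units, 3 * units),
        "input_biases":   (3 * units,),
        "hidden_biases":  (3 * units,),
    }

# These match the worked examples above.
assert gru_weight_shapes(5, 3)["input_weights"] == (5, 9)    # units = 3
assert gru_weight_shapes(6, 2)["input_weights"] == (6, 6)    # units = 2
assert gru_weight_shapes(5, 6)["hidden_weights"] == (6, 18)  # units = 6
assert gru_weight_shapes(5, 4)["input_biases"] == (12,)      # units = 4
```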
Example
All these examples are PNG snippets; you can drop a snippet onto your block diagram to add the depicted code to your VI (do not forget to install the HAIBAL library to run it).