
Convolution 1D Transpose

Description

Set up and add the convolution 1D transpose layer to the model during the graph definition step. Type: polymorphic.

 

Input parameters

 

Graph in : model architecture.

parameters : layer parameters (a configuration sketch follows this list).

n_filters : integer, the dimensionality of the output space.
Default value “3”.
size : integer, specify the length of the 1D convolution window.
Default value “3”.
stride : integer, specify the stride length of the convolution.
Default value “1”.
activation : enum, activation function to use.
Default value “relu”.
use_bias? : boolean, whether the layer uses a bias vector.
Default value “True”.
padding : boolean, False = “valid” means no padding. True = “same” pads the input evenly with zeros to the left/right so that the output has the same width as the input.
Default value “False”.
data_format : enum, one of channels_last or channels_first (default). The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps).
Default value “channels_first”.
optimizer :

algorithm : enum, name of the optimizer to use for the optimizer instance.
Default value “adam”.
learning_rate : float, define the learning rate to use.
Default value “0.001”.
beta_1 : float, define the exponential decay rate for the 1st moment estimates.
Default value “0.9”.
beta_2 : float, define the exponential decay rate for the 2nd moment estimates.
Default value “0.999”.

filter_initializer : enum, initializer for the kernel weights matrix.
Default value “glorot_uniform”.
bias_initializer : enum, initializer for the bias vector.
Default value “zero”.
training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value “True”.
store? : boolean, whether the layer stores the last iteration gradient (accessible via the “get_gradients” function).
Default value “False”.
update? : boolean, whether the layer’s variables should be updated during the backward pass. Setting it to False is equivalent to freezing the layer.
Default value “True”.
lda_coeff : float, defines the coefficient by which the loss derivative is multiplied before being sent to the previous layer (since the backward run goes through the graph in reverse order).
Default value “1”.

in/out param :

input_shape : integer array, shape of the input (not including the batch axis). NB: to be used only if it is the first layer of the model.
output_behavior : enum, set whether the layer is an output layer.
Default value “Not Output”.

name (optional) : string, name of the layer.
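Because HAIBAL layers are configured through this LabVIEW parameter cluster rather than through text code, there is no textual API to quote here. As a rough point of comparison only, the cluster above maps onto the Keras Conv1DTranspose layer and Adam optimizer approximately as follows; the Python/Keras names are an analogy chosen for illustration, not part of HAIBAL:

# Keras analogue of the HAIBAL Conv1DTranspose parameter cluster (illustration only;
# HAIBAL itself is configured from the LabVIEW cluster, not from Python).
import tensorflow as tf

layer = tf.keras.layers.Conv1DTranspose(
    filters=3,                            # n_filters: dimensionality of the output space
    kernel_size=3,                        # size: length of the 1D convolution window
    strides=1,                            # stride: stride length of the convolution
    padding="valid",                      # padding: False -> "valid", True -> "same"
    data_format="channels_last",          # HAIBAL defaults to channels_first; channels_last
                                          # is used here because it is the Keras default
    activation="relu",                    # activation function
    use_bias=True,                        # use_bias?: whether a bias vector is added
    kernel_initializer="glorot_uniform",  # filter_initializer
    bias_initializer="zeros",             # bias_initializer
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,  # learning_rate
    beta_1=0.9,           # beta_1: decay rate for the 1st moment estimates
    beta_2=0.999,         # beta_2: decay rate for the 2nd moment estimates
)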

 

Output parameters

 

Graph out : model architecture.

Dimension

Input shape

3-dimension tensor with shape [batch_size, channel, width] (default “channels_first” parameter).
In case of a “channels_last” setup, the forward function expects an input of shape [batch_size, width, channel].

 

Output shape

3-dimension tensor with shape [batch_size, n_filters, new_width] (default “channels_first” parameter).
In case of a “channels_last” setup, the forward function returns an output of shape [batch_size, new_width, n_filters].
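The new_width used in the examples below is assumed to follow the standard transposed-convolution length formula (as in Keras); this is an assumption for illustration, not a formula quoted from the HAIBAL documentation. A minimal sketch:

def conv1d_transpose_output_width(width, size, stride, same_padding=False):
    # Assumed standard transposed-convolution length formula.
    if same_padding:                       # padding = True ("same")
        return width * stride
    return (width - 1) * stride + size     # padding = False ("valid", default)

# With the default parameters (size = 3, stride = 1, padding = False),
# an input of width 128 gives:
print(conv1d_transpose_output_width(128, size=3, stride=1))                     # 130
print(conv1d_transpose_output_width(128, size=3, stride=1, same_padding=True))  # 128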

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the HAIBAL library to run it).

Convolution 1D Transpose layer with explicit input layer

1 – Generate a set of data

We generate an array of data of type single with shape [batch_size, channel, width] (channels first, the default layer configuration).
In case of a channels-last layer configuration, the shape is [batch_size, width, channel].

2 – Define graph

First, we define the first layer of the graph, which is an Input layer (explicit input layer method). This layer is set up with an input shape of [channel = 5, width = 128].
Then we add the Conv1DTranspose layer to the graph.

3 – Run graph

We call the forward method and retrieve the result with the “Prediction 3D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the shape of the output layer) and the second is the prediction, with a shape of [batch_size, filter, new_width].
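Since the snippet itself is a LabVIEW PNG, here is a hedged Python/Keras sketch of the same three steps (generate data, define the graph with an explicit input layer, run the forward pass). The shapes mirror the snippet, but the textual API is a Keras stand-in, not the HAIBAL VIs, and channels-last ordering is used because it is the Keras default:

# Keras stand-in for the explicit-input-layer snippet (not the HAIBAL VIs).
import numpy as np
import tensorflow as tf

# 1 - Generate data: single precision, [batch_size, width, channel] in channels-last terms
batch_size, channel, width = 10, 5, 128
x = np.random.rand(batch_size, width, channel).astype(np.float32)

# 2 - Define graph: explicit Input layer, then the Conv1DTranspose layer
inputs = tf.keras.Input(shape=(width, channel))
outputs = tf.keras.layers.Conv1DTranspose(filters=3, kernel_size=3, strides=1)(inputs)
model = tf.keras.Model(inputs, outputs)

# 3 - Run graph: forward pass; the prediction has shape [batch_size, new_width, filters]
y = model.predict(x, verbose=0)
print(y.shape)  # (10, 130, 3)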

 

Convolution 1D Transpose layer with implicit input layer

1 – Generate a set of data

We generate an array of data of type single with shape [batch_size, channel, width] (channels first, the default layer configuration).
In case of a channels-last layer configuration, the shape is [batch_size, width, channel].

2 – Define graph

First, we define the Conv1DTranspose layer as the input layer of the graph (implicit input layer method). To do this, we send an array of shape [channel = 5, width = 128] in the “input_shape” variable of the “in/out param” cluster.
An input layer is implicitly created, and its name is the name of its parent layer prefixed with “input_”.
Then we add the Conv1DTranspose layer to the graph.

3 – Run graph

We call the forward method and retrieve the result with the “Prediction 3D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the shape of the output layer) and the second is the prediction, with a shape of [batch_size, filter, new_width].
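Again as a hedged Keras stand-in, the implicit-input variant can be sketched by declaring the input shape on the layer stack itself instead of wiring an explicit input layer (the Keras calls are an analogy, not HAIBAL functions):

# Keras stand-in for the implicit-input-layer snippet (not the HAIBAL VIs).
import numpy as np
import tensorflow as tf

# 1 - Generate data: [batch_size, width, channel] in channels-last terms
x = np.random.rand(10, 128, 5).astype(np.float32)

# 2 - Define graph: only the Conv1DTranspose layer is declared; the input layer is
#     created implicitly from the shape attached to it (mirroring the "input_shape"
#     field of the "in/out param" cluster)
model = tf.keras.Sequential([
    tf.keras.layers.Conv1DTranspose(filters=3, kernel_size=3, strides=1),
])
model.build(input_shape=(None, 128, 5))

# 3 - Run graph
y = model.predict(x, verbose=0)
print(y.shape)  # (10, 130, 3)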

 

Convolution 1D Transpose layer, batch and dimension

1 – Generate a set of data

We generate an array of data of type single with shape [number of batch = 9, batch_size = 10, channel = 5, width = 128] (channels first, the default layer configuration).
In case of a channels-last layer configuration, the shape is [number of batch, batch_size, width, channel].

2 – Define graph

First, we define the first layer of the graph, which is an Input layer (explicit input layer method). This layer is set up with an input shape of [channel = 5, width = 128].
Then we add the Conv1DTranspose layer to the graph.

3 – Run graph

We call the forward method and retrieve the result with the “Prediction 3D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the shape of the output layer) and the second is the prediction, with a shape of [batch_size, filter, new_width].
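As a hedged Keras stand-in for this variant, the extra leading “number of batch” axis can be handled by looping over the batches and running one forward pass per batch (again an analogy, not the HAIBAL VIs):

# Keras stand-in for the batch-and-dimension snippet (not the HAIBAL VIs).
import numpy as np
import tensorflow as tf

# 1 - Generate data: [number of batch = 9, batch_size = 10, width = 128, channel = 5]
#     in channels-last terms
data = np.random.rand(9, 10, 128, 5).astype(np.float32)

# 2 - Define graph: explicit Input layer followed by the Conv1DTranspose layer
inputs = tf.keras.Input(shape=(128, 5))
outputs = tf.keras.layers.Conv1DTranspose(filters=3, kernel_size=3, strides=1)(inputs)
model = tf.keras.Model(inputs, outputs)

# 3 - Run graph: one forward pass per batch
for batch in data:
    y = model.predict(batch, verbose=0)
    print(y.shape)  # (10, 130, 3) for each of the 9 batches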

 
