
MultiHeadAttention

Description

Sets up and adds the multi-head attention layer to the model during the graph definition step. Type : polymorphic.

 

Input parameters

 

Graphs in : array, model architecture. Must contain the query, value, and key graphs (key is optional).

 parameters : layer parameters.

num_heads : integer, number of attention heads.
key_dim : integer, size of each attention head for query and key.
value_dim : integer, size of each attention head for value.
 use_bias? : boolean, whether the dense layers use bias vectors/matrices.
Default value “True”.
kernel_initializer : enum, initializer for dense layer kernels.
Default value “GlorotUniform”.
bias_initializer : enum, initializer for dense layer biases.
Default value “Zeros”.
 optimizer :

 algorithm : enum, name of the optimizer used for the optimizer instance.
Default value “adam”.
 learning_rate : float, defines the learning rate to use.
Default value “0.001”.
 beta_1 : float, defines the exponential decay rate for the 1st moment estimates.
Default value “0.9”.
 beta_2 : float, defines the exponential decay rate for the 2nd moment estimates.
Default value “0.999”.

 training? : boolean, whether the layer is in training mode (it can store data for the backward pass).
Default value “True”.
 store? : boolean, whether the layer stores the last iteration gradient (accessible via the “get_gradients” function).
Default value “False”.
 update? : boolean, whether the layer’s variables should be updated during backward. Setting it to false is equivalent to freezing the layer.
Default value “True”.
 lda_coeff : float, defines the coefficient by which the loss derivative is multiplied before being sent to the previous layer (previous in the forward order, since the backward pass runs in reverse).
Default value “1”.

 output_behavior : enum, sets whether the layer is an output layer.
Default value “Not Output”.
name (optional) : string, name of the layer.
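
For reference, the layer and optimizer parameters above map closely to their Keras equivalents. The short Python sketch below is only an illustrative analogy of this configuration (the actual HAIBAL node is configured in LabVIEW); the num_heads, key_dim and value_dim values are arbitrary examples, and the optimizer values are the defaults listed above.

import tensorflow as tf

# Layer parameters (num_heads/key_dim/value_dim are arbitrary example values).
mha = tf.keras.layers.MultiHeadAttention(
    num_heads=4,                          # number of attention heads
    key_dim=16,                           # size of each head for query and key
    value_dim=16,                         # size of each head for value
    use_bias=True,                        # dense layers use bias vectors/matrices
    kernel_initializer="glorot_uniform",  # GlorotUniform
    bias_initializer="zeros",             # Zeros
)

# Optimizer parameters (algorithm, learning_rate, beta_1, beta_2),
# using the default values listed above.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)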

 

Output parameters

 

Graph out : model architecture.

Dimension

Input shape

List of the following tensors:

  • query : Query Tensor of shape [batch_size, Tq, dim].
  • value : Value Tensor of shape [batch_size, Tv, dim].
  • key : Optional key Tensor of shape [batch_size, Tv, dim]. If not given, will use value for both key and value, which is the most common case.

Output shape

Attention outputs of shape [batch_size, Tq, dim].
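
As a quick illustration of these shapes (a Keras analogy, not the HAIBAL API itself): when key is omitted, value is used for both key and value, and the output keeps the query’s time dimension Tq.

import tensorflow as tf

mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=8)
query = tf.random.normal((4, 6, 16))   # [batch_size, Tq, dim]
value = tf.random.normal((4, 9, 16))   # [batch_size, Tv, dim]
out = mha(query, value)                # key omitted -> value is used as key
print(out.shape)                       # (4, 6, 16) = [batch_size, Tq, dim]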

Example

All these examples are PNG snippets; you can drop them onto the block diagram to add the depicted code to your VI (do not forget to install the HAIBAL library to run them).

MultiHeadAttention layer with two identical input layer shapes

1 – Generate a set of data

We generate two arrays of data of type single and shape [batch_size = 10, Tq = Tv = 7, dim = 15] (same input shape).

2 – Define graph

We first define two input layers named “query_input” and “value_input”. These layers are set up as input arrays shaped [Tq = 7, dim = 15] and [Tv = 7, dim = 15].
Finally, we build an array from the two generated graphs and pass it to the input of MultiHeadAttention.

3 – Summarize graph

Returns the summary of the model as a text file.

4 – Run graph

We call the forward method and retrieve the result with the “Prediction 3D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index, and the shape of the output layer), and the second is the prediction, with shape [batch_size, Tq, dim].
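
The Python sketch below mirrors the four steps of this example in Keras (an analogy only; in HAIBAL these steps are LabVIEW snippets). The shapes follow the example; num_heads and key_dim are arbitrary choices.

import numpy as np
import tensorflow as tf

# 1 - Generate a set of data: two single-precision arrays of shape [10, 7, 15].
query_data = np.random.rand(10, 7, 15).astype(np.float32)
value_data = np.random.rand(10, 7, 15).astype(np.float32)

# 2 - Define graph: two input layers feeding a MultiHeadAttention layer.
query_input = tf.keras.Input(shape=(7, 15), name="query_input")
value_input = tf.keras.Input(shape=(7, 15), name="value_input")
attention = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=15)(
    query_input, value_input)
model = tf.keras.Model(inputs=[query_input, value_input], outputs=attention)

# 3 - Summarize graph.
model.summary()

# 4 - Run graph: the prediction has shape [batch_size, Tq, dim] = (10, 7, 15).
prediction = model.predict([query_data, value_data])
print(prediction.shape)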

 

MultiHeadAttention layer with two different input layer shapes

1 – Generate a set of data

We generate two arrays of data of type single, with shapes [batch_size = 10, Tq = 7, dim = 15] and [batch_size = 10, Tv = 3, dim = 15] (different input shapes).
Only the first dimension (Tq or Tv) may differ, because the layer will not accept a different last dimension (dim) between query, value, and key.

2 – Define graph

We first define two input layers named “query_input” and “value_input”. These layers are set up as input arrays shaped [Tq = 7, dim = 15] and [Tv = 3, dim = 15].
Finally, we build an array from the two generated graphs and pass it to the input of MultiHeadAttention.

3 – Summarize graph

Returns the summary of the model as a text file.

4 – Run graph

We call the forward method and retrieve the result with the “Prediction 3D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index, and the shape of the output layer), and the second is the prediction, with shape [batch_size, Tq, dim].
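
A similar Keras-style sketch for the different-shape case (again an analogy, not the HAIBAL API): only Tq and Tv differ, dim must be identical, and the output follows the query.

import numpy as np
import tensorflow as tf

query_data = np.random.rand(10, 7, 15).astype(np.float32)  # [batch_size, Tq = 7, dim = 15]
value_data = np.random.rand(10, 3, 15).astype(np.float32)  # [batch_size, Tv = 3, dim = 15]

query_input = tf.keras.Input(shape=(7, 15), name="query_input")
value_input = tf.keras.Input(shape=(3, 15), name="value_input")
attention = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=15)(
    query_input, value_input)
model = tf.keras.Model(inputs=[query_input, value_input], outputs=attention)

# The output follows the query: shape [batch_size, Tq, dim] = (10, 7, 15).
print(model.predict([query_data, value_data]).shape)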

 
