
MicrosoftMultiHeadAttention

Description

Multi-Head Self/Cross Attention. The bias from the input projection is included. The key padding mask is optional. When its shape is (batch_size, kv_sequence_length), a value of 0 marks a padding position and 1 a valid one. When the key has right-side padding, the mask shape can be (batch_size): each element is the actual length of the corresponding key sequence, excluding padding.
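
For reference, the computation this node performs can be sketched in a few lines of NumPy. This is a minimal illustration of the plain (non-packed) self/cross-attention case, assuming no mask, no bias, and hidden_size equal to v_hidden_size; it is not the library's implementation:

```python
import numpy as np

def multi_head_attention(query, key, value, num_heads, scale=None):
    # query: (batch, sequence_length, hidden_size)
    # key/value: (batch, kv_sequence_length, hidden_size)
    batch, seq_len, hidden = query.shape
    head_size = hidden // num_heads
    if scale is None:
        scale = 1.0 / np.sqrt(head_size)  # used when the `scale` parameter is left at 0

    def split_heads(x):
        # (batch, seq, hidden) -> (batch, num_heads, seq, head_size)
        b, s, _ = x.shape
        return x.reshape(b, s, num_heads, head_size).transpose(0, 2, 1, 3)

    q, k, v = split_heads(query), split_heads(key), split_heads(value)
    logits = (q @ k.transpose(0, 1, 3, 2)) * scale      # (batch, heads, seq, kv_seq)
    weights = np.exp(logits - logits.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)           # softmax over the key axis
    out = weights @ v                                   # (batch, heads, seq, head_size)
    return out.transpose(0, 2, 1, 3).reshape(batch, seq_len, hidden)
```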

 

Input parameters

 

specified_outputs_name : array, lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

query – T : object, query with shape (batch_size, sequence_length, hidden_size), or packed QKV with shape (batch_size, kv_sequence_length, num_heads, 3, head_size).
key (optional) – T : object, key with shape (batch_size, kv_sequence_length, hidden_size), or packed KV with shape (batch_size, kv_sequence_length, num_heads, 2, head_size), or past_key with shape (batch_size, num_heads, kv_sequence_length, head_size).
value (optional) – T : object, value with shape (batch_size, kv_sequence_length, v_hidden_size), or past_value with shape (batch_size, num_heads, kv_sequence_length, head_size).
bias (optional) – T : object, bias tensor with shape (hidden_size + hidden_size + v_hidden_size) from input projection.
key_padding_mask (optional) – M : object, key padding mask with shape (batch_size), (3 * batch_size + 2), (batch_size, kv_sequence_length), (batch_size, total_sequence_length), or (batch_size, sequence_length, total_sequence_length).
attention_bias (optional) – T : object, bias added to QxK’ with shape (batch_size or 1, num_heads or 1, sequence_length, total_sequence_length).
past_key (optional) – T : object, past state for key with shape (batch_size, num_heads, past_sequence_length, head_size) or (batch_size, num_heads, max_sequence_length, head_size) when buffer sharing is used.
past_value (optional) – T : object, past state for value with shape (batch_size, num_heads, past_sequence_length, head_size) or (batch_size, num_heads, max_sequence_length, head_size) when buffer sharing is used.
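
To make the shape conventions above concrete, the following illustrative NumPy snippet builds a consistent set of inputs for the plain (non-packed) Q/K/V layout; all sizes are arbitrary example values:

```python
import numpy as np

batch_size, sequence_length, kv_sequence_length = 2, 4, 6
num_heads, head_size = 8, 32
hidden_size = v_hidden_size = num_heads * head_size   # 256

query = np.zeros((batch_size, sequence_length, hidden_size), np.float32)
key   = np.zeros((batch_size, kv_sequence_length, hidden_size), np.float32)
value = np.zeros((batch_size, kv_sequence_length, v_hidden_size), np.float32)
bias  = np.zeros((hidden_size + hidden_size + v_hidden_size,), np.float32)

# (batch_size, kv_sequence_length) form of the mask: 1 = valid token, 0 = padding
key_padding_mask = np.ones((batch_size, kv_sequence_length), np.int32)
key_padding_mask[:, -2:] = 0  # the last two key positions are padding
```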

 Parameters : cluster,

mask_filter_value : float, the value written into the attention scores at masked positions.
Default value “-10000”.
num_heads : integer, number of attention heads.
Default value “0”.
scale : float, custom scale used if specified; when left at 0, the default 1/sqrt(head_size) is applied.
Default value “0”.
unidirectional : boolean, whether every token can only attend to previous tokens.
Default value “False”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.
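
These parameters map onto the attributes of the underlying com.microsoft MultiHeadAttention ONNX operator. As an illustrative sketch only (the input, output, and node names below are placeholders, not the exact graph the library emits), such a node could be declared with the onnx helper API:

```python
from onnx import helper

node = helper.make_node(
    "MultiHeadAttention",                  # com.microsoft contrib operator
    inputs=["query", "key", "value", "bias", "key_padding_mask"],
    outputs=["output"],
    domain="com.microsoft",
    name="mha_0",                          # optional `name` parameter
    num_heads=8,                           # `num_heads`
    mask_filter_value=-10000.0,            # `mask_filter_value`
    scale=0.0,                             # `scale` (0 means use 1/sqrt(head_size))
    unidirectional=0,                      # `unidirectional` (1 = causal attention)
)
```

The training-related parameters (training?, lda coeff) belong to the library's backward pass and have no counterpart among the ONNX node's attributes.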

Output parameters

 Graphs out : cluster, ONNX model architecture.

output – T : object, 3D output tensor with shape (batch_size, sequence_length, v_hidden_size).
present_key (optional) – T : object, present state for key with shape (batch_size, num_heads, total_sequence_length, head_size) or (batch_size, num_heads, max_sequence_length, head_size) when buffer sharing is used.
present_value (optional) – T : object, present state for value with shape (batch_size, num_heads, total_sequence_length, head_size) or (batch_size, num_heads, max_sequence_length, head_size) when buffer sharing is used.
qk (optional) – QK : object, normalized Q * K, of shape (batch_size, num_heads, sequence_length, total_sequence_length).
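
As an illustrative shape check (example values only): the present state is the past state concatenated with the newly projected keys or values along the sequence axis, so total_sequence_length = past_sequence_length + kv_sequence_length.

```python
import numpy as np

batch_size, num_heads, head_size = 2, 8, 32
past_sequence_length, kv_sequence_length = 10, 1

past_key = np.zeros((batch_size, num_heads, past_sequence_length, head_size), np.float32)
new_key  = np.zeros((batch_size, num_heads, kv_sequence_length, head_size), np.float32)

# present_key = concat(past_key, new_key) along the sequence axis
present_key = np.concatenate([past_key, new_key], axis=2)
print(present_key.shape)  # (2, 8, 11, 32): total_sequence_length = 11
```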

Type Constraints

T in (tensor(float), tensor(float16)) : Constrain input and output to float tensors.

QK in (tensor(float), tensor(float16)) : Constrain QK output to float32 or float16 tensors, independent of input type or output type.

M in (tensor(int32)) : Constrain mask to integer types.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).