
GroupQueryAttention

Description

Group Query Self/Cross Attention.

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

query (heterogeneous) – T : object, query with shape (batch_size, sequence_length, hidden_size), or packed QKV with shape (batch_size, sequence_length, d) where d is (num_heads * head_size + 2 * kv_num_heads * head_size). See the shape sketch after this list.
key (optional, heterogeneous) – T : object, key with shape (batch_size, kv_sequence_length, kv_hidden_size).
value (optional, heterogeneous) – T : object, value with shape (batch_size, kv_sequence_length, kv_hidden_size).
past_key (optional, heterogeneous) – T : object, past state key with support for format BNSH. When past_key uses the same tensor as present_key (KV cache), it is of length max_sequence_length; otherwise it is of length past_sequence_length.
past_value (optional, heterogeneous) – T : object, past state value with support for format BNSH. When past_value uses the same tensor as present_value (KV cache), it is of length max_sequence_length; otherwise it is of length past_sequence_length.
seqlens_k (heterogeneous) – M : object, 1D Tensor of shape (batch_size). Equivalent to (total_sequence_lengths – 1).
total_sequence_length (heterogeneous) – M : object, scalar tensor equivalent to the maximum total sequence length (past + new) of the batch. Used for checking inputs and determining prompt vs token generation case.
cos_cache (optional, heterogeneous) – T : object, 2D tensor with shape (max_sequence_length, head_size / 2).
sin_cache (optional, heterogeneous) – T : object, 2D tensor with shape (max_sequence_length, head_size / 2).
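For the packed-QKV layout, the last dimension d of query bundles the Q, K and V projections. The following minimal Python sketch only illustrates the shape arithmetic described above; the sizes chosen for num_heads, kv_num_heads and head_size are hypothetical.

# Hypothetical sizes, chosen only to illustrate the shape arithmetic.
batch_size = 2
sequence_length = 8
num_heads = 32         # attention heads for q
kv_num_heads = 8       # attention heads shared by k and v
head_size = 64
hidden_size = num_heads * head_size          # 2048
kv_hidden_size = kv_num_heads * head_size    # 512

# Packed QKV last dimension: one Q block plus one K block plus one V block.
d = num_heads * head_size + 2 * kv_num_heads * head_size
print(d)                                     # 2048 + 2 * 512 = 3072

# Shapes of the separate inputs, for comparison.
query_shape = (batch_size, sequence_length, hidden_size)
key_shape = (batch_size, sequence_length, kv_hidden_size)
value_shape = (batch_size, sequence_length, kv_hidden_size)

# Rotary caches, when "do rotary" is enabled, are 2D:
# cos_cache and sin_cache have shape (max_sequence_length, head_size / 2).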

 Parameters : cluster,

do rotary : boolean, whether to use rotary position embedding.
Default value “False”.
kv_num_heads : integer, number of attention heads for k and v (see the grouping sketch below).
Default value “0”.
local_window_size : integer, left_window_size for local attention (like Mistral).
Default value “0”.
num_heads : integer, number of attention heads for q.
Default value “0”.
qk_output : enum, output values of QK matrix multiplication before (1) or after (2) softmax normalization.
Default value “None”.
rotary_interleaved : boolean, rotate using interleaved pattern. 
Default value “False”.
scale : float, custom scale will be used if specified.
Default value “0”.
smooth_softmax : boolean, use a smooth factor in softmax.
Default value “False”.
softcap : float, softcap value for attention weights.
Default value “0”.
training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value “True”.
lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass propagates from this layer back to the previous one).
Default value “1”.

 name (optional) : string, name of the node.
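The ratio between num_heads and kv_num_heads is what makes this grouped-query attention: each group of num_heads / kv_num_heads query heads shares a single key/value head. The NumPy sketch below shows only that grouping for one causal self-attention step, with no KV cache, rotary embedding, local window, smooth softmax or softcap; all sizes are illustrative assumptions, not values prescribed by the node.

import numpy as np

# Illustrative sizes: 8 query heads grouped over 2 key/value heads.
B, S, Hq, Hkv, Dh = 1, 4, 8, 2, 16
scale = 1.0 / np.sqrt(Dh)                    # 1/sqrt(head_size), the customary scaling

rng = np.random.default_rng(0)
q = rng.standard_normal((B, Hq, S, Dh))      # BNSH layout
k = rng.standard_normal((B, Hkv, S, Dh))
v = rng.standard_normal((B, Hkv, S, Dh))

# Grouped-query attention: repeat each K/V head so that every group of
# Hq // Hkv query heads attends to the same key/value head.
group = Hq // Hkv
k_rep = np.repeat(k, group, axis=1)          # (B, Hq, S, Dh)
v_rep = np.repeat(v, group, axis=1)

logits = np.einsum("bhqd,bhkd->bhqk", q, k_rep) * scale

# Causal mask: token i only attends to tokens 0..i.
logits = logits + np.triu(np.full((S, S), -np.inf), k=1)

weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
weights = weights / weights.sum(axis=-1, keepdims=True)

output = np.einsum("bhqk,bhkd->bhqd", weights, v_rep)   # (B, Hq, S, Dh)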

Output parameters

 

 Graphs out : cluster, ONNX model architecture.

output (heterogeneous) – T : object, 3D output tensor with shape (batch_size, sequence_length, hidden_size).
present_key (heterogeneous) – T : object, present state key with support for format BNSH. When past_key uses the same tensor as present_key (KV buffer), it is of length max_sequence_length; otherwise it is of length past_sequence_length + kv_sequence_length (see the sketch after this list).
present_value (heterogeneous) – T : object, present state value with support for format BNSH. When past_value uses the same tensor as present_value (KV buffer), it is of length max_sequence_length; otherwise it is of length past_sequence_length + kv_sequence_length.
output_qk (optional, heterogeneous) – T : object, values of QK matrix multiplication, either before or after softmax normalization.
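In other words, the length of the present key/value depends on whether the past and present tensors share the same buffer. A small illustrative sketch of that bookkeeping (all values hypothetical):

# Hypothetical values.
max_sequence_length = 4096
past_sequence_length = 10
kv_sequence_length = 1            # new tokens processed in this call

shared_kv_buffer = True           # past_key/past_value reuse the present_key/present_value tensors
if shared_kv_buffer:
    present_length = max_sequence_length                         # fixed-size buffer, updated in place
else:
    present_length = past_sequence_length + kv_sequence_length   # cache grows on each call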

Type Constraints

T in (tensor(float), tensor(float16), tensor(bfloat16)) : Constrain input and output to float tensors.

M in (tensor(int32)) : Constrain mask to int tensor.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and get the depicted code added to your VI (do not forget to install the Deep Learning library to run it).
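The node's inputs and attributes mirror those of the com.microsoft GroupQueryAttention contrib operator from ONNX Runtime. For readers who prefer a textual view of the resulting graph, here is a minimal Python sketch that declares an equivalent node with the onnx helper API; it assumes the node maps directly to that contrib operator, and all tensor names, shapes and head counts are illustrative.

import onnx
from onnx import TensorProto, helper

# Illustrative sizes; "batch", "seq", "past_seq" and "total_seq" are symbolic dimensions.
num_heads, kv_num_heads, head_size = 32, 8, 128
hidden_size = num_heads * head_size
kv_hidden_size = kv_num_heads * head_size

node = helper.make_node(
    "GroupQueryAttention",
    inputs=["query", "key", "value", "past_key", "past_value",
            "seqlens_k", "total_sequence_length"],
    outputs=["output", "present_key", "present_value"],
    domain="com.microsoft",
    num_heads=num_heads,
    kv_num_heads=kv_num_heads,
)

f16, i32 = TensorProto.FLOAT16, TensorProto.INT32
inputs = [
    helper.make_tensor_value_info("query", f16, ["batch", "seq", hidden_size]),
    helper.make_tensor_value_info("key", f16, ["batch", "seq", kv_hidden_size]),
    helper.make_tensor_value_info("value", f16, ["batch", "seq", kv_hidden_size]),
    helper.make_tensor_value_info("past_key", f16, ["batch", kv_num_heads, "past_seq", head_size]),
    helper.make_tensor_value_info("past_value", f16, ["batch", kv_num_heads, "past_seq", head_size]),
    helper.make_tensor_value_info("seqlens_k", i32, ["batch"]),
    helper.make_tensor_value_info("total_sequence_length", i32, []),
]
outputs = [
    helper.make_tensor_value_info("output", f16, ["batch", "seq", hidden_size]),
    helper.make_tensor_value_info("present_key", f16, ["batch", kv_num_heads, "total_seq", head_size]),
    helper.make_tensor_value_info("present_value", f16, ["batch", kv_num_heads, "total_seq", head_size]),
]

graph = helper.make_graph([node], "gqa_example", inputs, outputs)
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 17),
                   helper.make_opsetid("com.microsoft", 1)],
)
onnx.save(model, "gqa_example.onnx")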