
QMoE

Description

Quantized mixture of experts (MoE).

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

input (heterogeneous) – T : object, 2D tensor with shape (num_tokens, hidden_size), or 3D tensor with shape (batch_size, sequence_length, hidden_size).
router_probs (heterogeneous) – T : object, 2D tensor with shape (num_tokens, num_experts).
fc1_experts_weights (heterogeneous) – T1 : object, 3D tensor with shape (num_experts, fusion_size * inter_size, hidden_size / pack_size). The fusion_size is 2 for fused swiglu, or 1 otherwise. The pack_size is 8 / expert_weight_bits (see the shape sketch after this list).
fc1_scales (heterogeneous) – T2 : object, 2D tensor with shape (num_experts, fusion_size * inter_size), or 3D tensor with shape (num_experts, fusion_size * inter_size, hidden_size / block_size) when block_size is provided.
fc1_experts_bias (optional, heterogeneous) – T : object, 2D optional tensor with shape (num_experts, fusion_size * inter_size).
fc2_experts_weights (heterogeneous) – T1 : object, 3D tensor with shape (num_experts, hidden_size, inter_size / pack_size).
fc2_scales (heterogeneous) – T2 : object, 2D tensor with shape (num_experts, hidden_size), or 3D tensor with shape (num_experts, hidden_size, inter_size / block_size) when block_size is provided.
fc2_experts_bias (optional, heterogeneous) – T : object, 2D optional tensor with shape (num_experts, hidden_size).
fc3_experts_weights (optional, heterogeneous) – T1 : object, 3D optional tensor with shape (num_experts, inter_size, hidden_size / pack_size).
fc3_scales (optional, heterogeneous) – T2 : object, 2D optional tensor with shape (num_experts, inter_size), or 3D optional tensor with shape (num_experts, inter_size, hidden_size / block_size) when block_size is provided.
fc3_experts_bias (optional, heterogeneous) – T : object, 2D optional tensor with shape (num_experts, inter_size).
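
For reference, the shape arithmetic above can be reproduced in a few lines. The following Python sketch uses illustrative sizes (the concrete values are assumptions, not requirements of the node):

# Sketch: expected QMoE tensor shapes (illustrative sizes, not requirements).
num_experts, hidden_size, inter_size = 8, 1024, 2048
expert_weight_bits = 4                   # bits per quantized weight
fusion_size = 2                          # 2 for fused swiglu, 1 otherwise
pack_size = 8 // expert_weight_bits      # quantized weights packed per uint8

fc1_experts_weights_shape = (num_experts, fusion_size * inter_size, hidden_size // pack_size)
fc2_experts_weights_shape = (num_experts, hidden_size, inter_size // pack_size)
print(fc1_experts_weights_shape)         # (8, 4096, 512)
print(fc2_experts_weights_shape)         # (8, 1024, 1024)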

Parameters : cluster.

 activation_alpha : float, alpha parameter used in activation function.
Default value “0”.
 activation_beta : float, beta parameter used in activation function.
Default value “0”.
activation_type : enum, activation function to use. Choose from relu, gelu, silu, swiglu and identity.
Default value “relu”.
block_size : enum, size of each quantization block along the K (input feature) dimension. Must be a power of two and ≥ 16 (e.g., 16, 32, 64, 128). If provided, both hidden_size and inter_size must be divisible by the block size; otherwise there is no blocking and a whole column shares one scaling factor (see the dequantization sketch after this list).
Default value “0”.
expert_weight_bits : integer, number of bits used in quantized weights.
Default value “0”.
k : integer, number of top experts to select from the expert pool (see the routing sketch at the end of this section).
Default value “0”.
 normalize_routing_weights : boolean, whether to normalize routing weights.
Default value “False”.
swiglu_fusion : enum, 0: not fused; 1: fused and interleaved; 2: fused and not interleaved.
Default value “Not Fused”.
swiglu_limit : float, the limit used to clamp inputs in SwiGLU. When no limit is provided, the clamp is disabled (treated as infinite).
Default value “0”.
use_sparse_mixer : boolean, whether to use the sparse mixer.
Default value “False”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer (the backward pass traverses layers in reverse order).
Default value “1”.
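
To make the interaction between expert_weight_bits, pack_size, and block_size concrete, here is a minimal NumPy dequantization sketch for 4-bit weights packed two per uint8 byte. The low-nibble-first packing and the symmetric zero-point of 8 are assumptions for illustration; the actual kernel layout may differ:

import numpy as np

def dequantize_4bit(packed, scales, block_size):
    # packed: (rows, K // 2) uint8 -- two 4-bit weights per byte
    # scales: (rows, K // block_size) -- one scale per block of K features
    q = np.empty((packed.shape[0], packed.shape[1] * 2), dtype=np.int8)
    q[:, 0::2] = (packed & 0x0F).astype(np.int8)   # assumed: low nibble first
    q[:, 1::2] = (packed >> 4).astype(np.int8)
    s = np.repeat(scales, block_size, axis=1)      # one scale per block
    return (q - 8).astype(np.float32) * s          # assumed: zero-point of 8

rows, K, block_size = 4, 64, 16                    # K divisible by block_size
packed = np.random.randint(0, 256, size=(rows, K // 2), dtype=np.uint8)
scales = np.random.rand(rows, K // block_size).astype(np.float32)
weights = dequantize_4bit(packed, scales, block_size)   # (4, 64) float32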

 name (optional) : string, name of the node.
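
The routing controlled by k and normalize_routing_weights follows the general top-k MoE scheme: softmax the router scores, keep the k best experts per token, optionally renormalize their weights, and mix the selected expert outputs. The NumPy sketch below illustrates that scheme; it is not the node's exact kernel:

import numpy as np

def route(router_probs, k, normalize_routing_weights):
    # router_probs: (num_tokens, num_experts) raw router scores
    e = np.exp(router_probs - router_probs.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)        # softmax per token
    top = np.argsort(-probs, axis=-1)[:, :k]         # top-k expert indices
    w = np.take_along_axis(probs, top, axis=-1)      # their routing weights
    if normalize_routing_weights:
        w = w / w.sum(axis=-1, keepdims=True)        # renormalize to sum to 1
    return top, w

ids, weights = route(np.random.randn(5, 8), k=2, normalize_routing_weights=True)
# Each token's output mixes the selected experts:
# sum over j of weights[t, j] * expert(ids[t, j], input[t]),
# which keeps the same shape as the input.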

Output parameters

 

output (heterogeneous) – T : object, output tensor with the same shape as the input.

Type Constraints

T in (tensor(bfloat16), tensor(float), tensor(float16)) : Constrain input and output types to float tensors.

T1 in (tensor(uint8)) : Constrain weights type to uint8 tensors.

T2 in (tensor(bfloat16), tensor(float), tensor(float16)) : Constrain scales type to float tensors.
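
For readers who work with the underlying graph directly, QMoE corresponds to a contrib operator in ONNX Runtime's com.microsoft domain. A node can be sketched with the ONNX helper API as below; the attribute names mirror the parameters documented above and should be treated as an illustration rather than a definitive schema:

from onnx import helper

node = helper.make_node(
    "QMoE",
    inputs=["input", "router_probs",
            "fc1_experts_weights", "fc1_scales", "",   # "" skips optional bias
            "fc2_experts_weights", "fc2_scales"],
    outputs=["output"],
    domain="com.microsoft",
    k=2,
    activation_type="swiglu",
    normalize_routing_weights=1,
    expert_weight_bits=4,
)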

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).