MoE

Description

Mixture of Experts (MoE). A router selects, for each token, the top-k experts from a pool and combines their outputs using the routing weights. Examples: the Switch Transformer (https://arxiv.org/pdf/2101.03961.pdf) uses top-1 routing, GLaM (https://arxiv.org/abs/2112.06905) activates the top-2 FFN experts, Vision MoE (https://arxiv.org/pdf/2106.05974.pdf) typically uses top-32 experts, and Mixtral (https://huggingface.co/blog/mixtral) uses top-2 routing over 8 experts.
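To illustrate what top-k routing means (a conceptual sketch in NumPy, not the node's actual implementation): for each token, the k highest-scoring experts are selected from the router's scores, and the selected scores are normalized with a softmax to form routing weights.

```python
import numpy as np

def top_k_routing(router_logits, k):
    """Select the k highest-scoring experts per token (illustrative sketch).

    router_logits: (num_tokens, num_experts) array of raw router scores.
    Returns (indices, weights): per token, the chosen expert indices and
    their softmax-normalized routing weights.
    """
    # Indices of the k largest logits per token (order within the top-k is arbitrary).
    indices = np.argpartition(router_logits, -k, axis=-1)[:, -k:]
    top_logits = np.take_along_axis(router_logits, indices, axis=-1)
    # Softmax over the selected logits only, so the weights sum to 1 per token.
    exp = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    weights = exp / exp.sum(axis=-1, keepdims=True)
    return indices, weights

logits = np.array([[0.1, 2.0, -1.0, 0.5]])
idx, w = top_k_routing(logits, k=2)  # picks experts 1 and 3; weights sum to 1
```

With k=1 this reproduces Switch-Transformer-style routing; with k=2, GLaM/Mixtral-style routing.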

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

input (heterogeneous) – T : object, 2D input tensor with shape (num_tokens, hidden_size) or 3D input tensor with shape (batch_size, sequence_length, hidden_size).
router_probs (heterogeneous) – T : object, 2D input tensor with shape (num_tokens, num_experts).
fc1_experts_weights (heterogeneous) – T : object, 3D input tensor with shape (num_experts, fusion_size * inter_size, hidden_size), where fusion_size is 2 for fused swiglu, and 1 otherwise.
fc1_experts_bias (optional, heterogeneous) – T : object, 2D optional input tensor with shape (num_experts, fusion_size * inter_size).
fc2_experts_weights (heterogeneous) – T : object, 3D input tensor with shape (num_experts, hidden_size, inter_size).
fc2_experts_bias (optional, heterogeneous) – T : object, 2D optional input tensor with shape (num_experts, hidden_size).
fc3_experts_weights (optional, heterogeneous) – T : object, 3D optional input tensor with shape (num_experts, inter_size, hidden_size).
fc3_experts_bias (optional, heterogeneous) – T : object, 2D optional input tensor with shape (num_experts, inter_size).
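The shapes above can be checked against a dense reference computation. The following NumPy sketch (an assumption for illustration, not the node's implementation) runs each selected expert's fc1 -> activation -> fc2 pipeline and sums the results weighted by router_probs; it assumes fusion_size = 1 (no fused swiglu), relu activation, and no fc3.

```python
import numpy as np

def moe_forward(x, router_probs, fc1_w, fc1_b, fc2_w, fc2_b, k=1):
    """Dense reference of the MoE computation (sketch; shapes as in the table above).

    x:            (num_tokens, hidden_size)
    router_probs: (num_tokens, num_experts)
    fc1_w: (num_experts, inter_size, hidden_size), fc1_b: (num_experts, inter_size)
    fc2_w: (num_experts, hidden_size, inter_size), fc2_b: (num_experts, hidden_size)
    """
    num_tokens, _ = x.shape
    out = np.zeros_like(x)
    topk = np.argsort(router_probs, axis=-1)[:, -k:]  # chosen experts per token
    for t in range(num_tokens):
        for e in topk[t]:
            h = np.maximum(x[t] @ fc1_w[e].T + fc1_b[e], 0.0)          # fc1 + relu
            out[t] += router_probs[t, e] * (h @ fc2_w[e].T + fc2_b[e])  # weighted fc2
    return out

rng = np.random.default_rng(0)
num_tokens, hidden, inter, experts = 4, 8, 16, 2
x = rng.standard_normal((num_tokens, hidden))
probs = rng.random((num_tokens, experts))
out = moe_forward(x, probs,
                  rng.standard_normal((experts, inter, hidden)),
                  rng.standard_normal((experts, inter)),
                  rng.standard_normal((experts, hidden, inter)),
                  rng.standard_normal((experts, hidden)))
# out has the same shape as x: (num_tokens, hidden_size)
```

Note that the output keeps the input's shape, which matches the output tensor described below.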

 Parameters : cluster,

activation_type : enum, activation function to use. Choose from relu, gelu, silu, swiglu and identity.
Default value “relu”.
k : integer, number of top experts to select from expert pool.
Default value “0”.
normalize_routing_weights : integer, whether to normalize routing weights.
Default value “0”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being propagated to the previous layer (since the backward pass traverses the layers in reverse order).
Default value “1”.

 name (optional) : string, name of the node.
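The interaction of k and normalize_routing_weights can be sketched as follows (a hypothetical NumPy illustration of the parameter semantics, not the node's code): the top-k routing probabilities are selected per token, and when normalize_routing_weights is nonzero they are rescaled to sum to 1.

```python
import numpy as np

def routing_weights(router_probs, k, normalize_routing_weights):
    """Illustrates the k and normalize_routing_weights parameters (sketch)."""
    # Indices and values of the k largest routing probabilities per token.
    idx = np.argsort(router_probs, axis=-1)[:, -k:]
    w = np.take_along_axis(router_probs, idx, axis=-1)
    if normalize_routing_weights:
        # Rescale the selected weights so each token's weights sum to 1.
        w = w / w.sum(axis=-1, keepdims=True)
    return idx, w

probs = np.array([[0.5, 0.3, 0.2]])
_, w = routing_weights(probs, k=2, normalize_routing_weights=1)
# the two selected weights (0.5 and 0.3) are rescaled to 0.625 and 0.375
```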

Output parameters

 

 output (heterogeneous) – T : object, 2D output tensor with shape (num_tokens, hidden_size) or 3D output tensor with shape (batch_size, sequence_length, hidden_size), matching the shape of input.

Type Constraints

T in (tensor(bfloat16), tensor(float), tensor(float16)) : Constrain input and output types to float tensors.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).