QOrderedLongformerAttention
Description
Quantized version of Longformer self-attention, using int8 with a specific matrix layout.
Input parameters
specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.
Graphs in : cluster, ONNX model architecture.
input (heterogeneous) – Q : object, 3D input tensor with shape (batch_size, sequence_length, hidden_size), hidden_size = num_heads * head_size.
scale_input (heterogeneous) – S : object, scale of the input.
weight (heterogeneous) – Q : object, 2D input tensor with shape (hidden_size, 3 * hidden_size).
scale_weight (heterogeneous) – S : object, scale of the weight.
bias (heterogeneous) – S : object, 1D input tensor with shape (3 * hidden_size), fp32 only currently.
scale_bias (heterogeneous) – S : object, reserved (not used, since adding the bias requires a float value in cublasLt for normal order).
scale_qkv_gemm (heterogeneous) – S : object, scale of the output for fused kqv gemm.
mask (heterogeneous) – F : object, attention mask with shape (batch_size, sequence_length).
global_weight (heterogeneous) – Q : object, 2D input tensor with shape (hidden_size, 3 * hidden_size).
scale_global_weight (heterogeneous) – S : object, scale of the global_weight.
global_bias (heterogeneous) – S : object, 1D input tensor with shape (3 * hidden_size).
scale_global_gemm (heterogeneous) – S : object, scale of the global gemm.
global (heterogeneous) – G : object, global attention flags with shape (batch_size, sequence_length).
scale_output (heterogeneous) – S : object, scale of the output.
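Most of the inputs above come in (Q, S) pairs: an int8 tensor together with its float32 scale. The following is a minimal Python sketch (illustrative names and shapes, not the toolkit API) of symmetric per-tensor quantization, showing how such a pair is produced and how the shapes relate through hidden_size = num_heads * head_size.

```python
import numpy as np

batch_size, sequence_length, num_heads, head_size = 1, 512, 4, 16
hidden_size = num_heads * head_size  # hidden_size = num_heads * head_size

def quantize_per_tensor(x: np.ndarray):
    """Symmetric per-tensor quantization: fp32 -> (int8 tensor, fp32 scale)."""
    scale = np.float32(np.abs(x).max() / 127.0)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# 'input' (Q) with its 'scale_input' (S): shape (batch_size, sequence_length, hidden_size)
x_fp32 = np.random.randn(batch_size, sequence_length, hidden_size).astype(np.float32)
x_int8, x_scale = quantize_per_tensor(x_fp32)

# 'weight' (Q) with its 'scale_weight' (S): shape (hidden_size, 3 * hidden_size)
w_fp32 = np.random.randn(hidden_size, 3 * hidden_size).astype(np.float32)
w_int8, w_scale = quantize_per_tensor(w_fp32)
```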

Parameters : cluster,
num_heads : integer, number of attention heads. Default value “0”.
order_global_weight : integer, cublasLt order of the global weight matrix. Default value “0”.
order_input : integer, cublasLt order of the input matrix. See the schema of QuantizeWithOrder for the order definition. Default value “0”.
order_output : integer, cublasLt order of the global bias. Default value “0”.
order_weight : integer, cublasLt order of the weight matrix. Default value “0”.
window : integer, one-sided attention window length W, or half of the total window length. Default value “0”.
training? : boolean, whether the layer is in training mode (it can store data for the backward pass). Default value “True”.
lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass traverses the layers in reverse). Default value “1”.
name (optional) : string, name of the node.
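For reference, here is a hedged sketch of how the underlying ONNX Runtime contrib operator (domain com.microsoft) could be declared with onnx.helper in Python. Only num_heads, window, and the order_* attributes exist at the ONNX level; training?, lda coeff, and name are handled by the toolkit node itself. The input names and attribute values are illustrative.

```python
from onnx import helper

# Illustrative declaration of the contrib node; the input order follows
# the "Input parameters" list above.
node = helper.make_node(
    "QOrderedLongformerAttention",
    inputs=[
        "input", "scale_input", "weight", "scale_weight",
        "bias", "scale_bias", "scale_qkv_gemm", "mask",
        "global_weight", "scale_global_weight", "global_bias",
        "scale_global_gemm", "global", "scale_output",
    ],
    outputs=["output"],
    domain="com.microsoft",  # contrib-operator domain
    num_heads=4,             # number of attention heads
    window=64,               # one-sided attention window length W
    order_input=1,           # cublasLt order of the input matrix
    order_weight=0,          # cublasLt order of the weight matrix
    order_global_weight=0,   # cublasLt order of the global weight matrix
    order_output=1,          # cublasLt order of the output
)
```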

Output parameters
output (heterogeneous) – Q : object, 3D output tensor with shape (batch_size, sequence_length, hidden_size).
Type Constraints
Q in (tensor(int8)) : Constrain input and output types to int8 tensors.
S in (tensor(float)) : Constrain scales to float32 tensors.
G in (tensor(int32)) : Constrain global attention flags to int32 tensors.
F in (tensor(float16)) : The mask is float16, to stay compatible with the float version of the operator.
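As a quick reference, a small sketch mapping each constraint letter to the NumPy dtype its tensors must carry:

```python
import numpy as np

# Dtype expected for each type-constraint group above.
dtype_by_constraint = {
    "Q": np.int8,     # input, weight, global_weight, output
    "S": np.float32,  # bias, global_bias, and all scale_* inputs
    "G": np.int32,    # global attention flags
    "F": np.float16,  # attention mask
}
```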