Loop
Description
Generic looping construct.
This loop has multiple termination conditions, combined as in the runtime sketch below:
- Trip count. An iteration count specified at runtime via the input M. Optional; pass an empty string to omit it. A static trip count (fixed at graph-construction time) can be supplied by wiring a constant node to input M.
- Loop termination condition. An input to the op that determines whether to run the first iteration, and also a loop-carried dependency for the body graph. The body graph must yield a value for the condition variable whether or not this input is provided.
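
These semantics can be summarized in a short Python sketch. Here `body` is a hypothetical plain callable standing in for the body graph, and `run_loop` and its argument names mirror the inputs documented below; they are illustrative, not this library's API:

```python
# Minimal sketch of Loop's runtime semantics. `body` stands in for the
# body graph (hypothetical callable, not this library's API):
#   body(i, cond, *deps) -> (cond, *deps, *scans)
def run_loop(body, M=None, cond=None, v_initial=()):
    deps = list(v_initial)
    scans = None                         # one accumulator per scan output
    keep_going = True if cond is None else bool(cond)
    i = 0
    while keep_going and (M is None or i < M):
        keep_going, *rest = body(i, keep_going, *deps)
        deps, step_scans = rest[:len(deps)], rest[len(deps):]
        if scans is None:
            scans = [[s] for s in step_scans]
        else:
            for acc, s in zip(scans, step_scans):
                acc.append(s)            # concatenated across iterations
        i += 1
    return deps + (scans or [])

# Example: carry a running sum, and also emit it as a scan output.
body = lambda i, cond, total: (True, total + i, total + i)
print(run_loop(body, M=5, v_initial=(0,)))   # [10, [0, 1, 3, 6, 10]]
```

With only cond wired, this behaves like a while-loop; with only M, like a for-loop; with both, like a for-loop with an early-exit condition.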
Input parameters
specified_outputs_name : array, lets you manually assign custom names to the node's output tensors.
Graphs in : cluster, ONNX model architecture.
M (optional, heterogeneous) – I : object, the maximum trip count for the loop, specified at runtime. Optional; pass an empty string to skip.
cond (optional, heterogeneous) – B : object, the boolean termination condition. Optional; pass an empty string to skip.
v_initial (variadic) – V : array, the initial values of any loop-carried dependencies (values that change across loop iterations).

Parameters : cluster
body : object, the graph run on each iteration (see the construction sketch after this list). It has 2+N inputs: (iteration_num, condition, loop-carried dependencies…) and 1+N+K outputs: (condition, loop-carried dependencies…, scan_outputs…). Each scan_output is created by concatenating the value of the specified output at the end of each iteration of the loop. It is an error if the dimensions or data type of these scan_outputs change across loop iterations.
training? : boolean, whether the layer is in training mode (can store data for backward). Default value “True”.
lda coeff : float, the coefficient by which the loss derivative is multiplied before being sent to the previous layer (since the backward pass runs in reverse). Default value “1”.
name (optional) : string, name of the node.
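
Since body is itself a full graph, it can help to see one assembled outside the visual environment. The following is a hedged, self-contained sketch using the standard onnx Python helper and onnxruntime (all names are illustrative, not this library's API): a Loop whose body keeps a running sum of the iteration numbers, computing sum(0..M-1).

```python
import numpy as np
import onnxruntime as ort  # assumed available for the smoke test below
from onnx import TensorProto, helper

# Body graph: (iteration_num, cond_in, sum_in) -> (cond_out, sum_out).
# It adds the iteration number to a running sum and never flips the
# condition, so the trip count M alone ends the loop.
body = helper.make_graph(
    nodes=[
        helper.make_node("Add", ["sum_in", "iter_num"], ["sum_out"]),
        helper.make_node("Identity", ["cond_in"], ["cond_out"]),
    ],
    name="loop_body",
    inputs=[
        helper.make_tensor_value_info("iter_num", TensorProto.INT64, []),
        helper.make_tensor_value_info("cond_in", TensorProto.BOOL, []),
        helper.make_tensor_value_info("sum_in", TensorProto.INT64, []),
    ],
    outputs=[
        helper.make_tensor_value_info("cond_out", TensorProto.BOOL, []),
        helper.make_tensor_value_info("sum_out", TensorProto.INT64, []),
    ],
)

# The Loop node: inputs (M, cond, v_initial...); the single output here is
# the final carried sum (no scan outputs). Pass "" in the inputs list to
# omit the optional M or cond.
loop_node = helper.make_node(
    "Loop", inputs=["M", "cond", "sum_init"], outputs=["sum_final"], body=body
)

graph = helper.make_graph(
    nodes=[loop_node],
    name="sum_loop",
    inputs=[
        helper.make_tensor_value_info("M", TensorProto.INT64, []),
        helper.make_tensor_value_info("cond", TensorProto.BOOL, []),
        helper.make_tensor_value_info("sum_init", TensorProto.INT64, []),
    ],
    outputs=[helper.make_tensor_value_info("sum_final", TensorProto.INT64, [])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])

# Smoke test: sum(0..4) == 10.
sess = ort.InferenceSession(model.SerializeToString())
(result,) = sess.run(None, {
    "M": np.array(5, dtype=np.int64),
    "cond": np.array(True, dtype=np.bool_),
    "sum_init": np.array(0, dtype=np.int64),
})
print(result)  # -> 10
```

The run returns the final values of the N carried dependencies first, followed by any K scan outputs, matching the output ordering described below.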

Output parameters
v_final_and_scan_outputs (variadic) – V : array, the final N loop-carried dependency values followed by the K scan_outputs (for example, with N = 2 dependencies and K = 1 scan output, the array holds the two final dependency values and then the stacked scan output). Scan outputs must be Tensors.
Type Constraints
V in (optional(seq(tensor(bfloat16)))
, optional(seq(tensor(bool)))
, optional(seq(tensor(complex128)))
, optional(seq(tensor(complex64)))
, optional(seq(tensor(double)))
, optional(seq(tensor(float)))
, optional(seq(tensor(float16)))
, optional(seq(tensor(int16)))
, optional(seq(tensor(int32)))
, optional(seq(tensor(int64)))
, optional(seq(tensor(int8)))
, optional(seq(tensor(string)))
, optional(seq(tensor(uint16)))
, optional(seq(tensor(uint32)))
, optional(seq(tensor(uint64)))
, optional(seq(tensor(uint8)))
, optional(tensor(bfloat16))
, optional(tensor(bool))
, optional(tensor(complex128))
, optional(tensor(complex64))
, optional(tensor(double))
, optional(tensor(float))
, optional(tensor(float16))
, optional(tensor(float8e4m3fn))
, optional(tensor(float8e4m3fnuz))
, optional(tensor(float8e5m2))
, optional(tensor(float8e5m2fnuz))
, optional(tensor(int16))
, optional(tensor(int32))
, optional(tensor(int64))
, optional(tensor(int8))
, optional(tensor(string))
, optional(tensor(uint16))
, optional(tensor(uint32))
, optional(tensor(uint64))
, optional(tensor(uint8))
, seq(tensor(bfloat16))
, seq(tensor(bool))
, seq(tensor(complex128))
, seq(tensor(complex64))
, seq(tensor(double))
, seq(tensor(float))
, seq(tensor(float16))
, seq(tensor(float8e4m3fn))
, seq(tensor(float8e4m3fnuz))
, seq(tensor(float8e5m2))
, seq(tensor(float8e5m2fnuz))
, seq(tensor(int16))
, seq(tensor(int32))
, seq(tensor(int64))
, seq(tensor(int8))
, seq(tensor(string))
, seq(tensor(uint16))
, seq(tensor(uint32))
, seq(tensor(uint64))
, seq(tensor(uint8))
, tensor(bfloat16)
, tensor(bool)
, tensor(complex128)
, tensor(complex64)
, tensor(double)
, tensor(float)
, tensor(float16)
, tensor(float8e4m3fn)
, tensor(float8e4m3fnuz)
, tensor(float8e5m2)
, tensor(float8e5m2fnuz)
, tensor(int16)
, tensor(int32)
, tensor(int64)
, tensor(int8)
, tensor(string)
, tensor(uint16)
, tensor(uint32)
, tensor(uint64)
, tensor(uint8)
) : All Tensor, Sequence(Tensor), Optional(Tensor), and Optional(Sequence(Tensor)) types up to IRv9.
I in (tensor(int64)) : tensor of int64, which should be a scalar.
B in (tensor(bool)) : tensor of bool, which should be a scalar.