
GemmFloat8

Description

Generic Gemm (general matrix multiplication) for float and float 8 tensors. The node computes activation(alpha * A' * B' + beta * C), where A' and B' are A and B optionally transposed according to transA and transB.

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

A (heterogeneous) – TA : object, input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is non-zero.
B (heterogeneous) – TB : object, input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is non-zero.
C (optional, heterogeneous) – TC : object, input tensor C.
scaleA (optional, heterogeneous) – TS : object, scale of tensor A if A is a float 8 tensor.
scaleB (optional, heterogeneous) – TS : object, scale of tensor B if B is a float 8 tensor.
scaleY (optional, heterogeneous) – TS : object, scale of the output tensor if A or B is a float 8 tensor.

Parameters : cluster.

activation : enum, activation function, RELU or GELU or NONE.
Default value “RELU”.
alpha : float, scalar multiplier for the product of input tensors A * B.
Default value “0”.
beta : float, scalar multiplier for the input bias C.
Default value “0”.
dtype : enum, output type. Same definition as attribute ‘to’ for operator Cast.
Default value “UNDEFINED”.
transA : boolean, whether A should be transposed. Float 8 is only supported with transA=0.
Default value “False”.
transB : boolean, whether B should be transposed. Float 8 is only supported with transB=1.
Default value “False”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
lda coeff : float, coefficient by which the loss derivative is multiplied before it is passed to the previous layer (layers are traversed in reverse order during the backward pass).
Default value “1”.

 name (optional) : string, name of the node.
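
Below is a minimal NumPy sketch of the computation these parameters describe, for reference only; the function name gemm_float8_reference, the required alpha/beta arguments and the plain-float arithmetic are assumptions made for illustration, not the node's actual implementation.

```python
import math
import numpy as np

def gemm_float8_reference(A, B, alpha, beta, C=None,
                          activation="RELU", transA=False, transB=False,
                          dtype=None):
    """Illustrative sketch: Y = activation(alpha * A' @ B' + beta * C)."""
    # Optional transpositions (float 8 inputs require transA=0 and transB=1).
    Aop = A.T if transA else A            # Aop has shape (M, K)
    Bop = B.T if transB else B            # Bop has shape (K, N)

    Y = alpha * (Aop @ Bop)               # matrix product, shape (M, N)
    if C is not None:
        Y = Y + beta * C                  # optional bias term

    # Activation: RELU, GELU or NONE.
    if activation == "RELU":
        Y = np.maximum(Y, 0.0)
    elif activation == "GELU":
        # Exact GELU: 0.5 * y * (1 + erf(y / sqrt(2)))
        Y = 0.5 * Y * (1.0 + np.vectorize(math.erf)(Y / math.sqrt(2.0)))

    # 'dtype' plays the role of the Cast 'to' attribute; None keeps the input type.
    return Y if dtype is None else Y.astype(dtype)
```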

Output parameters

 

 Y (heterogeneous) – TR : object, output tensor of shape (M, N).

Type Constraints

TA in (tensor(float8e4m3fn), tensor(float8e5m2), tensor(float16), tensor(bfloat16), tensor(float)) : Constrain type to input A.

TB in (tensor(float8e4m3fn), tensor(float8e5m2), tensor(float16), tensor(bfloat16), tensor(float)) : Constrain type to input B.

TC in (tensor(float16), tensor(bfloat16), tensor(float)) : Constrain type to input C.

TR in (tensor(float8e4m3fn), tensor(float8e5m2), tensor(float16), tensor(bfloat16), tensor(float)) : Constrain type to result type.

TS in (tensor(float)) : Constrain type for all input scales (scaleA, scaleB, scaleY).
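
The scales are per-tensor float scalars. The short NumPy sketch below shows one common way such scales enter a float 8 Gemm (multiplicative per-tensor scales, as in cuBLAS-style float 8 matrix multiplication); this convention and the float32 stand-ins are assumptions for illustration, since NumPy has no float 8 dtype and the exact behaviour is defined by the node itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# float32 stand-ins for the float 8 storage (NumPy has no float 8 dtype).
A_f8 = rng.standard_normal((4, 8), dtype=np.float32)   # stored A, shape (M, K), transA = 0
B_f8 = rng.standard_normal((6, 8), dtype=np.float32)   # stored B, shape (N, K), transB = 1
scaleA, scaleB, scaleY = np.float32(0.0625), np.float32(0.125), np.float32(1.0)

# Dequantize the inputs with their per-tensor scales ...
A = A_f8 * scaleA
B = B_f8 * scaleB

# ... run the Gemm in higher precision (alpha = 1, no bias, activation NONE) ...
Y = A @ B.T                                             # shape (M, N) = (4, 6)

# ... then apply the output scale before the result is stored in a float 8 type again.
Y_scaled = Y * scaleY
```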

Example

All these examples are PNG snippets: drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).
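
For readers who want to check the numbers outside LabVIEW, the snippet's computation can also be reproduced with the gemm_float8_reference sketch given in the parameter section above (the names and values below are illustrative, not part of the library):

```python
import numpy as np

M, K, N = 4, 16, 8
rng = np.random.default_rng(42)

A = rng.standard_normal((M, K), dtype=np.float32)   # transA = 0 -> A has shape (M, K)
B = rng.standard_normal((N, K), dtype=np.float32)   # transB = 1 -> B has shape (N, K)
C = np.zeros((M, N), dtype=np.float32)              # optional bias

# RELU activation; alpha and beta passed explicitly (see the parameter list above).
Y = gemm_float8_reference(A, B, alpha=1.0, beta=1.0, C=C,
                          activation="RELU", transB=True)
print(Y.shape)   # (M, N) = (4, 8)
```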