
QGemm

Description

Quantized Gemm (general matrix multiplication). Computes Y = alpha * A' * B' + C on quantized 8-bit inputs, where A' is A transposed if transA is non-zero (and likewise for B'), and C is an optional int32 bias.
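As a rough illustration of the computation (not the library's actual implementation), the sketch below follows the parameter descriptions on this page: inputs are dequantized via real = scale * (quantized - zero_point), accumulation happens in int32, the optional int32 bias C is assumed to share the accumulator's scale (the stated alpha / beta * a_scale * b_scale with beta = 1), and the output is requantized only when y_scale is provided. The uint8 output range chosen for requantization is an assumption for the example.

```python
import numpy as np

def qgemm(A, a_scale, a_zero_point,
          B, b_scale, b_zero_point,
          C=None, alpha=1.0, transA=False, transB=False,
          y_scale=None, y_zero_point=0):
    """Hypothetical NumPy sketch of the QGemm computation."""
    # Optionally transpose so that A is (M, K) and B is (K, N)
    A = A.T if transA else A
    B = B.T if transB else B
    # Accumulate in int32 on the zero-point-shifted integers
    acc = (A.astype(np.int32) - np.int32(a_zero_point)) @ \
          (B.astype(np.int32) - np.int32(b_zero_point))
    if C is not None:
        acc = acc + C  # assumption: C (int32) shares the accumulator's scale
    # Map back to real values; b_scale may be a scalar or a per-column 1-D tensor
    Y = alpha * a_scale * np.asarray(b_scale) * acc
    if y_scale is None:
        return Y.astype(np.float32)  # full-precision (float32) output
    # Requantize the output (uint8 range assumed for this sketch)
    q = np.rint(Y / y_scale) + y_zero_point
    return np.clip(q, 0, 255).astype(np.uint8)
```

For example, multiplying a uint8 matrix by an identity-like matrix with unit scales and zero zero-points returns the original values as float32.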

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

 A (heterogeneous) – TA : object, input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is non-zero.
a_scale (heterogeneous) – T : object, scale of quantized input ‘A’. It is a scalar, which means a per-tensor quantization.
a_zero_point (heterogeneous) – TA : object, zero point tensor for input ‘A’. It is a scalar.
B (heterogeneous) – TB : object, input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is non-zero.
b_scale (heterogeneous) – T : object, scale of quantized input ‘B’. It could be a scalar or a 1-D tensor, which means a per-tensor or per-column quantization. If it’s a 1-D tensor, its number of elements should be equal to the number of columns of input ‘B’.
b_zero_point (heterogeneous) – TB : object, zero point tensor for input ‘B’. It is optional; the default value is 0. It could be a scalar or a 1-D tensor, which means a per-tensor or per-column quantization. If it is a 1-D tensor, its number of elements should be equal to the number of columns of input ‘B’.
C (optional, heterogeneous) – TC : object, optional input tensor C. If not specified, the computation is done as if C is a scalar 0. The shape of C should be unidirectional broadcastable to (M, N). Its type is int32_t and must be quantized with zero_point = 0 and scale = alpha / beta * a_scale * b_scale.
y_scale (optional, heterogeneous) – T : object, scale of output ‘Y’. It is a scalar, which means a per-tensor quantization. It is optional; if it is not provided, the output is full precision (float32), otherwise the output is quantized.
y_zero_point (optional, heterogeneous) – TYZ : object, zero point tensor for output ‘Y’. It is a scalar, which means a per-tensor quantization. It is optional; if it is not provided, the output is full precision (float32), otherwise the output is quantized.
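The per-tensor versus per-column distinction described above for b_scale can be sketched with NumPy broadcasting; the values below are hypothetical and only illustrate the rule real = scale * (quantized - zero_point):

```python
import numpy as np

# B has shape (K, N) = (2, 2); zero point is 0 in this example
B = np.array([[10, 20],
              [30, 40]], dtype=np.uint8)
b_zero_point = 0

per_tensor_scale = 0.1                    # one scale for the whole tensor
per_column_scale = np.array([0.1, 0.5])   # one scale per column: N elements

# Dequantize: real = scale * (quantized - zero_point)
B_real_tensor = per_tensor_scale * (B.astype(np.int32) - b_zero_point)
B_real_column = per_column_scale * (B.astype(np.int32) - b_zero_point)
```

With the per-column scale, each column of B is dequantized with its own factor (column 0 by 0.1, column 1 by 0.5), which is why a 1-D b_scale must have exactly as many elements as B has columns.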

 Parameters : cluster,

alpha : float, scalar multiplier for the product of input tensors A * B.
Default value “1”.
 transA : boolean, whether A should be transposed.
Default value “False”.
transB : boolean, whether B should be transposed.
Default value “False”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

 

 Y (heterogeneous) – TY : object, output tensor of shape (M, N).
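When y_scale and y_zero_point are supplied, the float result is requantized to 8 bits; a minimal sketch of that step (int8 range and the round-then-clip order are assumptions of this example, not a statement about the library's internals):

```python
import numpy as np

def requantize_int8(Y_real, y_scale, y_zero_point):
    """Hypothetical sketch: map a real-valued result back to int8."""
    # Scale down, round to nearest integer, shift by the zero point, saturate
    q = np.rint(Y_real / y_scale) + y_zero_point
    return np.clip(q, -128, 127).astype(np.int8)

Y_q = requantize_int8(np.array([[0.5, -1.0]]), y_scale=0.5, y_zero_point=0)
```

Here 0.5 / 0.5 rounds to 1 and -1.0 / 0.5 to -2, so Y_q holds [[1, -2]]; if y_scale is omitted, the node instead returns Y in full precision (float32) as described above.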

Type Constraints

T in (tensor(float)) : Constrain scale types to float tensors.

TA in (tensor(uint8), tensor(int8)) : Constrain input A and its zero point types to 8-bit tensors.

TB in (tensor(uint8), tensor(int8)) : Constrain input B and its zero point types to 8-bit tensors.

TC in (tensor(int32)) : Constrain input C to 32-bit integer tensors.

TYZ in (tensor(uint8), tensor(int8)) : Constrain output zero point types to 8-bit tensors.

TY in (tensor(float), tensor(uint8), tensor(int8)) : Constrain output type to float32 or 8-bit tensors.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).