QLinearMatMul

Description

Matrix product that behaves like numpy.matmul. It consumes two quantized input tensors, their scales and zero points, and the scale and zero point of the output, and computes the quantized output. The quantization formula is y = saturate((x / y_scale) + y_zero_point), where (x / y_scale) is rounded to the nearest value, ties to even (see https://en.wikipedia.org/wiki/Rounding for details).

Scale and zero point must have the same shape. They must be either a scalar (per-tensor quantization) or an N-D tensor (per-row quantization for 'a', per-column quantization for 'b'). If the input is 2-D with shape [M, K], the zero point and scale may be an M-element vector [v_1, v_2, …, v_M] for per-row quantization, or a K-element vector [v_1, v_2, …, v_K] for per-column quantization. If the input is an N-D tensor with shape [D1, D2, M, K], the zero point and scale may have shape [D1, D2, M, 1] for per-row quantization, or shape [D1, D2, 1, K] for per-column quantization.

Individual products must never overflow; accumulation may overflow only when it is performed in 32 bits.
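As a concrete illustration, the per-tensor case of the formula above can be sketched in NumPy. This is an illustrative reference only, not the toolkit's implementation; the function and variable names are hypothetical:

```python
import numpy as np

def qlinear_matmul(a, a_scale, a_zp, b, b_scale, b_zp, y_scale, y_zp):
    """Per-tensor QLinearMatMul sketch: dequantize, matmul, requantize."""
    # Dequantize the 8-bit inputs to float.
    a_f = (a.astype(np.int32) - a_zp) * a_scale
    b_f = (b.astype(np.int32) - b_zp) * b_scale
    # Real-valued matrix product.
    y_f = a_f @ b_f
    # Requantize: saturate(round(x / y_scale) + y_zero_point).
    # np.rint rounds to nearest, ties to even, matching the spec.
    y = np.rint(y_f / y_scale) + y_zp
    # Saturate to the uint8 range before casting back.
    return np.clip(y, 0, 255).astype(np.uint8)

# Example with 2x2 uint8 matrices and zero points of 0.
a = np.array([[2, 4], [6, 8]], dtype=np.uint8)
b = np.array([[1, 3], [5, 7]], dtype=np.uint8)
y = qlinear_matmul(a, 0.5, 0, b, 0.25, 0, 0.125, 0)
# y == [[22, 34], [46, 74]], dtype uint8
```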

 

Input parameters

 

specified_outputs_name : array, lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

a (heterogeneous) – T1 : object, N-dimensional quantized matrix a.
a_scale (heterogeneous) – tensor(float) : object, scale of quantized input a.
a_zero_point (heterogeneous) – T1 : object, zero point of quantized input a.
b (heterogeneous) – T2 : object, N-dimensional quantized matrix b.
b_scale (heterogeneous) – tensor(float) : object, scale of quantized input b.
b_zero_point (heterogeneous) – T2 : object, zero point of quantized input b.
y_scale (heterogeneous) – tensor(float) : object, scale of quantized output y.
y_zero_point (heterogeneous) – T3 : object, zero point of quantized output y.

 Parameters : cluster,

 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass proceeds in reverse layer order).
Default value “1”.

 name (optional) : string, name of the node.
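The per-row shape rule described above (a scale of shape [M, 1] for a 2-D input of shape [M, K]) can be checked with NumPy broadcasting. This is a sketch with illustrative values, not part of the toolkit:

```python
import numpy as np

# 2-D input of shape [M, K] with one scale per row, shape [M, 1],
# so the scale broadcasts over the K axis during dequantization.
M, K = 2, 3
a = np.array([[10, 20, 30], [40, 50, 60]], dtype=np.uint8)
a_scale = np.array([[0.1], [0.01]])            # per-row scales, shape [M, 1]
a_zero_point = np.zeros((M, 1), dtype=np.uint8)

a_f = (a.astype(np.int32) - a_zero_point) * a_scale
# a_f == [[1.0, 2.0, 3.0], [0.4, 0.5, 0.6]]
```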

Output parameters

 

y (heterogeneous) – T3 : object, quantized matrix multiply results from a * b.

Type Constraints

T1 in (tensor(int8), tensor(uint8)) : Constrain input a and its zero point data type to 8-bit integer tensor.

T2 in (tensor(int8), tensor(uint8)) : Constrain input b and its zero point data type to 8-bit integer tensor.

T3 in (tensor(int8), tensor(uint8)) : Constrain output y and its zero point data type to 8-bit integer tensor.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).