MicrosoftQuantizeLinear

Description

The linear quantization operator. It consumes a full-precision tensor, a scale, and a zero point to compute the low-precision / quantized tensor. The quantization formula is y = saturate((x / y_scale) + y_zero_point). Saturation clamps to [0, 255] for uint8, [-128, 127] for int8, [0, 65535] for uint16, and [-32768, 32767] for int16. The division (x / y_scale) is rounded to nearest, ties to even; refer to https://en.wikipedia.org/wiki/Rounding for details. Scale and zero point must have the same shape: either a scalar (per-tensor) or a 1-D tensor (per 'axis').
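As a sketch of the formula above (this is not the library's own code; the function name and NumPy usage are illustrative assumptions), per-tensor quantization can be written as:

```python
import numpy as np

def quantize_linear(x, y_scale, y_zero_point=0, dtype=np.uint8):
    # Illustrative sketch of y = saturate(round(x / y_scale) + y_zero_point).
    # np.rint rounds to nearest, ties to even, matching the spec.
    q = np.rint(x / y_scale) + y_zero_point
    info = np.iinfo(dtype)
    # Saturate to the destination type's range, e.g. [0, 255] for uint8.
    return np.clip(q, info.min, info.max).astype(dtype)

x = np.array([-1.0, 0.0, 0.5, 254.0], dtype=np.float32)
y = quantize_linear(x, y_scale=1.0, y_zero_point=0)
# -1.0 saturates to 0; 0.5 rounds to the even value 0; 254.0 stays 254
```

Note that 0.5 quantizes to 0, not 1, because ties round to the nearest even integer.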

Input parameters

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

x (heterogeneous) – T1 : object, N-D full-precision input tensor to be quantized.
y_scale (heterogeneous) – T1 : object, scale for doing quantization to get ‘y’. It can be a scalar, which means per-tensor/layer quantization, or a 1-D tensor for per-axis quantization.
y_zero_point (optional, heterogeneous) – T2 : object, zero point for doing quantization to get ‘y’. Shape must match y_scale. Default is uint8 with zero point of 0 if it’s not specified.

 Parameters : cluster,

axis : integer, the axis along which the same quantization parameters are applied. It's optional. If it's not specified, it means per-tensor quantization and inputs 'y_scale' and 'y_zero_point' must be scalars. If it's specified, it means per-'axis' quantization and inputs 'y_scale' and 'y_zero_point' must be 1-D tensors.
Default value "0".
saturate : boolean, defines how the conversion behaves when an input value is out of range of the destination type. It only applies to float8 quantization (float8e4m3fn, float8e4m3fnuz, float8e5m2, float8e5m2fnuz). It is true by default. All cases are fully described in two tables in the operator description.
Default value “True”.
 training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value "True".
 lda coeff : float, the coefficient by which the loss derivative is multiplied before being sent to the previous layer (since the backward pass runs in reverse).
Default value "1".

 name (optional) : string, name of the node.
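To illustrate the per-'axis' mode described above (again an illustrative NumPy sketch, not the library's code; the function name and reshaping strategy are assumptions), the 1-D scale and zero point are broadcast along the chosen axis:

```python
import numpy as np

def quantize_linear_axis(x, y_scale, y_zero_point, axis=0, dtype=np.uint8):
    # Reshape the 1-D scale / zero point so they broadcast along `axis`.
    shape = [1] * x.ndim
    shape[axis] = -1
    q = np.rint(x / y_scale.reshape(shape)) + y_zero_point.reshape(shape)
    info = np.iinfo(dtype)
    # Saturate to the destination type's range.
    return np.clip(q, info.min, info.max).astype(dtype)

x = np.array([[0.0, 2.0], [4.0, 8.0]], dtype=np.float32)
scale = np.array([1.0, 2.0], dtype=np.float32)   # one scale per row
zp = np.array([0, 10], dtype=np.uint8)           # one zero point per row
y = quantize_linear_axis(x, scale, zp, axis=0)
# row 0 uses scale 1.0 / zero point 0; row 1 uses scale 2.0 / zero point 10
```

Each slice along `axis` gets its own scale and zero point, which is why both must be 1-D tensors of length equal to the size of that axis.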

Output parameters

 

y (heterogeneous) – T2 : object, N-D quantized output tensor. It has the same shape as input 'x'.

Type Constraints

T1 in (tensor(float16), tensor(float)) : Constrain 'x', 'y_scale' to float tensors.

T2 in (tensor(int8), tensor(uint8), tensor(int16), tensor(uint16), tensor(int4), tensor(uint4)) : Constrain 'y_zero_point' and 'y' to 4-bit, 8-bit, and 16-bit integer tensors.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).