
DynamicQuantizeLinear

Description

A function that fuses the calculation of the scale, the zero point, and the FP32 -> 8-bit conversion of FP32 input data. It outputs the scale, the zero point, and the quantized input for a given FP32 input. The scale is calculated as (a NumPy sketch follows the notes below):

y_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)

  • where qmax and qmin are the max and min values of the quantization range, i.e. [0, 255] in the case of uint8

  • the data range is adjusted to include 0.
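
A minimal NumPy sketch of this scale computation (compute_scale is an illustrative name, not part of the library):

import numpy as np

def compute_scale(x: np.ndarray, qmin: int = 0, qmax: int = 255) -> float:
    # Adjust the data range to include 0, as noted above.
    x_min = min(0.0, float(x.min()))
    x_max = max(0.0, float(x.max()))
    return (x_max - x_min) / (qmax - qmin)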

 

The zero point is calculated as (a NumPy sketch follows the notes below):

intermediate_zero_point = qmin - min(x)/y_scale
y_zero_point = cast(round(saturate(intermediate_zero_point)))

  • where qmax and qmin are the max and min values of the quantization range, i.e. [0, 255] in the case of uint8

  • for saturation: it saturates to [0, 255] if it’s uint8, or [-127, 127] if it’s int8. Right now only uint8 is supported.

  • rounding is to nearest, ties to even.
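
A matching NumPy sketch of the zero-point computation (compute_zero_point is an illustrative name; this assumes min(x) means the range-adjusted minimum min(0, min(x)), consistent with the scale formula above):

import numpy as np

def compute_zero_point(x: np.ndarray, y_scale: float,
                       qmin: int = 0, qmax: int = 255) -> np.uint8:
    # Range-adjusted minimum, consistent with the scale computation.
    x_min = min(0.0, float(x.min()))
    intermediate = qmin - x_min / y_scale
    # Saturate to [qmin, qmax], then round to nearest, ties to even (np.rint).
    saturated = min(max(intermediate, qmin), qmax)
    return np.uint8(np.rint(saturated))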

 

The data quantization formula is (an end-to-end sketch follows the notes below):

y = saturate(round(x / y_scale) + y_zero_point)

  • for saturation: it saturates to [0, 255] if it’s uint8, or [-127, 127] if it’s int8. Right now only uint8 is supported.

  • rounding is to nearest, ties to even.
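
Putting the three formulas together, a minimal end-to-end sketch (dynamic_quantize_linear is an illustrative name; it assumes x spans a nonzero range so that y_scale is not 0):

import numpy as np

def dynamic_quantize_linear(x: np.ndarray):
    qmin, qmax = 0, 255  # only uint8 is supported
    x_min = min(0.0, float(x.min()))  # range adjusted to include 0
    x_max = max(0.0, float(x.max()))
    y_scale = (x_max - x_min) / (qmax - qmin)
    zp = min(max(qmin - x_min / y_scale, qmin), qmax)  # saturate
    y_zero_point = np.uint8(np.rint(zp))  # round to nearest, ties to even
    # Quantize: scale, round (ties to even), shift, saturate.
    y = np.clip(np.rint(x / y_scale) + y_zero_point, qmin, qmax).astype(np.uint8)
    return y, np.float32(y_scale), y_zero_point

For example, calling dynamic_quantize_linear(np.array([0.0, 2.0, -3.0, 1.5], dtype=np.float32)) returns the quantized tensor together with its scale and zero point.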

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.
x (heterogeneous) – T1 : object, input tensor.

 Parameters : cluster, contains the following fields:

 training? : boolean, whether the layer is in training mode (it can store data for the backward pass).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass traverses the graph in reverse).
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

 Graphs out : cluster, ONNX model architecture.

y (heterogeneous) – T2 : object, quantized output tensor.
y_scale (heterogeneous) – tensor(float) : object, output scale. It’s a scalar, which means a per-tensor/layer quantization.
y_zero_point (heterogeneous) – T2 : object, output zero point. It’s a scalar, which means a per-tensor/layer quantization.

Type Constraints

T1 in (tensor(float)) : Constrain ‘x’ to float tensor.

T2 in (tensor(uint8)) : Constrain ‘y_zero_point’ and ‘y’ to 8-bit unsigned integer tensor.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).