
MulInteger

Description

Performs element-wise binary quantized multiplication, with multidirectional (NumPy-style) broadcasting support. The output of this operator is the int32 accumulated result of the multiplication: C (int32) = (A - A_zero_point) * (B - B_zero_point).
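The semantics above can be sketched in NumPy (an assumed equivalent for illustration, not the library's actual implementation): subtract each zero point, widen to int32, then multiply element-wise with broadcasting.

```python
import numpy as np

def mul_integer(A, B, A_zero_point=0, B_zero_point=0):
    # Widen to int32 before subtracting the zero points so that neither the
    # subtraction nor the product can overflow the 8-bit input type.
    return (A.astype(np.int32) - A_zero_point) * (B.astype(np.int32) - B_zero_point)

A = np.array([[1, 2], [3, 4]], dtype=np.uint8)
B = np.array([10, 20], dtype=np.uint8)   # broadcast across the rows of A
C = mul_integer(A, B, A_zero_point=1, B_zero_point=0)
# C is int32: [[0, 20], [20, 60]]
```

Note that B, a 1-D tensor of shape (2,), is broadcast against the (2, 2) tensor A, matching the multidirectional broadcasting described above.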

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

A (heterogeneous) – T : object, first operand.
A_zero_point (optional, heterogeneous) – T : object, zero point of input A. Defaults to 0 if not specified. It is a scalar, which means per-tensor/per-layer quantization.
B (heterogeneous) – T : object, second operand.
B_zero_point (optional, heterogeneous) – T : object, zero point of input B. Defaults to 0 if not specified. It is a scalar, which means per-tensor/per-layer quantization.

 Parameters : cluster,

 training? : boolean, whether the layer is in training mode (it can then store data for the backward pass).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass traverses the layers in reverse).
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

 

C (heterogeneous) – T1 : object, output tensor, constrained to 32-bit.

Type Constraints

T in (tensor(uint8), tensor(int8)) : Constrain input types to 8-bit signed and unsigned integer tensors.

T1 in (tensor(int32)) : Constrain output types to 32-bit tensors.
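A quick sanity check on why the output must be 32-bit: with 8-bit inputs and zero points, each term (A - A_zero_point) lies in [-255, 255], so the worst-case product magnitude is 255 * 255 = 65025, which exceeds the int16 range but fits comfortably in int32.

```python
# Worst-case magnitude of (A - A_zero_point) * (B - B_zero_point)
# for uint8 inputs with uint8 zero points.
worst = 255 * 255            # 65025
print(worst > 2**15 - 1)     # exceeds int16 range
print(worst < 2**31 - 1)     # fits in int32
```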

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).