
MatMulInteger16

Description

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html. The product of any two input elements MUST never overflow. The accumulation, performed in 32 bits, may overflow.
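As a point of reference, the semantics can be sketched in NumPy (not part of the library; a minimal illustration, assuming 16-bit inputs accumulated in 32 bits):

```python
import numpy as np

# Two int16 matrices; the product of any pair of 16-bit values fits in 32 bits.
A = np.array([[1, -2], [3, 4]], dtype=np.int16)
B = np.array([[5, 6], [7, -8]], dtype=np.int16)

# Cast to int32 before multiplying so accumulation happens in 32 bits,
# mirroring the operator's output type constraint T3.
Y = np.matmul(A.astype(np.int32), B.astype(np.int32))

print(Y.dtype)  # int32
print(Y)
```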


Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

A (heterogeneous) – T1 : object, N-dimensional matrix A.
B (heterogeneous) – T2 : object, N-dimensional matrix B.

 Parameters : cluster,

 training? : boolean, whether the layer is in training mode (it can then store data for the backward pass).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass proceeds in reverse through the layers).
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

 

Y (heterogeneous) – T3 : object, matrix multiply results from A * B.

Type Constraints

T1 in (tensor(int16), tensor(uint16)) : Constrain input A's data type to 16-bit integer tensors.

T2 in (tensor(int16), tensor(uint16)) : Constrain input B's data type to 16-bit integer tensors.

T3 in (tensor(int32), tensor(uint32)) : Constrain output Y's data type to 32-bit integer tensors. T3 must be tensor(uint32) when both T1 and T2 are tensor(uint16), or tensor(int32) when either T1 or T2 is tensor(int16).
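The output-type rule above can be sketched as a small helper (a hypothetical function name, for illustration only; it is not part of the library):

```python
import numpy as np

def matmul_integer16_output_dtype(t1, t2):
    """Hypothetical helper: pick T3 from T1/T2 per the constraints above."""
    allowed = (np.int16, np.uint16)
    if t1 not in allowed or t2 not in allowed:
        raise TypeError("inputs must be int16 or uint16 tensors")
    # tensor(uint32) only when both inputs are unsigned; otherwise tensor(int32).
    if t1 is np.uint16 and t2 is np.uint16:
        return np.uint32
    return np.int32

print(matmul_integer16_output_dtype(np.uint16, np.uint16))
print(matmul_integer16_output_dtype(np.int16, np.uint16))
```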

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).