MatMulFpQ4

Description

Matrix product in which the right-hand matrix is a pre-packed, int4-quantized data blob. During quantization, the matrix is divided into blocks, where each block is a contiguous subset within a column. Each block is quantized into a sequence of 4-bit integers with a scaling factor and an optional offset. Currently three quantization types are supported: (0) block size 32, no offset; (1) block size 32, with offset; (2) block size 64, no offset.
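
To make the block-quantization scheme concrete, here is a minimal NumPy sketch of quantizing and dequantizing a single 32-element column block. The function names and the exact rounding/clipping choices are illustrative assumptions, and the real blob additionally packs two 4-bit codes per byte, which is skipped here.

```python
import numpy as np

def quantize_block(block, with_offset):
    """Quantize one contiguous column block to 4-bit codes in [0, 15].

    Returns (q, scale, zero_point); zero_point is None when the
    quantization type has no offset. Illustrative sketch only --
    the real blob also packs two 4-bit codes per byte, skipped here.
    """
    if with_offset:
        # Asymmetric: map [min, max] onto the 16 int4 levels.
        lo, hi = float(block.min()), float(block.max())
        scale = (hi - lo) / 15.0 if hi > lo else 1.0
        zero_point = int(np.clip(round(-lo / scale), 0, 15))
        q = np.clip(np.round(block / scale) + zero_point, 0, 15).astype(np.uint8)
    else:
        # Symmetric: map [-amax, amax] onto [-8, 7], stored biased by +8.
        amax = float(np.abs(block).max())
        scale = amax / 7.0 if amax > 0 else 1.0
        zero_point = None
        q = (np.clip(np.round(block / scale), -8, 7) + 8).astype(np.uint8)
    return q, scale, zero_point

def dequantize_block(q, scale, zero_point):
    """Recover approximate float32 values from one quantized block."""
    bias = 8.0 if zero_point is None else float(zero_point)
    return (q.astype(np.float32) - bias) * np.float32(scale)

# Quantization type 0: block size 32, no offset.
block = np.random.randn(32).astype(np.float32)
q, scale, zp = quantize_block(block, with_offset=False)
err = np.abs(block - dequantize_block(q, scale, zp)).max()
print(f"max reconstruction error: {err:.4f}")
```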

Input parameters

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

A (heterogeneous) – T1 : object, N-dimensional matrix A.
B (heterogeneous) – T2 : object, 1-dimensional data blob.
B_shape (heterogeneous) – T3 : object, shape information of B.

 Parameters : cluster,

blk_quant_type : enum, quantization type.
Default value “block size 32, no offset”.
 training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer (the backward pass proceeds from the last layer to the first).
Default value “1”.

 name (optional) : string, name of the node.
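
For readers who manipulate the underlying ONNX graph directly rather than through the LabVIEW node, the following hedged sketch shows how this operator could be declared with the inputs and attribute listed above, using the onnx Python helpers. MatMulFpQ4 is a contrib operator in ONNX Runtime's com.microsoft domain; the symbolic shapes below are illustrative assumptions.

```python
from onnx import helper, TensorProto

# Hypothetical shapes: A is (M, K) float32; B is the pre-packed uint8 blob
# whose original (unpacked) shape is carried separately in B_shape.
node = helper.make_node(
    "MatMulFpQ4",
    inputs=["A", "B", "B_shape"],
    outputs=["Y"],
    domain="com.microsoft",   # contrib-op domain, not ai.onnx
    blk_quant_type=0,         # 0 = block size 32, no offset (the default)
)

graph = helper.make_graph(
    nodes=[node],
    name="matmul_fp_q4_example",
    inputs=[
        helper.make_tensor_value_info("A", TensorProto.FLOAT, ["M", "K"]),
        helper.make_tensor_value_info("B", TensorProto.UINT8, ["packed_size"]),
        helper.make_tensor_value_info("B_shape", TensorProto.INT64, [2]),
    ],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, ["M", "N"])],
)
```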

Output parameters

Y (heterogeneous) – T1 : object, matrix multiply results from A * B.
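
Conceptually, Y is what you would get by dequantizing B back to float and performing an ordinary matrix multiply. A reference sketch for quantization type 0 (block size 32, symmetric, no offset) follows; the argument names and the layout of the code/scale arrays are illustrative assumptions rather than the operator's actual packed format.

```python
import numpy as np

def reference_matmul_q4(A, q_cols, scales, block_size=32):
    """Reference semantics of Y = A * B for quantization type 0
    (block size 32, symmetric, no offset): dequantize each column
    block of B, reassemble the float matrix, then matmul.

    q_cols: (K, N) uint8 array of biased int4 codes in [0, 15].
    scales: (K // block_size, N) float32 per-block scales.
    Names and layouts here are illustrative assumptions.
    """
    K, N = q_cols.shape
    # Undo the +8 bias, then apply each block's scale.
    deq = q_cols.astype(np.float32) - 8.0
    deq = deq.reshape(K // block_size, block_size, N) * scales[:, None, :]
    B_float = deq.reshape(K, N)
    return A @ B_float
```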

Type Constraints

T1 in (tensor(float)) : Constrain input matrix data type to single-precision float tensor.

T2 in (tensor(uint8)) : Constrain input B data type to uint8 data blob.

T3 in (tensor(int64)) : Constrain shape of B to be an int64 tensor.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).