MatMulNBits

Description

MatMulNBits performs a matrix multiplication where the right-hand-side matrix (the weights) is quantized to a small number of bits, as specified by the ‘bits’ attribute.

It is a fusion of two operations:

  1. Linear dequantization of the quantized weights using scale and (optionally) zero-point, with the formula: dequantized_weight = (quantized_weight - zero_point) * scale
  2. Matrix multiplication between the input matrix A and the dequantized weight matrix (see the sketch below).
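
For illustration, here is a minimal NumPy sketch of this two-step semantics for a single quantization block covering the whole K dimension (the sizes and variable names are ours, chosen for readability, and are not part of the operator definition):

import numpy as np

K, N = 4, 2                                           # toy feature counts
A = np.random.rand(3, K).astype(np.float32)           # input batch of 3 rows
quantized_weight = np.random.randint(0, 16, (N, K))   # 4-bit integer codes
scale = np.random.rand(N, 1).astype(np.float32)       # one scale per block
zero_point = 8                                        # default: 2^(bits - 1)

# Step 1: linear dequantization of the weights
dequantized_weight = (quantized_weight - zero_point) * scale   # shape (N, K)

# Step 2: matrix multiplication with the dequantized weights
Y = A @ dequantized_weight.T                                   # shape (3, N)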

The weight matrix is a 2D constant matrix with the input feature count and output feature count specified by attributes ‘K’ and ‘N’. It is quantized block-wise along the K dimension with a block size specified by the ‘block_size’ attribute. The block size must be a power of 2 and not smaller than 16 (e.g., 16, 32, 64, 128). Each block has its own scale and zero-point. The quantization is performed using a bit-width specified by the ‘bits’ attribute, which can take values from 2 to 8.

The quantized weights are stored in a bit-packed format along the K dimension, with each block represented by a blob of uint8 values. For example, with 4-bit quantization, each byte holds two consecutive weight codes along K: the first in its lower 4 bits and the second in its upper 4 bits.
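
For the 4-bit case, the packing and unpacking can be sketched as follows (a simplified illustration for a single block; the real operator packs every block of every output column this way):

import numpy as np

def pack_4bit(codes: np.ndarray) -> np.ndarray:
    # First code of each pair goes into the lower 4 bits of a byte,
    # second code into the upper 4 bits.
    lo = codes[0::2] & 0x0F
    hi = codes[1::2] & 0x0F
    return (lo | (hi << 4)).astype(np.uint8)

def unpack_4bit(packed: np.ndarray) -> np.ndarray:
    # Inverse operation: recover the original 4-bit codes.
    codes = np.empty(packed.size * 2, dtype=np.uint8)
    codes[0::2] = packed & 0x0F    # lower nibble first
    codes[1::2] = packed >> 4      # then upper nibble
    return codes

block = np.arange(16, dtype=np.uint8)    # block_size = 16 (the minimum)
assert np.array_equal(unpack_4bit(pack_4bit(block)), block)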

Input parameters


specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

A (heterogeneous) – T1 : object, the input tensor, not quantized.
B (heterogeneous) – T2 : object, packed uint8 tensor of shape (N, k_blocks, blob_size), where k_blocks = ceil(K / block_size) and blob_size = (block_size * bits / 8). The quantized weights are stored in a bit-packed format along the K dimension, packed within each block (the expected shapes are sketched after this list).
scales (heterogeneous) – T1 : object, per-block scaling factors for dequantization with shape (N, k_blocks) and same data type as input A.
zero_points (optional, heterogeneous) – T3 : object, per-block zero point for dequantization. It can be either packed or unpacked: the packed (uint8) format has shape (N, ceil(k_blocks * bits / 8)) and uses the same bit-packing method as input B, while the unpacked format (same type as A) has shape (N, k_blocks). If not provided, a default zero point of 2^(bits - 1) is used (e.g., 8 for 4-bit quantization, 128 for 8-bit).
g_idx (optional, heterogeneous) – T4 : object, group_idx. This input is deprecated.
bias (optional, heterogeneous) – T1 : object, bias to add to result. It should have shape [N].
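
The shape bookkeeping for inputs B, scales, and zero_points can be summarized in a short sketch (the attribute values below are chosen purely for illustration):

import math

K, N, bits, block_size = 768, 1024, 4, 32    # example attribute values

k_blocks = math.ceil(K / block_size)          # blocks per output column
blob_size = block_size * bits // 8            # bytes per packed block

shape_B = (N, k_blocks, blob_size)                         # packed weights
shape_scales = (N, k_blocks)                               # one scale per block
shape_zero_points_packed = (N, math.ceil(k_blocks * bits / 8))
shape_zero_points_unpacked = (N, k_blocks)
default_zero_point = 2 ** (bits - 1)                       # 8 when bits = 4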

 Parameters : cluster,

K : integer, input feature dimension of the weight matrix.
Default value “0”.
N : integer, output feature dimension of the weight matrix.
Default value “0”.
accuracy_level : enum, the minimum accuracy level of input A; can be 0 (unset), 1 (fp32), 2 (fp16), 3 (bf16), or 4 (int8). It controls how input A may be quantized or downcast internally during computation: for example, 0 means input A is never quantized or downcast, while 4 means input A may be quantized internally from type T1 to int8 with the same block_size.
Default value “unset”.
bits : integer, bit-width used to quantize the weights (valid range: 2 to 8).
Default value “0”.
block_size : integer, size of each quantization block along the K (input feature) dimension. Must be a power of two and ≥ 16 (e.g., 16, 32, 64, 128).
Default value “0”.
 training? : boolean, whether the layer is in training mode (it can store data for the backward pass).
Default value “True”.
 lda coeff : float, defines the coefficient by which the loss derivative is multiplied before being propagated to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.

Output parameters


Y (heterogeneous) – T1 : object, tensor. The output tensor has the same rank as the input.

Type Constraints

T1 in (tensor(float), tensor(float16), tensor(bfloat16)) : Constrain input and output types to float tensors.

T2 in (tensor(uint8)) : Constrain quantized weight types to uint8.

T3 in (tensor(uint8), tensor(float), tensor(float16), tensor(bfloat16)) : Constrain quantized zero point types to uint8 or float tensors.

T4 in (tensor(int32)) : Constrain the group index tensor (g_idx) to int32.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).
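
Outside the block diagram, the same node can also be expressed directly in ONNX. The sketch below is our illustration, assuming the onnx Python package and the ONNX Runtime contrib-operator domain com.microsoft in which MatMulNBits is defined; the attribute values are placeholders:

from onnx import helper

node = helper.make_node(
    "MatMulNBits",
    inputs=["A", "B", "scales"],    # zero_points, g_idx and bias are optional
    outputs=["Y"],
    domain="com.microsoft",         # contrib-operator domain
    K=768, N=1024, bits=4, block_size=32,
)
print(node)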