
TorchEmbedding

Description

Based on the Torch operator Embedding, this node creates a lookup table of fixed-size embedding vectors for a dictionary of fixed size.

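For reference, the lookup this node performs corresponds to the following minimal PyTorch sketch (the dictionary size N = 10 and embedding size M = 4 are illustrative values, not defaults of this node):

import torch
import torch.nn as nn

# Lookup table: dictionary of N = 10 entries, each mapped to an M = 4 vector.
embedding = nn.Embedding(num_embeddings=10, embedding_dim=4)

# Long tensor containing the indices to extract from the embedding matrix.
indices = torch.tensor([1, 0, 7], dtype=torch.int64)

# Each index is replaced by its M-dimensional embedding vector.
vectors = embedding(indices)
print(vectors.shape)  # torch.Size([3, 4])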
 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

weight (heterogeneous) – T : object, the embedding matrix of size N x M. ‘N’ is equal to the maximum possible index + 1, and ‘M’ is equal to the embedding size.
indices (heterogeneous) – tensor(int64) : object, long tensor containing the indices to extract from embedding matrix.
padding_idx (optional, heterogeneous) – tensor(int64) : object, a 0-D scalar tensor. If specified, the entries at `padding_idx` do not contribute to the gradient; therefore, the embedding vector at `padding_idx` is not updated during training, i.e. it remains as a fixed pad.
scale_grad_by_freq (optional, heterogeneous) – tensor(bool) : object, a 0-D bool tensor. If given, gradients are scaled by the inverse of the frequency of the indices (words) in the mini-batch. Default value "False". The behaviour of padding_idx and scale_grad_by_freq is illustrated in the sketch after this parameter list.

 Parameters : cluster,

 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.
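
The effect of padding_idx and scale_grad_by_freq can be reproduced with torch.nn.functional.embedding; the weight matrix and indices below are illustrative values only:

import torch
import torch.nn.functional as F

# Illustrative embedding matrix: N = 5 rows, M = 3 columns.
weight = torch.randn(5, 3, requires_grad=True)
indices = torch.tensor([0, 2, 2, 4], dtype=torch.int64)

# padding_idx=0: row 0 receives no gradient, so its embedding stays a fixed pad.
# scale_grad_by_freq=True: gradients are scaled by the inverse frequency of each
# index in the mini-batch (index 2 appears twice here).
out = F.embedding(indices, weight, padding_idx=0, scale_grad_by_freq=True)
out.sum().backward()
print(weight.grad[0])  # all zeros: the entry at padding_idx is not updated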

Output parameters

 

Y (heterogeneous) – T : object, output tensor of the same type as the input tensor. Shape of the output is * x M, where ‘*’ is the shape of input indices, and ‘M’ is the embedding size.
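
As a sketch of this shape rule (sizes illustrative), indices of shape 2 x 3 with embedding size M = 8 produce an output of shape 2 x 3 x 8:

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=100, embedding_dim=8)

# Indices of shape 2 x 3 play the role of '*' in the shape rule above.
indices = torch.tensor([[4, 7, 1], [0, 2, 9]], dtype=torch.int64)

print(embedding(indices).shape)  # torch.Size([2, 3, 8]) -> * x M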

Type Constraints

T in (tensor(float16), tensor(float), tensor(double), tensor(bfloat16), tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64)) : Constrain input and output types to all numeric tensors.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).