SequenceMap

Description

Applies a sub-graph to each sample in the input sequence(s).

Inputs can be either tensors or sequences, except for the first input, which must be a sequence. The length of the first input sequence determines the number of samples in the outputs. Any other sequence inputs must have the same number of samples. The number of inputs and outputs must match the number of inputs and outputs of the sub-graph.

For the i-th element of each output, a sample is extracted from the input sequence(s) at position i and the sub-graph is applied to it. The outputs contain the sub-graph's outputs for each sample, in the same order as in the input.

This operator assumes that processing each sample is independent, so samples may be processed in parallel or in any order. Users should not rely on any specific order in which the sub-graph invocations are computed.
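The mapping behavior described above can be illustrated with a pure-Python sketch. This is not the node's actual implementation (which runs an ONNX sub-graph); it only models how samples are extracted from sequence inputs, how plain tensor inputs are passed unchanged to every call, and how per-sample results are collected into output sequences. The function and parameter names are illustrative.

```python
# Illustrative sketch of SequenceMap semantics (not the ONNX runtime itself).
def sequence_map(body, input_sequence, *additional_inputs):
    """Apply `body` to each sample of `input_sequence`.

    `additional_inputs` may be sequences (here modeled as lists, iterated
    per sample; they must have the same length as `input_sequence`) or
    plain tensors (passed unchanged to every call of `body`).
    """
    outputs = None
    for i in range(len(input_sequence)):
        # Extract the i-th sample from each sequence input; pass tensors as-is.
        args = [input_sequence[i]]
        for extra in additional_inputs:
            args.append(extra[i] if isinstance(extra, list) else extra)
        result = body(*args)
        # The sub-graph may produce one output or several.
        if not isinstance(result, tuple):
            result = (result,)
        if outputs is None:
            outputs = [[] for _ in result]
        # Append each sub-graph output to the matching output sequence,
        # preserving the input order.
        for out_seq, value in zip(outputs, result):
            out_seq.append(value)
    return outputs
```

For example, mapping a body that scales each sample by a shared tensor input, `sequence_map(lambda x, s: x * s, [1, 2, 3], 2)`, yields one output sequence `[[2, 4, 6]]`; a two-output body produces two output sequences of the same length.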

Input parameters

specified_outputs_name : array, lets you manually assign custom names to the output tensors of the node.

 Graphs in : cluster, ONNX model architecture.

input_sequence (heterogeneous) – S : object, input sequence.
additional_inputs (variadic) – V : object, additional inputs to the graph.

 Parameters : cluster,

body : object, the graph to be run for each sample in the sequence(s). It must have the same number of inputs and outputs as the SequenceMap node.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, the coefficient by which the loss derivative is multiplied before being sent to the previous layer (the backward pass propagates in reverse layer order).
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

out_sequence (variadic) – S : object, output sequence(s).

Type Constraints

S in (seq(tensor(bool)), seq(tensor(complex128)), seq(tensor(complex64)), seq(tensor(double)), seq(tensor(float)), seq(tensor(float16)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(int8)), seq(tensor(string)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(uint8))) : Constrain input types to any sequence type.

V in (seq(tensor(bool)), seq(tensor(complex128)), seq(tensor(complex64)), seq(tensor(double)), seq(tensor(float)), seq(tensor(float16)), seq(tensor(int16)), seq(tensor(int32)), seq(tensor(int64)), seq(tensor(int8)), seq(tensor(string)), seq(tensor(uint16)), seq(tensor(uint32)), seq(tensor(uint64)), seq(tensor(uint8)), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8)) : Constrain to any tensor or sequence type.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).