ReduceL1

Description

Computes the L1 norm of the input tensor’s elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are valid. Reduction over an empty set of values yields 0. The above behavior is similar to numpy, with the exception that numpy defaults keepdims to False instead of True.
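The semantics above can be sketched in NumPy (a hypothetical reference helper, not part of the library; the actual node is used as a LabVIEW block):

```python
import numpy as np

def reduce_l1(data, axes=None, keepdims=True):
    """Sketch of ReduceL1 semantics: sum of absolute values along axes.

    axes=None reduces over all axes; keepdims=True keeps the reduced
    dimensions with size 1, matching the default described above.
    """
    axes_t = None if axes is None else tuple(axes)
    return np.sum(np.abs(data), axis=axes_t, keepdims=keepdims)

x = np.array([[1.0, -2.0],
              [3.0, -4.0]])
print(reduce_l1(x))                            # [[10.]] — rank preserved
print(reduce_l1(x, axes=[1], keepdims=False))  # [3. 7.] — axis 1 pruned
```

Note that reducing a rank-zero tensor is valid, and reducing over an empty set of values yields 0, consistent with the description above.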

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

data (heterogeneous) – T : object, an input tensor.
axes (optional, heterogeneous) – tensor(int64) : object, optional list of integers along which to reduce. The default is to reduce over all axes. When axes is empty (either not provided or explicitly empty), the behavior depends on ‘noop_with_empty_axes’: if false, the reduction is performed over all axes; if true, no reduction is applied (but the other operations of the node are still performed). Accepted range is [-r, r-1] where r = rank(data).

 Parameters : cluster,

keepdims : boolean, whether to keep the reduced dimension; if true, the reduced dimension is kept with size 1.
Default value “False”.
noop_with_empty_axes : boolean, defines the behavior when axes is not provided or is empty. If false, the reduction is performed over all axes. If true, no reduction is applied, but the other operations of the node are still performed. For example, ReduceL1 then acts as a vanilla Abs.
Default value “False”.
 training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
 lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer (the backward pass traverses layers in reverse).
Default value “1”.

 name (optional) : string, name of the node.
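The interaction between keepdims and noop_with_empty_axes can be sketched as follows (a hypothetical NumPy helper mirroring the parameter semantics described above, not the library's own API):

```python
import numpy as np

def reduce_l1(data, axes=None, keepdims=False, noop_with_empty_axes=False):
    """Sketch of ReduceL1 with the keepdims / noop_with_empty_axes flags."""
    if axes is None or len(axes) == 0:
        if noop_with_empty_axes:
            # No reduction: ReduceL1 degenerates to element-wise Abs.
            return np.abs(data)
        axes_t = None  # reduce over all axes
    else:
        axes_t = tuple(axes)
    return np.sum(np.abs(data), axis=axes_t, keepdims=keepdims)

x = np.array([[1.0, -2.0],
              [3.0, -4.0]])
# noop: output has the same shape as the input, values are |x|
print(reduce_l1(x, axes=[], noop_with_empty_axes=True))
# empty axes with noop_with_empty_axes=False: full reduction
print(reduce_l1(x))  # 10.0
```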

Output parameters

 

reduced (heterogeneous) – T : object, reduced output tensor.

Type Constraints

T in (tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int32), tensor(int64), tensor(uint32), tensor(uint64)) : Constrain input and output types to numeric tensors.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).