
ArgMax

Description

Computes the indices of the maximum elements of the input tensor along the provided axis. The resulting tensor has the same rank as the input if keepdims is true. If keepdims is false, the reduced dimension is pruned from the resulting tensor. If select_last_index is true (default false), the index of the last occurrence of the maximum is selected when the maximum appears more than once in the input; otherwise the index of the first occurrence is selected. The type of the output tensor is integer.
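The behaviour can be illustrated with a short NumPy sketch (this is only an illustration of the semantics, not the LabVIEW node itself; np.argmax has no select_last_index flag, so the last occurrence is emulated by flipping the axis):

import numpy as np

data = np.array([[1, 7, 7],
                 [4, 4, 2]], dtype=np.float32)

# Default behaviour: index of the FIRST occurrence of the maximum along axis 1.
first = np.argmax(data, axis=1)                      # [1 0], integer output

# select_last_index = true: index of the LAST occurrence of the maximum,
# emulated here by flipping the axis and converting the index back.
flipped = np.argmax(np.flip(data, axis=1), axis=1)
last = data.shape[1] - 1 - flipped                   # [2 1]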

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.
data (heterogeneous) – T : object, an input tensor.

parameters : cluster, contains the following settings:

axis : integer, the axis along which to compute the arg indices. Accepted range is [-r, r-1] where r = rank(data).
Default value “0”.
keepdims : boolean, whether to keep the reduced dimension; if true, the reduced dimension is retained (see the sketch after this parameter list).
Default value “False”.
select_last_index : boolean, whether to select the last or the first index when the maximum value appears more than once.
Default value “False”.
training? : boolean, whether the layer is in training mode (it can store data for the backward pass).
Default value “True”.
lda coeff : float, coefficient by which the loss derivative is multiplied before being sent to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.
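As a rough NumPy sketch of the axis and keepdims parameters (again only an illustration, not the LabVIEW node; the keepdims keyword of np.argmax requires NumPy 1.22 or newer):

import numpy as np

data = np.random.rand(2, 3, 4)

# axis accepts the range [-r, r-1]; axis = -1 is equivalent to axis = 2 for a rank-3 tensor.
idx = np.argmax(data, axis=-1)                       # shape (2, 3): reduced axis pruned

# keepdims = true retains the reduced dimension with length 1,
# so the result keeps the same rank as the input.
idx_keep = np.argmax(data, axis=-1, keepdims=True)   # shape (2, 3, 1)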

Output parameters

 

reduced (heterogeneous) – tensor(int64) : object, reduced output tensor with integer data type.

Type Constraints

T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ) : Constrain input and output types to all numeric tensor types.

Example

All these examples are PNG snippets: you can drop a snippet onto your block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).