
Mod

Description

Performs an element-wise binary modulo operation.

The semantics and supported data types depend on the value of the fmod attribute, which must be 0 (the default) or 1.

If the fmod attribute is set to 0, T is constrained to integer data types and the semantics follow those of the Python % operator. The sign of the result is that of the divisor.
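
The following is a minimal NumPy sketch of the fmod = 0 behaviour (an illustration only, not the library's implementation); np.mod follows the same convention as the Python % operator, so the sign of each result matches the divisor:

import numpy as np

A = np.array([-5, 5, -5, 5], dtype=np.int32)   # dividends
B = np.array([ 3, 3, -3, -3], dtype=np.int32)  # divisors

# Element-wise integer modulo; each result takes the sign of its divisor.
print(np.mod(A, B))   # [ 1  2 -2 -1]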

If fmod is set to 1, the behavior of this operator follows that of the fmod function in C, and T is constrained to floating-point data types. The result of this operator is the remainder of the division operation x / y, where x and y are the respective elements of A and B. The result is exactly the value x - n * y, where n is x / y with its fractional part truncated. The returned value has the same sign as x (unless x is -0) and is less than or equal to |y| in magnitude. The following special cases apply when fmod is set to 1 (a short sketch follows the list):

  • If x is -0 and y is greater than zero, either +0 or -0 may be returned.

  • If x is ±∞ and y is not NaN, NaN is returned.

  • If y is ±0 and x is not NaN, NaN should be returned.

  • If y is ±∞ and x is finite, x is returned.

  • If either argument is NaN, NaN is returned.
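
As an illustration of the fmod = 1 behaviour (a sketch only, not the library's implementation), NumPy's np.fmod follows the same C fmod convention, including the sign rule and the NaN special cases listed above:

import numpy as np

x = np.array([-5.0, 5.0, -5.0, np.inf, 5.0])   # dividends
y = np.array([ 3.0, -3.0, -3.0, 3.0,   0.0])   # divisors

# Each finite result keeps the sign of its dividend; an infinite dividend or a
# zero divisor yields NaN, as in the special cases above.
with np.errstate(invalid="ignore", divide="ignore"):
    print(np.fmod(x, y))   # [-2.  2. -2. nan nan]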

This operator supports multidirectional (i.e., NumPy-style) broadcasting; for more details please check Broadcasting in ONNX.
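
A short NumPy sketch of the broadcasting behaviour (illustrative only): the divisor tensor B is broadcast across the rows of A before the element-wise modulo is taken, exactly as the operator does for tensors of compatible shapes:

import numpy as np

A = np.arange(12, dtype=np.int64).reshape(3, 4)   # shape (3, 4)
B = np.array([2, 3, 4, 5], dtype=np.int64)        # shape (4,), broadcast to (3, 4)

print(np.mod(A, B))
# [[0 1 2 3]
#  [0 2 2 2]
#  [0 0 2 1]]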

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

A (heterogeneous) – T : object, dividend tensor.
B (heterogeneous) – T : object, divisor tensor.

 Parameters : cluster,

 fmod : boolean, whether the operator should behave like fmod (false means it performs an integer mod); set this to true to force fmod treatment.
Default value “False”.
 training? : boolean, whether the layer is in training mode (it can store data for the backward pass).
Default value “True”.
 lda coeff : float, defines the coefficient by which the loss derivative is multiplied before being sent to the previous layer during the backward pass.
Default value “1”.

 name (optional) : string, name of the node.

Output parameters

 

 C (heterogeneous) – T : object, remainder tensor.

Type Constraints

T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64),
tensor(int8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8) ) : Constrain input and output types to high-precision numeric tensors.
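
For users who build their graphs in Python rather than on the block diagram, the ONNX Mod node can also be declared with the official onnx package. This is a minimal sketch: the tensor names ("A", "B", "C"), shapes and the graph name are illustrative, and FLOAT is just one of the permitted types listed above:

import onnx
from onnx import helper, TensorProto

# Mod node with C-style fmod semantics (fmod=1); use fmod=0 for integer modulo.
node = helper.make_node("Mod", inputs=["A", "B"], outputs=["C"], fmod=1)

graph = helper.make_graph(
    [node],
    "mod_graph",   # illustrative graph name
    inputs=[
        helper.make_tensor_value_info("A", TensorProto.FLOAT, [3, 4]),
        helper.make_tensor_value_info("B", TensorProto.FLOAT, [3, 4]),
    ],
    outputs=[helper.make_tensor_value_info("C", TensorProto.FLOAT, [3, 4])],
)

model = helper.make_model(graph)
onnx.checker.check_model(model)   # validates the node against the ONNX spec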

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).