
Create Academic Training Session From File

Description

Initialize an Academic Training Session from an .onnx file. Type : polymorphic.

 

Input parameters

 

Execution Device : enum, selects the hardware device on which the model will run.
ONNX File Path : path, path to the .onnx model file.

Parameters : cluster, groups the configuration sub-clusters described below.

Academic Mode : cluster

independent_loss_model : boolean, if true, splits the model into 3 stages (forward, loss, backward) instead of 2 (forward+loss, backward).
max_norm : float, maximum global gradient norm (enables clipping if > 0; see the sketch below).
norm_type : enum, type of norm used to compute grad_norm (commonly 1 = L1, 2 = L2).
display_norm : boolean, adds grad_norm as a model output when true.
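For reference, the behaviour driven by max_norm, norm_type and display_norm can be illustrated with a short Python sketch. It is only a conceptual illustration (the helper name is made up), not the code the library actually runs:

    import numpy as np

    def clip_global_norm(grads, max_norm, norm_type=2):
        # Global norm of all gradients stacked together (1 = L1, 2 = L2).
        flat = np.concatenate([g.ravel() for g in grads])
        grad_norm = np.linalg.norm(flat, ord=norm_type)
        # When clipping is enabled (max_norm > 0) and the norm is too large,
        # every gradient is scaled down by the same factor.
        if max_norm > 0 and grad_norm > max_norm:
            grads = [g * (max_norm / grad_norm) for g in grads]
        # grad_norm is the value exposed as a model output when display_norm is true.
        return grads, grad_norm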

Sessions Parameters : cluster

intra_op_num_threads : integer, number of threads used within each operator to parallelize computations. If the value is 0, ONNX Runtime automatically uses the number of physical CPU cores.
inter_op_num_threads : integer, number of threads used between operators to execute multiple graph nodes in parallel. This setting only has an effect when execution_mode is ORT_PARALLEL; a value of 0 lets ONNX Runtime select a suitable number of threads (usually equal to the number of cores).
execution_mode : enum, controls whether the graph executes nodes one after another or allows parallel execution when possible. ORT_SEQUENTIAL runs nodes in order, ORT_PARALLEL runs them concurrently.
deterministic_compute : boolean, forces deterministic execution, meaning results will always be identical for the same inputs.
graph_optimization_level : enum, defines how much ONNX Runtime optimizes the computation graph before running the model.
optimized_model_file_path : path, file path where the optimized model is saved after graph analysis.
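These fields mirror the options exposed by onnxruntime.SessionOptions in the ONNX Runtime Python API. A minimal sketch of the equivalent configuration (values are examples; use_deterministic_compute requires a recent ONNX Runtime version):

    import onnxruntime as ort

    so = ort.SessionOptions()
    so.intra_op_num_threads = 0          # 0 = use the number of physical cores
    so.inter_op_num_threads = 0          # only relevant with ORT_PARALLEL
    so.execution_mode = ort.ExecutionMode.ORT_PARALLEL
    so.use_deterministic_compute = True  # identical results for identical inputs
    so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    so.optimized_model_filepath = "optimized_model.onnx"  # where the optimized graph is written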

CUDA Parameters : cluster

device id : integer, selects which GPU to use (0 = first GPU).
algo : enum, controls the algorithm search strategy used for cuDNN convolutions (see the sketch below).
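In the ONNX Runtime Python API these two settings correspond to CUDAExecutionProvider options. A hedged sketch, shown on a plain inference session for brevity (file name and values are examples):

    import onnxruntime as ort

    cuda_options = {
        "device_id": 0,                          # 0 = first GPU
        "cudnn_conv_algo_search": "EXHAUSTIVE",  # or "HEURISTIC" / "DEFAULT"
    }
    session = ort.InferenceSession(
        "model.onnx",
        providers=[("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"],
    )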

Training Parameters : cluster

initializer assign : array, allows you to define the status of each initializer (weight, bias, etc.) in the model; see the sketch after the status list.

index : integer, identifies the initializer in the list.
type : enum, defines its status.

            • Constant : fixed value, not modified during training.
            • Frozen : value included in the model but fixed, not updated.
            • Training : value optimized during training.
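The index is assumed here to follow the order of the initializers stored in the ONNX graph. If you need to find the right indices and names, the onnx Python package can list them (a small sketch; the file name is an example):

    import onnx

    model = onnx.load("model.onnx")
    # Print every initializer with its position, name and shape.
    for index, init in enumerate(model.graph.initializer):
        print(index, init.name, list(init.dims))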

Losses : array, configures the loss function for each model output.

Type : enum, selects the loss type (e.g., MSE, CrossEntropy, etc.). If it is set to CustomLoss, the custom class on the right will be used as the loss function; otherwise, the selected loss will be applied with its default configuration.
CustomLoss : object, a custom loss class instance.

Optimizer : cluster, defines the optimization algorithm used to update the weights.

Enum : enum, choice of standard optimizers (SGD, Adam, etc.).
Custom : object, a custom optimizer class instance.
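For comparison, the Python side of ONNX Runtime Training exposes the same three choices (initializer status, loss, optimizer) through onnxruntime.training.artifacts.generate_artifacts. A minimal sketch, independent of the LabVIEW node, in which the parameter names are placeholders:

    import onnx
    from onnxruntime.training import artifacts

    model = onnx.load("model.onnx")
    artifacts.generate_artifacts(
        model,
        requires_grad=["fc1.weight", "fc1.bias"],  # "Training" initializers (example names)
        frozen_params=["fc2.weight"],              # "Frozen" initializers
        loss=artifacts.LossType.CrossEntropyLoss,  # standard loss; custom losses are also possible
        optimizer=artifacts.OptimType.AdamW,       # standard optimizer
        artifact_directory="training_artifacts",
    )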

Output parameters

 

Academic Training out : object, the academic training session.

Example

All these examples are provided as PNG snippets: drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).