
Set GPU platform

Description

Sets the model's execution platform to CUDA/cuDNN. The GPU Parameters input defines how internal memory is handled.

Before using this function, make sure CUDA is installed via the GIM to avoid errors (see the CUDA installation guide).

Input parameters


Model in : model architecture.

GPU Parameters : cluster

 Mode : enum, mode of operation for internal platform memory management. This applies only to the GPU device.

      • FreePtr : Dynamically allocates memory and frees it after internal use in each individual forward/backward layer pass.
      • AvailablePtr : Allocates memory without freeing it dynamically, so that it can be reused.
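
The difference between the two modes can be sketched as follows. This is a conceptual illustration only, not the HAIBAL API: the class names, counters, and `layer_pass`/`free_all` methods are hypothetical, and real allocations would happen on the GPU.

```python
# Conceptual sketch (NOT the HAIBAL API) of the two memory-management modes.
# All names here are illustrative; real buffers would live on the GPU.

class FreePtrAllocator:
    """FreePtr mode: allocate a buffer for each layer pass, free it right after."""
    def __init__(self):
        self.live = 0          # buffers currently allocated
        self.alloc_calls = 0   # total allocations performed

    def layer_pass(self, size):
        self.alloc_calls += 1
        self.live += 1         # dynamic allocation for this pass
        # ... forward/backward work for one layer would happen here ...
        self.live -= 1         # freed immediately after internal use


class AvailablePtrAllocator:
    """AvailablePtr mode: keep buffers alive across passes so they can be reused."""
    def __init__(self):
        self.pool = {}         # size -> cached buffer (kept, not freed)
        self.alloc_calls = 0

    def layer_pass(self, size):
        if not self.pool.get(size):
            self.alloc_calls += 1
            self.pool[size] = 1   # allocate once, keep for reuse
        # ... forward/backward work reuses the cached buffer ...

    def free_all(self):
        self.pool.clear()         # explicit release at the end of the process
```

Running three passes of the same size, a FreePtr-style allocator performs three allocations while an AvailablePtr-style allocator performs only one and reuses it, which is the trade-off the Mode enum selects between.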

 Exec Architecture : enum, program execution architecture.

      • Parallel : Allows the same model to be executed in parallel (for example, one training branch and a parallel validation branch; see the Unet example).
        WARNING : In this mode, if any layer in your model is set to training, you must call the Loss+Backward function after every Forward.
      • Sequential : Offers optimized performance, but the model cannot be executed in "Parallel".
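
The Parallel-mode warning above amounts to a simple ordering rule. The sketch below models it as a toy guard; this is not the HAIBAL API, and `ParallelModeGuard`, `forward`, and `loss_backward` are hypothetical names standing in for the library's Forward and Loss+Backward calls.

```python
# Conceptual sketch (NOT the HAIBAL API) of the Parallel-mode constraint:
# when a training layer is present, every Forward must be followed by
# Loss+Backward before the next Forward is issued.

class ParallelModeGuard:
    def __init__(self):
        self.pending_backward = False  # True after a Forward, until Loss+Backward

    def forward(self):
        if self.pending_backward:
            raise RuntimeError(
                "Parallel mode: call Loss+Backward after every Forward"
            )
        self.pending_backward = True

    def loss_backward(self):
        self.pending_backward = False  # constraint satisfied; next Forward allowed
```

A Forward / Loss+Backward / Forward sequence is valid, while two consecutive Forward calls on a training model violate the rule.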

Both memory-management modes also require calling this function at the end of the whole process to free all memory allocated on the GPU platform.

Output parameters


Model out : model architecture.

Example

All these examples are PNG snippets: drop a snippet onto your block diagram to add the depicted code to your VI (remember to install the HAIBAL library first).
