
Get platform

Description

Gets the execution platform of the model and how it is configured. If you have a CUDA-compatible graphics card, you can run the computation on your GPU.

Input parameters

 

Model in : model architecture.

Output parameters

 

Model out : model architecture.

Memory Exec : cluster

Device : enum, device on which you want to run the program.

GPU Parameters : cluster

Mode : enum, mode of operation for internal platform memory management. This concerns only the GPU device.

        • FreePtr : Memory is created dynamically and freed after internal use in each individual forward/backward layer pass.
        • AvailablePtr : Memory is created without a dynamic free process, so that it can be reused.

Exec Architecture : enum, program execution architecture.

        • Parallel : Allows the same model to be executed in parallel (for example, one training branch and a parallel validation branch; see the Unet example).
          WARNING : In this mode, if any layer of your model is set to training, you must call the Loss+Backward function after every Forward.
        • Sequential : Performance is optimized in this mode, but you cannot execute the model in parallel.
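The Parallel-mode warning above can be pictured with a small Python sketch (the class and method names are hypothetical, not the HAIBAL API): a training Forward saves activations that the backward pass needs, and a second Forward would overwrite them, so Loss+Backward must be called after every training Forward.

```python
# Illustrative sketch (not the HAIBAL API) of the Parallel-mode rule:
# a training Forward saves activations that the next Forward would
# overwrite, so Loss+Backward must run before any other Forward.

class ParallelModeModel:
    def __init__(self):
        self.saved = None            # activations kept for the backward pass

    def forward(self, x, training=False):
        if training:
            if self.saved is not None:
                raise RuntimeError("pending activations: call Loss+Backward "
                                   "after every training Forward")
            self.saved = x           # saved for Loss+Backward
        return 2 * x                 # stand-in layer computation

    def loss_backward(self, y, target):
        grad = (y - target) * self.saved   # consumes the saved activations
        self.saved = None                  # slot is free for the next Forward
        return grad
```

A validation branch that calls forward(x) with training=False saves nothing, which is why it can safely run in parallel with the training branch.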

Both of these memory management modes also require you to use this function at the end of the whole process to free all GPU memory allocated by the platform.
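The difference between the two memory modes, and the final free step, can be sketched in Python (illustrative only; the class and method names are assumptions, not the HAIBAL API):

```python
# Illustrative sketch of the two GPU memory modes (not the HAIBAL API).

class FreePtrAllocator:
    """FreePtr: allocate and free inside each forward/backward layer pass."""
    def layer_pass(self, size):
        buf = bytearray(size)   # stands in for a fresh GPU allocation
        # ... layer computation using buf ...
        del buf                 # freed immediately after internal use


class AvailablePtrAllocator:
    """AvailablePtr: allocations are kept so later passes can reuse them."""
    def __init__(self):
        self.pool = {}          # size -> reusable buffer

    def layer_pass(self, size):
        # Reuse a buffer of the same size if one exists, else allocate once.
        buf = self.pool.setdefault(size, bytearray(size))
        # ... layer computation using buf ...
        # no free here: the buffer stays available for the next pass

    def free_all(self):
        # Called once at the end of the whole process to release everything.
        self.pool.clear()
```

In both modes, whatever the platform still holds is released once at the very end, which is what the final free step above refers to.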

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the HAIBAL library to run it).

Using the “Get Platform” function

1 – Define Graph

We define the graph with one input and two Dense layers named Dense1 and Dense2.

2 – Get Function

We use the “Get Platform” function to read the execution platform of the model and how it is configured.
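The cluster read by “Get Platform” (see Output parameters above) can be pictured as plain records. This is only an illustrative Python sketch of that structure; the enum members and their values are assumptions, not HAIBAL data types.

```python
# Illustrative sketch (not a HAIBAL type) of the "Memory Exec" output
# cluster. Enum members and their numeric values are assumed for the example.
from dataclasses import dataclass
from enum import Enum

class Device(Enum):
    CPU = 0
    GPU = 1           # CUDA-compatible graphics card

class Mode(Enum):
    FreePtr = 0       # dynamic create/free per layer pass
    AvailablePtr = 1  # allocations kept for reuse

class ExecArchitecture(Enum):
    Parallel = 0      # same model executable in parallel branches
    Sequential = 1    # optimized, but no parallel execution

@dataclass
class GPUParameters:
    mode: Mode
    exec_architecture: ExecArchitecture

@dataclass
class MemoryExec:
    device: Device
    gpu_parameters: GPUParameters

# Example: a platform running on the GPU with reusable memory, sequentially.
platform = MemoryExec(Device.GPU,
                      GPUParameters(Mode.AvailablePtr,
                                    ExecArchitecture.Sequential))
```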
