Get Platform


Returns the model's execution platform and configures how it runs. If you have a CUDA-compatible graphics card, you can perform the computation on your GPU.

Input parameters


Model in : model architecture.

Output parameters


Model out : model architecture.

Memory Exec : cluster

Device : enum, device on which you want to run the program.

GPU Parameters : cluster

Mode : enum, mode of operation for internal platform memory management. This applies only to GPU devices.

        • FreePtr : memory is dynamically created and freed after each individual forward/backward layer pass.
        • AvailablePtr : memory is created without being dynamically freed, so it can be reused.
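The difference between the two modes can be illustrated with a minimal sketch. This is not the HAIBAL API, just a conceptual Python model: FreePtr allocates and frees a buffer for every layer pass, while AvailablePtr returns released buffers to a pool so later passes reuse them.

```python
# Conceptual sketch (not the HAIBAL API): contrasts the two GPU memory modes.
# FreePtr: every layer allocates a buffer, uses it, and frees it immediately.
# AvailablePtr: released buffers go back into a pool for later layers to reuse.

class BufferPool:
    def __init__(self, reuse):
        self.reuse = reuse          # True = AvailablePtr-style, False = FreePtr-style
        self.free_buffers = {}      # size -> list of reusable buffers
        self.allocations = 0        # counts real (fresh) allocations

    def acquire(self, size):
        if self.reuse and self.free_buffers.get(size):
            return self.free_buffers[size].pop()   # reuse an existing buffer
        self.allocations += 1
        return bytearray(size)                     # stand-in for a fresh GPU allocation

    def release(self, buf):
        if self.reuse:
            self.free_buffers.setdefault(len(buf), []).append(buf)
        # FreePtr mode: the buffer is simply dropped (freed) here

def run_forward_passes(pool, passes=3, layer_sizes=(256, 512)):
    for _ in range(passes):
        for size in layer_sizes:
            buf = pool.acquire(size)
            pool.release(buf)       # layer done, hand the buffer back

free_ptr = BufferPool(reuse=False)
run_forward_passes(free_ptr)
print(free_ptr.allocations)         # 6: one fresh allocation per layer per pass

available_ptr = BufferPool(reuse=True)
run_forward_passes(available_ptr)
print(available_ptr.allocations)    # 2: buffers are reused across passes
```

AvailablePtr trades a larger steady memory footprint for fewer allocation calls, which is why it exists as a reusability option.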

Exec Architecture : enum, program execution architecture.

        • Parallel : allows the same model to be executed in parallel (for example, one training branch and a parallel validation branch; see the Unet example).
          WARNING : in this mode, if any layer of your model is set to training, you must call the Loss+Backward function after every Forward.
        • Sequential : performance is optimized in this mode, but the model cannot be executed in "Parallel".
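The warning about Parallel mode can be made concrete with a hedged sketch (again, not the HAIBAL API): a model that caches only one set of activations per Forward. Calling Forward twice before Backward would overwrite the first cache, which is why each Forward must be followed by its Loss+Backward.

```python
# Conceptual sketch (not the HAIBAL API): why Parallel mode needs Loss+Backward
# right after each Forward. This toy model keeps only ONE set of cached
# activations, so a second Forward before Backward would overwrite the first.

class TinyModel:
    def __init__(self):
        self.cached_input = None    # activations saved by forward() for backward()

    def forward(self, x):
        self.cached_input = x       # overwrites any previous cache
        return x * 2.0              # toy layer: multiply by 2

    def backward(self, grad):
        if self.cached_input is None:
            raise RuntimeError("no cached activations: call forward() first")
        g = grad * 2.0              # gradient of the toy layer
        self.cached_input = None    # the cache is consumed by backward()
        return g

model = TinyModel()

# Correct "Parallel"-style usage: Forward immediately followed by Loss+Backward.
out = model.forward(3.0)            # out = 6.0
loss_grad = out - 1.0               # stand-in for a loss gradient: 5.0
grad = model.backward(loss_grad)
print(grad)                         # 10.0
```

Sequential mode can skip this bookkeeping constraint, which is where its performance advantage comes from.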

Both memory-management modes also require calling this function at the end of the whole process to free all GPU memory allocated by the platform.
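A common pattern for this end-of-process cleanup, sketched here conceptually in Python (the names `gpu_alloc` and `free_all` are illustrative, not HAIBAL functions), is to wrap the work in a try/finally block so the release step runs even if the process fails partway through.

```python
# Conceptual sketch: whichever memory mode is chosen, release every platform
# allocation once the whole process is finished. A try/finally block ensures
# cleanup runs even if training fails partway through.

allocated = []

def gpu_alloc(size):
    buf = bytearray(size)           # stand-in for a GPU allocation
    allocated.append(buf)
    return buf

def free_all():
    count = len(allocated)
    allocated.clear()               # stand-in for freeing all GPU memory
    return count

try:
    a = gpu_alloc(256)
    b = gpu_alloc(512)
    # ... forward/backward passes would happen here ...
finally:
    freed = free_all()

print(freed)                        # 2 buffers released
```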


All the examples below are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the HAIBAL library to run it).

Using the β€œGet Platform” function

1 – Define Graph

We define the graph with one input and two Dense layers named Dense1 and Dense2.
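As a text-based companion to the LabVIEW snippet, here is a conceptual Python sketch of the same graph: one input feeding two Dense (fully connected) layers named Dense1 and Dense2. The layer sizes are illustrative assumptions, not taken from the snippet.

```python
# Conceptual Python sketch of the graph built in LabVIEW: one input feeding
# two Dense layers named Dense1 and Dense2. Shapes are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def dense(name, in_features, out_features):
    # A Dense layer is a weight matrix plus a bias vector.
    return {
        "name": name,
        "W": rng.standard_normal((in_features, out_features)),
        "b": np.zeros(out_features),
    }

def forward(layer, x):
    return x @ layer["W"] + layer["b"]

dense1 = dense("Dense1", 4, 8)
dense2 = dense("Dense2", 8, 2)

x = rng.standard_normal((1, 4))     # one input sample
y = forward(dense2, forward(dense1, x))
print(y.shape)                      # (1, 2)
```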

2 – Get Function

We use the "Get Platform" function to retrieve the model's execution platform and how it runs.
