The HAIBAL deep learning library for LabVIEW is still under development. Last week our team worked on integrating the CUDA library and on the possibility of supporting Intel's oneAPI architecture for Intel CPUs and GPUs.
CUDA integration is still in progress; this week we are working on the RNN and attention layers.
After some successful integration tests, we can confirm that HAIBAL's first release will also integrate Intel's oneDNN API.
This will allow us to optimize inference execution on Intel hardware.
HAIBAL IN A FEW FIGURES
- 16 activation functions (ELU, Exponential, GELU, HardSigmoid, LeakyReLU, Linear, PRELU, ReLU, SELU, Sigmoid, SoftMax, SoftPlus, SoftSign, Swish, TanH, ThresholdedReLU)
- 84 functional layers (Dense, Conv, MaxPool, RNN, Dropout…)
- 14 loss functions (BinaryCrossentropy, BinaryCrossentropyWithLogits, Crossentropy, CrossentropyWithLogits, Hinge, Huber, KLDivergence, LogCosH, MeanAbsoluteError, MeanAbsolutePercentage, MeanSquare, MeanSquareLog, Poisson, SquaredHinge)
- 15 initialization functions (Constant, GlorotNormal, GlorotUniform, HeNormal, HeUniform, Identity, LeCunNormal, LeCunUniform, Ones, Orthogonal, RandomNormal, RandomUniform, TruncatedNormal, VarianceScaling, Zeros)
- 7 optimizers (Adagrad, Adam, Inertia, Nadam, Nesterov, RMSProp, SGD)
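To give a feel for what these building blocks compute, here is a short sketch of the standard mathematical definitions behind a few of the activation and loss functions listed above. This is an illustrative NumPy example, not HAIBAL's LabVIEW API; the function names here are ours, chosen for readability.

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x), element-wise
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    # Sigmoid: 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    # Swish: x * sigmoid(x)
    return x * sigmoid(x)

def mean_square_error(y_true, y_pred):
    # MeanSquare loss: average of squared differences
    return np.mean((y_true - y_pred) ** 2)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))                    # [0. 0. 2.]
print(sigmoid(np.array([0.0])))   # [0.5]
```

The same definitions apply regardless of the execution backend (CPU, CUDA, or oneDNN); only the implementation underneath changes.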
A YouTube training channel, complete documentation on GitHub, and this website are in progress.
WORK IN PROGRESS & COMING SOON
July will be a month of hard work for us, and we are doing our best to finish the CUDA integration.
A little more patience…