
Create Llama Generator

Description

Creates a streaming LLaMA generator session.


Input parameters


ONNX in : object, LLaMA generator session.

Parameters : cluster, generation settings composed of the following elements (illustrated in the sketch after this list):

use_position_ids : boolean, enables the use of explicit position IDs for the input tokens.
temperature : float, controls randomness in the generation process.
repetition_penalty : float, penalizes repeated tokens to reduce looping or redundant text.
max_length : integer, maximum number of tokens in the generated output sequence.
ngram_size : integer, size of n-grams tracked to prevent repetition.

llama_decoder_session : integer, reference to an active ONNX inference session of the LLaMA decoder model.
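
The LabVIEW snippet itself is graphical, but the following minimal Python sketch illustrates how these generation parameters typically act inside a decoder sampling loop. It is not the library's API: the logits are random placeholders standing in for the output of the ONNX decoder referenced by llama_decoder_session, all names are illustrative, and use_position_ids is omitted since it only affects how inputs are fed to the decoder.

import numpy as np

def sample_next_token(logits, generated, temperature=0.8,
                      repetition_penalty=1.1, ngram_size=3):
    # Apply the generation parameters to one decoding step.
    logits = logits.astype(np.float64).copy()

    # repetition_penalty: damp the logits of tokens already generated,
    # which reduces looping or redundant text.
    for tok in set(generated):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty

    # ngram_size: forbid any token that would complete an n-gram already
    # present in the generated sequence.
    if ngram_size > 1 and len(generated) >= ngram_size - 1:
        prefix = tuple(generated[-(ngram_size - 1):])
        for i in range(len(generated) - ngram_size + 1):
            if tuple(generated[i:i + ngram_size - 1]) == prefix:
                logits[generated[i + ngram_size - 1]] = -np.inf

    # temperature: < 1.0 sharpens the distribution, > 1.0 flattens it.
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy loop: max_length bounds the length of the generated sequence.
vocab_size, max_length = 100, 16
generated = [1]                               # placeholder BOS token id
while len(generated) < max_length:
    logits = np.random.randn(vocab_size)      # stands in for decoder output
    generated.append(sample_next_token(logits, generated))
print(generated)

In the actual block, these parameters are wired as a cluster and the logits come from the ONNX inference session of the LLaMA decoder at each step of the streaming loop.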

Output parameters


ONNX out : object, LLaMA generator session.

Example

All these examples are provided as PNG snippets: you can drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).