Generate Full Text

Description

Runs text generation with the Llama model from a preprocessed sequence of tokens. The model processes the full sequence and returns the generated response as a single string (no token-by-token streaming).

Input parameters

ONNX in : object, the Llama generator session.

Parameters : cluster, generation settings:

use_position_ids : boolean, enables the use of explicit position IDs for the input tokens.
temperature : float, controls randomness in the generation process.
repetition_penalty : float, penalizes repeated tokens to reduce looping or redundant text.
max_length : integer, maximum number of tokens in the generated output sequence.
ngram_size : integer, size of the n-grams tracked to prevent repetition.

llama_decoder_session : integer, reference to an active ONNX inference session of the Llama decoder model.
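To give an idea of what the sampling controls in the Parameters cluster do, here is a minimal Python sketch of how temperature, repetition_penalty, and ngram_size are commonly applied to the decoder's raw logits at each generation step. This is an illustration only, not the node's actual implementation (the real computation runs inside the ONNX decoder session); the function name and values below are hypothetical.

```python
import numpy as np

def adjust_logits(logits, generated, temperature=0.8,
                  repetition_penalty=1.2, ngram_size=3):
    """Turn raw decoder logits into sampling probabilities,
    applying repetition_penalty, ngram_size and temperature.
    (Illustrative sketch, not the node's actual code.)"""
    logits = logits.astype(np.float64).copy()

    # repetition_penalty: dampen every token already generated
    for tok in set(generated):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty

    # ngram_size: forbid any token that would complete an n-gram
    # already present in the generated sequence
    if ngram_size >= 2 and len(generated) >= ngram_size - 1:
        prefix = tuple(generated[-(ngram_size - 1):])
        for i in range(len(generated) - ngram_size + 1):
            if tuple(generated[i:i + ngram_size - 1]) == prefix:
                logits[generated[i + ngram_size - 1]] = -np.inf

    # temperature: < 1 sharpens the distribution, > 1 flattens it
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    return probs / probs.sum()
```

A generation loop would call a step like this once per token until max_length tokens have been produced (or an end-of-sequence token appears), then decode the accumulated token IDs into the single output_string returned by the node.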

Output parameters

ONNX out : object, the Llama generator session.
output_string : string, the complete text generated by the model in a single output.

Example

All these examples are provided as PNG snippets: drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).