Generate Streaming Text

Description

Runs text generation with the Llama model from a preprocessed sequence of tokens. The model generates output incrementally, returning tokens one by one as they are produced (streaming mode).

 

Input parameters

 

ONNX in : object, the Llama generator session.

Output parameters

 

ONNX out : object, the Llama generator session, passed through for the next call.
output_string : string, the latest generated token, returned as a string.
is_done : boolean, indicates whether the text generation process has finished.
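The calling pattern implied by these outputs — call the node repeatedly, collect each `output_string`, and stop when `is_done` is true — can be sketched in Python. This is a minimal illustration of the contract only: `DummyLlamaSession`, `generate_next`, and `stream_text` are hypothetical names, not the library's actual API.

```python
class DummyLlamaSession:
    """Hypothetical stand-in for the ONNX generator session; it
    'generates' a fixed token sequence instead of running a model."""

    def __init__(self, tokens):
        self._tokens = list(tokens)
        self._pos = 0

    def generate_next(self):
        """Return (output_string, is_done), mirroring the node's outputs."""
        token = self._tokens[self._pos]
        self._pos += 1
        is_done = self._pos >= len(self._tokens)
        return token, is_done


def stream_text(session):
    """Loop over the session until is_done, yielding one token per call."""
    is_done = False
    while not is_done:
        output_string, is_done = session.generate_next()
        yield output_string


session = DummyLlamaSession(["Hello", ",", " world", "!"])
pieces = list(stream_text(session))
print("".join(pieces))  # tokens concatenated into the full generated text
```

The same shape applies on the block diagram: wire the node inside a while loop, wire `is_done` to the loop's stop terminal, and concatenate each `output_string` as it arrives.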

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code is added to your VI (do not forget to install the Deep Learning library to run it).