Prompts
Manage the instructions to the LLM
The Google and OpenAI large language models (LLMs) power most of the AI-specific features of the DialoX platform, such as intent and topic recognition. For these features, the instructions sent to the AI providers are managed by the DialoX platform and cannot be changed.
There are also AI capabilities that do support custom instructions to the LLM, such as the Iterate: LLM flow node. This node behaves just like the Iterate flow node, but with one major benefit: when none of the labelled triggers match, it uses the LLM to generate an answer.
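As an illustration, the node's fallback behaviour can be sketched as follows. This is a simplified, hypothetical Python sketch of the concept, not the platform's actual implementation; the `Trigger` class and the `llm_generate` helper are made up for the example.

```python
from dataclasses import dataclass


@dataclass
class Trigger:
    label: str
    pattern: str
    response: str

    def matches(self, text: str) -> bool:
        # Naive keyword matching, standing in for the platform's
        # real trigger/intent recognition.
        return self.pattern.lower() in text.lower()


def llm_generate(instructions: str, content: str, question: str) -> str:
    # Placeholder for an actual LLM call (e.g. to OpenAI or Gemini).
    return f"[LLM-generated answer to {question!r}]"


def iterate_llm(user_input: str, triggers: list[Trigger],
                instructions: str, content: str) -> str:
    # First try the labelled triggers, like the regular Iterate node.
    for trigger in triggers:
        if trigger.matches(user_input):
            return trigger.response
    # No trigger matched: fall back to the LLM, giving it the prompt's
    # instructions and content to generate an answer from.
    return llm_generate(instructions, content, user_input)
```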
A large language model (LLM) is a type of artificial intelligence (AI) model designed to understand and generate human-like text based on the input it receives. These models are trained on massive datasets of text from books, articles, websites, and other written content, enabling them to process and generate coherent language.
The DialoX platform mainly uses the ChatGPT models from OpenAI, hosted on the Microsoft Azure platform, and the Gemini models from Google, hosted within Google Cloud.
The Prompts page is not always available in your Smart App: it only appears for specific features that come with the Skills installed in your Smart App. Currently the Prompts page is enabled by the LLM Flows skill; see the corresponding page for how to install it in your Smart App.
Prompts can be managed in the AI section of the DialoX platform. Each prompt contains a field for the instructions and a field for the content; both are provided to the LLM when it generates responses to user questions.
Label
The name of your prompt.
Instructions
Specific instructions for the LLM on how to respond: the context of the conversation, the do's and don'ts, and the output format. Max 1,000 characters.
Content
Additional content for the LLM to use when providing answers. Max 9,000 characters.
The platform constrains the prompt size to a maximum number of characters to keep LLM response times as low as possible for both the telephony and chat use cases. The larger the prompt, the longer it takes to generate a response.
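As a rough illustration of these constraints, a prompt and its limits could be modelled as in the sketch below. The field names mirror the table above and the limits are the documented maximums; the `Prompt` class itself is hypothetical, not part of any DialoX API.

```python
from dataclasses import dataclass

MAX_INSTRUCTIONS = 1_000  # documented limit for the Instructions field
MAX_CONTENT = 9_000       # documented limit for the Content field


@dataclass
class Prompt:
    label: str         # the name of the prompt
    instructions: str  # how the LLM should respond, do's/don'ts, output format
    content: str       # additional content the LLM may draw answers from

    def validate(self) -> None:
        # Keeping prompts within these limits helps keep LLM response
        # times low enough for telephony as well as chat.
        if len(self.instructions) > MAX_INSTRUCTIONS:
            raise ValueError(f"Instructions exceed {MAX_INSTRUCTIONS} characters")
        if len(self.content) > MAX_CONTENT:
            raise ValueError(f"Content exceeds {MAX_CONTENT} characters")
```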