# Run prompt

Large language models (LLMs) are powerful tools that can be used in many different ways. With the **Run prompt** node, you can run a simple LLM prompt right in your Flow.

The [LLM Flows skill](https://manuals.dialox.ai/store/skills/llm-flows) adds this Flow node to your Flow designer.

## How to use

The **Run prompt** node can be used anywhere in your Flow. It sends a prompt to the LLM and receives a response. That response is stored in a variable, which can in turn be used in subsequent parts of your Flow.

<figure><img src="https://3356808761-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEBQHGJABNTj5ISSOTRxM%2Fuploads%2Flvi6z3QkQMBvdbvzvMCQ%2Fimage.png?alt=media&#x26;token=1c4e08f3-929d-414e-99e7-bda9f70f9149" alt=""><figcaption></figcaption></figure>

In the above Flow, the user is asked to name a colour. The prompt sent to the LLM asks it to create a rhyme about that colour. The response containing the rhyme is stored in the `llm_colour_response` variable, which is used in the next **Say** node to repeat it to the user:

<figure><img src="https://3356808761-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FEBQHGJABNTj5ISSOTRxM%2Fuploads%2FqYRYMtVE1DbtmzdR1qsg%2Fimage.png?alt=media&#x26;token=728d0fe1-6fe1-443f-9c55-821038b4c204" alt=""><figcaption></figcaption></figure>

## Node properties

| Property           | Description                                                                             | Explanation                                                                                                                                     |
| ------------------ | --------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Prompt             | The prompt text that is sent to the LLM                                                  | Can contain [variables](https://manuals.dialox.ai/studio/flows/variables)                                                                        |
| Assign to          | The variable in which the LLM's response is stored                                       |                                                                                                                                                  |
| Include transcript | When enabled, the request to the LLM includes a transcript of the conversation so far    | This can be helpful when asking the LLM something about the conversation itself, e.g. to create a summary or to extract specific information     |
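Conceptually, the node combines these three properties into a single LLM request. The following Python sketch illustrates that behaviour; `run_prompt`, `fake_llm`, and all parameter names are illustrative assumptions, not part of the Studio API:

```python
# Hypothetical sketch of what a "Run prompt" node does. All names here are
# illustrative; the actual platform handles this internally.
def run_prompt(prompt_template, variables, llm, transcript=None):
    """Interpolate Flow variables into the prompt, send it to the LLM,
    and return the response (the value assigned to the target variable)."""
    # "Prompt" property: the template may contain variables
    prompt = prompt_template.format(**variables)
    # "Include transcript" property: optionally prepend the conversation so far
    messages = list(transcript) if transcript else []
    messages.append({"role": "user", "content": prompt})
    # "Assign to" property: the caller stores the returned value in a variable
    return llm(messages)

# Stub LLM so the sketch runs without a real model
def fake_llm(messages):
    return f"A rhyme about {messages[-1]['content'].split()[-1]}"

llm_colour_response = run_prompt(
    "Create a short rhyme about the colour {colour}",
    {"colour": "blue"},
    fake_llm,
)
```

In the colour example above, this corresponds to the prompt containing the user's `colour` variable and the result being assigned to `llm_colour_response` for use in the next **Say** node.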

## Resources

* [LLM Flows](https://manuals.dialox.ai/store/skills/llm-flows)
* [Variables](https://manuals.dialox.ai/studio/flows/variables)
