# LLM Response

Unlike fixed Text responses, LLM responses adapt to the conversation while following your defined parameters.

#### Setting up LLM responses:

A single LLM Response node is enough to create a Minimum Viable Flow. Simply connect the Start node to an LLM Response node with general system instructions. In that case, there is no need to connect the LLM Response to any further node – each user input triggers the Start node and the single LLM Response node with the full existing conversation context, and the conversation keeps going naturally.

<figure><img src="https://1213579860-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MaU7JJyoXT5PfhTD9dJ%2Fuploads%2FCQAiiMzOyw3PcXsHP6Rf%2Fhub-assistant-virbe.virbe.app_dashboard_conversation-flows_ca98acce-9cff-4d65-ad55-7ca28f98965e_preview%3Dfalse%20(1).png?alt=media&#x26;token=ded294a9-63ac-41df-89a3-d9349de8727a" alt=""><figcaption><p>The simplest flow with LLM Response</p></figcaption></figure>

1. Select the LLM model to use
2. Provide system instruction (what the response should achieve)
3. Add any additional context beyond the conversation history

Example for filling in the Context:

Additional Context can help guide the LLM to keep responses under a certain word limit – this makes responses briefer and more dynamic, and easier for users to follow.

<figure><img src="https://1213579860-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MaU7JJyoXT5PfhTD9dJ%2Fuploads%2Ft6YRVitnvWKlwK9PK84e%2Fimage.png?alt=media&#x26;token=2c0d0612-6a73-48e8-8b73-858328148933" alt="" width="375"><figcaption></figcaption></figure>

For more complex flows:

{% hint style="info" %}
If you plan to add follow-up nodes after an LLM Response, consider using a Checkpoint (with the "Wait for user input" option enabled) to take the next user input from there. Otherwise, the LLM Response does not wait for user input, and any subsequent nodes execute immediately.
{% endhint %}

<figure><img src="https://1213579860-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MaU7JJyoXT5PfhTD9dJ%2Fuploads%2FuAumzMwjd5gz2O7gSCco%2Fhub-assistant-virbe.virbe.app_dashboard_conversation-flows_ca98acce-9cff-4d65-ad55-7ca28f98965e%20(3).png?alt=media&#x26;token=6d54d3ef-aa46-4024-b5c7-60861e9d9757" alt=""><figcaption><p>Example of a complex flow with LLM Responses</p></figcaption></figure>

#### Common use cases:

* Natural conversations
* Dynamic responses to queries
* Contextual explanations
* Personalized interactions
* Complex information delivery
* Follow-up discussions

{% hint style="info" %}
**Important considerations:**

* Clear instructions guide better responses
* Additional context helps predictability
* Conversation history is included automatically
* Different models may respond differently
* Test responses with various inputs
* Monitor response quality and appropriateness
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.virbe.ai/dashboard-management/conversation-flows/nodes/response-nodes/llm-response.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
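As a minimal sketch, the request URL can be built and issued from Python. The endpoint URL is taken from this page; the question text and the use of the `requests` library are illustrative assumptions:

```python
from urllib.parse import urlencode

# URL of the current documentation page, as given above.
BASE_URL = (
    "https://docs.virbe.ai/dashboard-management/conversation-flows/"
    "nodes/response-nodes/llm-response.md"
)

def build_ask_url(question: str) -> str:
    """Return the GET URL that queries the docs with a natural-language question.

    urlencode percent-encodes the question so it is safe to place in the
    `ask` query parameter.
    """
    return f"{BASE_URL}?{urlencode({'ask': question})}"

# Hypothetical question -- any specific, self-contained natural-language
# question works here.
url = build_ask_url("How do I limit the length of an LLM Response?")
print(url)
```

The resulting URL can then be fetched with any HTTP client, for example `requests.get(url)` or `curl "$URL"`.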

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
