LLM Response
The LLM Response node enables AI-generated responses based on context and instructions.
Unlike fixed Text responses, LLM responses adapt to the conversation while following your defined parameters.
A single LLM Response node can be used to create the Minimum Viable Flow. Simply connect the Start node to an LLM Response node with general system instructions. In that case, there is no need to connect the LLM Response node any further – each user input triggers the Start node and the single LLM Response node with the full existing conversation context, and the conversation keeps going naturally.
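For example, such a minimal flow might use general system instructions along these lines (illustrative wording only – the domain and tone are placeholders, not product defaults):
"You are a friendly assistant for an online bookstore. Answer questions about orders, shipping, and book recommendations in a concise, helpful tone."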
To configure the LLM Response node:
Select the LLM model to use
Provide system instructions (what the response should achieve)
Add any additional context beyond the conversation history
Example of filling in the Additional Context:
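For instance (illustrative wording, not a built-in template):
"Keep every response under 40 words. Use a friendly, conversational tone."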
Additional Context like this can guide the LLM to keep responses under a certain word limit – responses become briefer and more dynamic, and are easier for users to follow.
For more complex flows:
If you plan to add follow-up nodes after LLM Response, consider using a Checkpoint (with the "Wait for user input" option enabled) to take the next user input from there. Otherwise, LLM Response does not wait for user input, and any subsequent nodes will be executed immediately.
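As a sketch, such a flow might look like this (the follow-up node is a placeholder):
Start → LLM Response → Checkpoint ("Wait for user input" enabled) → follow-up node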
Common use cases:
Natural conversations
Dynamic responses to queries
Contextual explanations
Personalized interactions
Complex information delivery
Follow-up discussions
Important considerations:
Clear instructions guide better responses
Additional context makes responses more predictable
Conversation history is included automatically
Different models may respond differently
Test responses with various inputs
Monitor response quality and appropriateness