(Obsolete) Custom Chatbot Engine V1

Setup

To enable a custom engine, you have to adapt to the Virbe request/response schema described below. You might want to use one of our boilerplate projects as a reference.

Schema

RoomMessageRequest

  • id - this is a unique message identifier - you can use it as the replyTo field when posting asynchronously to the room

  • endUserId - this is a unique identifier of the user that posted to a particular room conversation. It persists across multiple conversations as long as the user allows it, e.g. clearing the browser cache will reset this value when using the web widget. You might want to implement your own caching mechanism when using our SDKs.

  • room - this object contains all the information of the created Room conversation.

  • location - this object contains all the information about the touchpoint that the end user is interacting from:

    • name - contains the name of the location you've created in the Virbe Dashboard for your own instance, or one of the prebuilt touchpoint names: Dashboard, VirbeSpace, Mobile, LivePreview

    • channel - defines the type of location created in the Virbe Dashboard:

      • Predefined Channels: LivePreview, Dashboard, Mobile, VirbeSpace

      • User Registered Channels: Web, Kiosk, Widget, Unity, Unreal

  • action - this is the object you need to inspect to implement your own handling logic:

    • text - this is the main object you want to analyze and react to in your engine

      • text - this contains the text recognized by Speech Recognition or the text written by the end user to your Virbe

      • language - this contains the language recognized by Speech Recognition or the language set in the touchpoint configuration

    • endUserStore - this object is passed if the end user submits the input form

      • key - key to store your value under

      • value - value to store

    • namedAction - this field is used for signals & triggers, e.g. widget open, app launch, face detection in the Metahuman Kiosk, QR code scan, etc. You might want to implement your own logic for reacting to these kinds of events:

      • name - name of the trigger: widget_open, app_launch, face_detected, being_defocus, being_focus

      • value - additional string value, e.g. describe your payload type here to parse it properly

      • valueJson - additional JSON object passed through your implementation, e.g. IoT sensor values

    • roomStore - you usually don't need to react to this unless you want to store a specific key/value pair when the SDK is starting, e.g. when you want to write and store a specific value during new room conversation creation:

      • key - key to store your value under

      • value - value to store
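
As a minimal sketch of how an engine might dispatch on the incoming request - the field names come from the schema above, but the handler logic and reply texts are purely illustrative placeholders:

```python
def handle_room_message(request):
    """Dispatch a parsed RoomMessageRequest dict to the right branch.

    Returns a RoomMessageResponse-shaped dict, or None when no reply
    is needed. Only the field names come from the schema above; the
    reply texts are hypothetical.
    """
    action = request.get("action") or {}

    if "text" in action:
        # Text recognized by Speech Recognition or typed by the end user
        text = action["text"]["text"]
        language = action["text"].get("language", "en-US")
        return {"action": {"text": {"text": f"You said: {text}", "language": language}}}

    if "namedAction" in action:
        # Signals & triggers, e.g. widget_open, app_launch, face_detected
        if action["namedAction"]["name"] == "widget_open":
            return {"action": {"text": {"text": "Hi! How can I help?", "language": "en-US"}}}
        return None  # ignore triggers you don't care about

    if "endUserStore" in action:
        # The end user submitted an input form
        key = action["endUserStore"]["key"]
        return {"action": {"text": {"text": f"Thanks, I saved your {key}.", "language": "en-US"}}}

    return None  # e.g. roomStore writes usually need no reply
```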

RoomMessageResponse

If you want your Virbe to speak out loud, you need to respond with a text action following the schema below.

  • Responding with Text - Make sure to respond with the following schema

{
  "action": {
    "text": {
      "text": "This is the text Virtual Being will speak back",
      "language": "en-US" // The voice configuration for Speech Generation
    }
  }
}
  • Responding with Text and UI components

{
  "action": {
    "text": {
      "text": "Do you like any of these products?",
      "language": "en-US" // The voice configuration for Speech Generation
    },
    "uiAction": {
      "name": "virbe-payload-v2", // Check out the documentation to learn all supported UI components
      "value": {
        "buttons": [
          {
            "label": "Yes",
            "payloadType": "text", // or "namedAction"
            "payload": "Yes" // Value to send to the Conversational engine when clicked
          },
          {
            "label": "No",
            "payloadType": "text",
            "payload": "No" // Value to send to the Conversational engine when clicked
          }
        ],
        "cards": [
          {
            "title": "Product A",
            "imageUrl": "http://asset-store-domain.com/product-a-image.jpg",
            "payloadType": "text", // or "namedAction"
            "payload": "Tell me more on Product A" // Value to send to the Conversational engine when clicked
          },
          {
            "title": "Product B",
            "imageUrl": "http://asset-store-domain.com/product-b-image.jpg",
            "payloadType": "text",
            "payload": "Tell me more on Product B"
          },
          {
            "title": "Product C",
            "imageUrl": "http://asset-store-domain.com/product-c-image.jpg",
            "payloadType": "text",
            "payload": "Tell me more on Product C"
          }
        ]
      }
    }
  }
}
  • Responding with text and storing the user's email in the EndUserStore. Check out the documentation to learn how to invoke an integration pipeline (Hubspot, ActiveCampaign) or a custom webhook call when an EndUserStore key/value pair is stored

{
  "action": {
    "text": {
      "text": "Thanks for the email. My manager will get back to you soon.",
      "language": "en-US"
    },
    "endUserStore": {
      "key": "email",
      "value": "end-user-email@user-domain.com"
    }
  }
}
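
A small helper can assemble these response bodies consistently. The structure mirrors the JSON examples above; the function itself is just an illustrative convenience, not part of the Virbe API:

```python
def text_response(text, language="en-US", buttons=None, store=None):
    """Build a RoomMessageResponse body matching the examples above.

    buttons: optional list of (label, payload) pairs rendered as a
    "virbe-payload-v2" uiAction with text payloads.
    store: optional (key, value) pair written to the EndUserStore.
    """
    action = {"text": {"text": text, "language": language}}
    if buttons:
        action["uiAction"] = {
            "name": "virbe-payload-v2",
            "value": {
                "buttons": [
                    {"label": label, "payloadType": "text", "payload": payload}
                    for label, payload in buttons
                ]
            },
        }
    if store:
        key, value = store
        action["endUserStore"] = {"key": key, "value": value}
    return {"action": action}
```

For example, `text_response("Do you like it?", buttons=[("Yes", "Yes"), ("No", "No")])` produces the button response shown above.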

Asynchronous Room Message

Asynchronous communication is a perfect solution for improving UX when you have long-running chain logic for LLMs (e.g. ChatGPT) or human-handover scenarios.

  • React to a RoomMessageRequest by creating a task that will be processed asynchronously, e.g. in a separate thread pool or queue.

# FastAPI background task example
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

# RoomConversationRequest, RoomConversationResponse, RoomMessageAction and
# RoomMessageTextData are Pydantic models mirroring the schema above;
# prepare_async_response_on_text is your own function that computes and
# posts the asynchronous reply.

@app.post("/api/v1/room/async", response_model=RoomConversationResponse)
def room(request: RoomConversationRequest,
         background_tasks: BackgroundTasks):

    if request.action and request.action.text:
        background_tasks.add_task(
            prepare_async_response_on_text,
            request,
        )

    # Reply immediately so the Virtual Being keeps the user engaged
    # while the real answer is prepared in the background
    return RoomConversationResponse(
        action=RoomMessageAction(text=RoomMessageTextData(text="Let me think for a moment..."))
    )
  • Use the RoomMessageIngest schema to send a POST request to the room asynchronously using the following endpoint:

https://{your-virbe-dashboard-url}/api/v1/rooms/{request-room-id}/messages/api

Request Body Schema:

{
  "replyTo": "d64d99b5-249a-4495-8a89-73ebd674b2de", // Message id from RoomMessageRequest
  "endUserId": "f0354d62-95a1-41ec-9e87-b0089addeda0", // endUserId from RoomMessageRequest - make sure to set it if you're using EndUserStore
  "senderId": "custom-chatbot-async", // or any other value you want to see later in the logs
  "action": {
    "text": {
      "text": "This is the text Virtual Being will speak back asynchronously",
      "language": "en-US" // The voice configuration for Speech Generation
    }
  }
}

You can post multiple responses to a single request if you want to. To send other action types, check the sections above.
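Posting such a message from Python could look like the sketch below. The dashboard URL is a placeholder, and any authentication headers your instance requires are left out; only the body shape comes from the RoomMessageIngest schema above:

```python
import json
import urllib.request

# Placeholder - replace with your own Virbe Dashboard URL
DASHBOARD_URL = "https://your-virbe-dashboard-url"


def build_async_message(reply_to, end_user_id, text,
                        sender_id="custom-chatbot-async"):
    """Assemble a RoomMessageIngest body as shown in the schema above."""
    return {
        "replyTo": reply_to,
        "endUserId": end_user_id,
        "senderId": sender_id,
        "action": {"text": {"text": text, "language": "en-US"}},
    }


def post_async_message(room_id, body):
    """POST the message to the asynchronous room endpoint above.

    Add error handling and any auth headers your instance requires.
    """
    request = urllib.request.Request(
        f"{DASHBOARD_URL}/api/v1/rooms/{room_id}/messages/api",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)
```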

For a more detailed example scenario, check out the sample code on our GitHub:

Human handover

  • Stop responding automatically and pass all RoomMessageRequest payloads to your system, e.g. a Slack channel or your own software

  • Start posting asynchronously to the room conversation using the asynchronous messages described in the previous section
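
The two steps above can be sketched as follows. The in-memory handover flag, the forwarding callable, and the echo reply are all hypothetical; only the request/response shapes come from this page:

```python
# Room ids currently handed over to a human agent.
# Hypothetical in-memory flag - use a database or cache in production.
handover_rooms = set()


def on_room_message(room_id, request, forward_to_agent, respond):
    """Route a RoomMessageRequest either to a human agent or to the bot.

    forward_to_agent / respond are your own callables, e.g. a Slack
    webhook and the asynchronous room POST from the previous section.
    """
    if room_id in handover_rooms:
        # Handover active: stop answering automatically, forward everything
        forward_to_agent(room_id, request)
    else:
        text = (request.get("action") or {}).get("text", {}).get("text", "")
        respond(room_id, {"action": {"text": {"text": f"Echo: {text}",
                                              "language": "en-US"}}})
```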
