ChatGPT / LLM Capabilities

Using drivers to provide capabilities to large language models

It's possible to create a custom LLM chat bot per system, providing the LLM with the ability to execute functions within the system.

For a system to work as a ChatGPT chat bot, it requires the PlaceOS LLM Interface driver. This driver is used to define the system prompt and the initial message to display to the user.

Drivers implementing the ChatFunctions interface expose the available functionality to the LLM.
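
For orientation, the sketch below shows how a driver opts into this mechanism. The class name and require path are assumptions based on common placeos-driver conventions, and the function mirrors the Schedule.my_schedule call shown later in the Chat API examples; the capabilities getter and Description annotation are explained in detail in the next section.

require "placeos-driver"
# assumed require path for the ChatFunctions interface
require "placeos-driver/interface/chat_functions"

# hypothetical driver name for illustration
class Example::ScheduleCapability < PlaceOS::Driver
  include Interface::ChatFunctions

  descriptive_name "Schedule Capability (LLM example)"
  generic_name :Schedule

  # the summary the LLM uses to discover this driver
  getter capabilities : String do
    "provides details of my daily schedule and meeting room bookings"
  end

  # only functions with a Description annotation are exposed to the LLM
  @[Description("returns my meetings for the day, day_offset 0 is today, 1 is tomorrow")]
  def my_schedule(day_offset : Int32 = 0)
    # ... query the calendar driver for events here
  end
end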

Building a capability

Each driver should encapsulate related functions that can easily be summarised. The LLM uses this summary to discover which driver provides the function it needs.

The key to building effective functions is to make them as human-friendly as possible. If the parameters are hard to understand, or not structured the way a person would express them, the LLM will produce more errors. For example, prefer parsable date strings over Unix timestamps.

Descriptions should explain how to use any non-obvious parameters and be firm when describing any order of operations:

# Capability example
getter capabilities : String do
  String.build do |str|
    str << "provides details of my daily schedule, meeting room bookings and events I'm attending.\n"
    str << "lookup or search for the email and phone numbers of other staff members if you haven't been provided their details. Do not guess.\n"
    str << "check schedules before booking or moving meetings to ensure no one is busy at that time\n"
  end
end
# Parameter usage example, only functions with descriptions are provided to the LLM
@[Description("search for a staff members phone and email addresses using odata filter queries, don't include `$filter=`, for example: `givenName eq 'mary' or startswith(surname,'smith')`, confrim with the user when there are multiple results, search for both givenName and surname using `or` if there is ambiguity")]
def search_staff_member(filter : String)
  # ...
end
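
Continuing the human-friendly parameter guidance above, a function that works with dates is easier for the LLM to call correctly with a plain date string than with a Unix timestamp. A hypothetical sketch, where the function name and body are illustrative only:

@[Description("list the meetings on a given day, the date is formatted YYYY-MM-DD, for example: 2024-03-18")]
def meetings_on(date : String)
  # a parsable date string is far less error prone for the LLM than epoch seconds
  day = Time.parse(date, "%Y-%m-%d", Time::Location::UTC)
  # ... query events between day and day + 1.day
end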

Configuring access to OpenAI or Azure APIs

Either the OpenAI API or the Azure OpenAI API can be used to provide the LLM functionality. API keys are defined for each domain in its internal settings.

Settings

"openai": {
  # either the OpenAI or Azure API key
  "api_key": "",
  # specify your Azure API base URI here; omit this key when not using Azure
  "api_base": "",
  "api_model": "gpt-4-1106-preview",
  # max tokens before the conversation needs to be truncated
  # this is a good default for GPT 4
  "max_tokens": 11264
}

Chat API

These HTTP and Websocket API routes are available for interfacing with the LLM:

Websocket

To establish a new chat, open a websocket to: WS /api/engine/v2/chatgpt/chat/:system_id

Request messages are raw strings (frontend -> backend):

What do I have on today?

Response messages are JSON encoded (backend -> frontend)

{
  # use the chat_id to recover a chat if the websocket is disconnected
  "chat_id": "chats-GXs4_ynIhe",
  "message": "checking Schedule capabilities",
  # progress messages are provided as the request is worked on
  "type": "progress",
  "function": "list_function_schemas",
  # token usage is provided for debugging purposes
  # as you don't want requests exceeding your limits
  "usage": {
    "prompt_tokens": 883,
    "completion_tokens": 16,
    "total_tokens": 899
  }
}
{
  "chat_id": "chats-GXs4_ynIhe",
  "message": "performing action: Schedule.my_schedule({\"day_offset\" => 0})",
  "type": "progress",
  "function": "call_function",
  "usage": {
    "prompt_tokens": 2197,
    "completion_tokens": 26,
    "total_tokens": 2223
  }
}
{
  "chat_id": "chats-GXs4_ynIhe",
  "message": "condensing progress: Checked schedule for today, no meetings are present.",
  "type": "progress",
  "function": "task_complete",
  "usage": {
    "prompt_tokens": 2240,
    "completion_tokens": 23,
    "total_tokens": 2263
  },
  # progress messages are pruned from the chat to minimize token usage
  "compressed_usage": 883
}
{
  "chat_id": "chats-GXs4_ynIhe",
  "message": "You don't have any meetings scheduled for today. Is there anything else you would like to know or do?",
  "type": "response",
  "usage": {
    "prompt_tokens": 931,
    "completion_tokens": 23,
    "total_tokens": 954
  },
  "compressed_usage": 954
}

If the websocket is dropped you can resume the chat where you left off by re-establishing with the resume parameter: WS /api/engine/v2/chatgpt/chat/:system_id?resume=chats-GXs4_ynIhe
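
As a rough illustration of the message flow, the sketch below opens a chat, sends a raw string request and prints the JSON responses. The host, system id, environment variable and bearer-token authentication are assumptions for the example:

require "http"
require "json"
require "uri"

# hypothetical host, system id and access token
host      = "wss://example.placeos.run"
system_id = "sys-12345"
headers   = HTTP::Headers{"Authorization" => "Bearer #{ENV["PLACEOS_TOKEN"]}"}

ws = HTTP::WebSocket.new(URI.parse("#{host}/api/engine/v2/chatgpt/chat/#{system_id}"), headers)

ws.on_message do |raw|
  payload = JSON.parse(raw)
  type    = payload["type"]?.try(&.as_s?)

  if type == "response"
    puts "assistant: #{payload["message"]}"
  else
    # progress updates arrive while the request is being worked on
    puts "#{type}: #{payload["message"]}"
  end
end

# request messages are raw strings
ws.send("What do I have on today?")

# process incoming frames (blocks the current fiber)
ws.run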

REST API Routes

  • List your past chats GET /api/engine/v2/chatgpt/

  • List the chat history GET /api/engine/v2/chatgpt/:chat_id

  • Delete chat history DELETE /api/engine/v2/chatgpt/:chat_id
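
A hypothetical usage sketch of these routes, assuming the same host and bearer-token authentication as the websocket example (the chat id comes from the listing or from the websocket messages):

require "http"

host    = "https://example.placeos.run"
headers = HTTP::Headers{"Authorization" => "Bearer #{ENV["PLACEOS_TOKEN"]}"}

# list your past chats
puts HTTP::Client.get("#{host}/api/engine/v2/chatgpt/", headers: headers).body

# list, then delete, the history of a specific chat
chat_id = "chats-GXs4_ynIhe"
puts HTTP::Client.get("#{host}/api/engine/v2/chatgpt/#{chat_id}", headers: headers).body
HTTP::Client.delete("#{host}/api/engine/v2/chatgpt/#{chat_id}", headers: headers)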

See the example driver and its spec as a template.
