Connect the gateway to agent runtimes (OpenClaw, Hermes, etc.)

Agent runtimes call POST /v1/chat/completions over OpenAI-compatible HTTP, often with tools / tool_calls, SSE streaming, and content: null on assistant messages. For OpenClaw, Hermes Agent, and similar runtimes, you effectively point them at a custom OpenAI-compatible backend (base URL + Bearer token + model).
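
A minimal sketch of the kind of request these runtimes send, using Node's built-in fetch; the host, API key env var, model code, and tool definition are placeholders, not values this gateway prescribes.

```ts
// Streaming Chat Completions request with a tool definition, as an agent
// runtime would send it. Host, key, model code, and the tool are placeholders.
const res = await fetch("https://<host>/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.GATEWAY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "<model-code>",   // exact code from GET /v1/models
    stream: true,            // response arrives as SSE (text/event-stream)
    messages: [{ role: "user", content: "List the files in the repo." }],
    tools: [
      {
        type: "function",
        function: {
          name: "list_files",
          description: "List files in the working directory",
          parameters: { type: "object", properties: {} },
        },
      },
    ],
  }),
});
// With stream: true the body is a stream of `data:` chunks terminated by [DONE].
```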

Each product’s config files and env vars follow its docs; this page only covers HTTP-level commonalities.

Integration checklist

  1. Base URL: https://<host>/v1 (some clients append /v1 or other path suffixes themselves; verify the effective URL with GET /v1/models, as in the sketch after this list).
  2. API Key: valid tenant sk-….
  3. Model: exact code from GET /v1/models.
  4. TLS / SSE: the certificate chain must be valid, intermediate proxies must not buffer text/event-stream responses, and timeouts must be long enough for streaming.
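
Items 1–3 can be sanity-checked in one shot; a quick sketch using Node's fetch, with the host and key as placeholders:

```ts
// Confirms the base URL resolves, the key is accepted, and the target
// model code is listed. Host and key are placeholders.
const base = "https://<host>/v1";
const res = await fetch(`${base}/models`, {
  headers: { Authorization: `Bearer ${process.env.GATEWAY_API_KEY}` },
});
if (!res.ok) throw new Error(`GET /v1/models failed: ${res.status}`);
const { data } = await res.json();
console.log(data.map((m: { id: string }) => m.id)); // pick the exact model code from this list
```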

OpenClaw

Configure an OpenAI-compatible baseUrl and apiKey (field names vary by version). Upstream: openclaw/openclaw, #3307.

For why content may be null on assistant messages and other tool-call validation details, see the chatCompletionSchema comments and Chat Completions.
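
A sketch of the follow-up history an agent sends after a tool call, assuming the standard OpenAI shape: the assistant turn carries content: null alongside tool_calls, and the tool result references the call id. Ids and arguments here are made up.

```ts
// Follow-up message history after a tool call (standard OpenAI shape).
// The assistant turn must be accepted with content: null when tool_calls
// are present; the tool result is matched to it via tool_call_id.
const messages = [
  { role: "user", content: "List the files in the repo." },
  {
    role: "assistant",
    content: null,
    tool_calls: [
      {
        id: "call_123",
        type: "function",
        function: { name: "list_files", arguments: "{}" },
      },
    ],
  },
  { role: "tool", tool_call_id: "call_123", content: '["README.md", "src/"]' },
];
```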

Hermes Agent

In Adding Providers / Configuration, point the LLM upstream at this service’s …/v1. Do not confuse this with Hermes’ built-in API server (a local listener): you are configuring an external OpenAI-compatible backend, which is unrelated to whether Hermes itself listens only on localhost.
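
As a rough sketch only: a provider entry might look like the object below, but the actual field names and file format come from Hermes’ own docs, not from this page.

```ts
// Hypothetical provider entry; field names are illustrative, not Hermes' real
// schema. The point is that baseUrl targets this gateway (an external
// OpenAI-compatible backend), not Hermes' own local API listener.
const hermesProvider = {
  type: "openai-compatible",            // assumed provider type name
  baseUrl: "https://<host>/v1",         // this gateway, not 127.0.0.1
  apiKey: process.env.GATEWAY_API_KEY,  // a tenant sk-... key
  model: "<model-code>",                // exact code from GET /v1/models
};
```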

Multimodal

For input_file rules, see Chat Completions. On Node, @routerbrain/sdk can assemble attachments for you; see chat.messages.
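
A rough sketch of the raw message shape, assuming an input_file content part with filename and base64 data fields; the exact field names, accepted types, and size limits are defined on the Chat Completions page, and the chat.messages helpers can build this for you.

```ts
// Illustrative only: the input_file field names below are assumptions;
// check the Chat Completions doc for the gateway's actual schema.
import { readFileSync } from "node:fs";

const pdfBase64 = readFileSync("report.pdf").toString("base64");

const messages = [
  {
    role: "user",
    content: [
      { type: "text", text: "Summarize the attached report." },
      {
        type: "input_file",                                     // assumed part type
        filename: "report.pdf",                                 // assumed field
        file_data: `data:application/pdf;base64,${pdfBase64}`,  // assumed field
      },
    ],
  },
];
```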

See also

Back to docs home