SDK quickstart
For Node and backend scripts: the npm package `@routerbrain/sdk` (repo path `packages/sdk`) talks to RouterBrain's OpenAI-compatible HTTP API (`POST /v1/chat/completions`, `GET /v1/models`, etc.). The SDK is built on standard `fetch` and does not wrap the official OpenRouter client; request/response types are intentionally loose JSON so they can track the OpenAI shapes.
Package and install
```sh
pnpm add @routerbrain/sdk
```
| Entry | Use case |
|---|---|
| `@routerbrain/sdk` | Browser and Node: `RouterBrain`, `chat.send`, files, models, `extractTextDelta`, error classes, etc. |
| `@routerbrain/sdk/node` | Node only: a side-effect import attaches `client.chat.messages` and the entry exports `createRouterBrainFromEnv`, etc. Same version as the main package; use `import "@routerbrain/sdk/node"`, not a second package name (see the sketch below). |
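A minimal sketch of the Node entry's side-effect import, assuming a client constructed as in the section below:

```ts
// Importing the Node entry for its side effects attaches Node-only
// helpers such as client.chat.messages to client instances.
import "@routerbrain/sdk/node";
import { RouterBrain } from "@routerbrain/sdk";

const client = new RouterBrain({ serverURL: "https://gw.example.com/v1" });
// client.chat.messages is now available (Node only).
```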
Match the HTTP API's multimodal rules: file-like parts in user messages should use `type: "input_file"` (`file_url`, `file_id`, or `filename` + `file_data`). Legacy `file`/`video_url` parts are normalized to `input_file` during SDK serialization; new code should follow the OpenAI Chat Completions conventions (see Chat Completions and `input_file`).
Construct RouterBrain
```ts
import { RouterBrain } from "@routerbrain/sdk";

const client = new RouterBrain({
  /**
   * API root (`serverURL`). Normalized to a trailing `/` base, then
   * `chat/completions`, `files`, etc. are appended.
   * The pathname must be `""`, `/`, or `/v1`; otherwise `TypeError`.
   */
  serverURL: "https://gateway.example.com",
  /** When set, adds `Authorization: Bearer <apiKey>` */
  apiKey: process.env.GATEWAY_API_KEY,
  /** Merged into every HTTP request (overlong values are truncated; see the README) */
  defaultHeaders: {
    "x-trace-id": "req-123",
    "x-user-id": "user-456",
    "x-agent-name": "my-service",
  },
});
```
Valid `serverURL` examples
| Value | Notes |
|---|---|
| `https://gw.example.com` | Apex; normalized to a trailing `/` |
| `https://gw.example.com/` | Explicit root |
| `https://gw.example.com/v1` | Common: API under `/v1` |
Invalid: pathnames such as `/v2` or `/api/v1`; per the SDK's `/v1` contract, the pathname must be empty, `/`, or `/v1`.
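As the constructor comment above notes, an out-of-contract pathname should be rejected with a `TypeError`:

```ts
// Pathnames other than "", "/", or "/v1" are rejected.
new RouterBrain({ serverURL: "https://gw.example.com/api/v1" }); // TypeError
```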
Per-request `RequestOptions` accept a `signal` (to cancel the request) or a temporary `serverURL` override (the same pathname rules apply).
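A sketch of cancellation, assuming `RequestOptions` are passed as the second argument to `chat.send`:

```ts
// Abort the request if it takes longer than 10 seconds.
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10_000);
try {
  await client.chat.send(
    {
      chatRequest: {
        model: "openai/gpt-4o-mini",
        messages: [{ role: "user", content: "Hello" }],
      },
    },
    { signal: controller.signal },
  );
} finally {
  clearTimeout(timeout);
}
```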
Minimal non-streaming
```ts
const result = await client.chat.send({
  chatRequest: {
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Introduce yourself in one sentence." }],
  },
});
```
The return type is a loose `ChatResult`; parse it like an OpenAI Chat Completions response, e.g. `choices[0].message.content`. Do not use `extractTextDelta` on the full non-streaming payload.
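For example, reading the assistant text (the optional chaining is deliberate, since `ChatResult` is loose JSON):

```ts
// content is usually a string, but loose typing means it may be structured.
const text = result.choices?.[0]?.message?.content;
console.log(typeof text === "string" ? text : JSON.stringify(text));
```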
Minimal streaming
```ts
import { extractTextDelta } from "@routerbrain/sdk";

const stream = await client.chat.send({
  chatRequest: {
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
    stream: true,
  },
});

for await (const chunk of stream) {
  process.stdout.write(extractTextDelta(chunk));
}
```
`extractTextDelta` pulls the UTF-8 text delta from each chunk (it prefers `choices[0].delta.content` and handles piecewise arrays and some reasoning fields). Non-2xx HTTP responses or SSE errors throw; see Error types.
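A sketch of handling those errors around the stream; the dedicated error classes are documented under Error types, so this catches generically:

```ts
try {
  const stream = await client.chat.send({
    chatRequest: {
      model: "openai/gpt-4o-mini",
      messages: [{ role: "user", content: "Hello" }],
      stream: true,
    },
  });
  for await (const chunk of stream) {
    process.stdout.write(extractTextDelta(chunk));
  }
} catch (err) {
  // Non-2xx HTTP responses and mid-stream SSE errors both land here.
  console.error("chat stream failed:", err);
}
```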
Create from environment (Node)
For scripts/CI, avoid repeating config:
```ts
import { createRouterBrainFromEnv } from "@routerbrain/sdk/node";

const client = createRouterBrainFromEnv(process.env);
```
For the env table and sample commands, see Create client from environment.
Next steps
- Chat: `chat.send` (body normalization, `plugins.pdf`, streaming errors); see Streaming, tools, and bodies
- Node: `chat.messages` (paths/URLs, `fromTurns`); see Files and Models
- Python and other OpenAI-compatible SDKs
- Field-level details, MIME, plugins, limits: `packages/sdk/README.md`
- HTTP / ops (sidebar): Gateway quickstart, etc., which complements the `@routerbrain/sdk` prose