POST /v1/chat/completions

This endpoint is aligned with the OpenAI Chat Completions API: it accepts a snake_case JSON body and handles auth, routing, the upstream call, and usage recording. Response modes:

  • Non-streaming: full JSON (stream omitted or false).
  • Streaming: Content-Type: text/event-stream, Cache-Control: no-cache, Connection: keep-alive, HTTP 200; body is data: … lines, usually ending with data: [DONE] (details per implementation and README).
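In streaming mode the client reads `data: …` lines until the `[DONE]` sentinel. A minimal sketch of that parsing loop (the sample payloads are hypothetical; a real client should read the stream incrementally rather than from a full string):

```python
import json

def parse_sse_chunks(raw: str) -> list[dict]:
    """Collect chat-completion chunk objects from an SSE body.

    Stops at the `data: [DONE]` sentinel, per the convention above.
    """
    chunks = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunks.append(json.loads(payload))
    return chunks

# Hypothetical two-chunk stream:
body = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n\n'
    'data: [DONE]\n\n'
)
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_chunks(body))
```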

Files in user messages: input_file

When messages[].content is an array, file parts must use type: "input_file" with exactly one of the following (mutually exclusive):

  • file_url: http(s) URL (PDF, video, etc.).
  • file_id: ID from POST /v1/files (must belong to the tenant and API key).
  • filename + file_data: inline file contents; file_data is often a data URL (data:<mime>;base64,…).
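As an illustration, a request body with one text part and one input_file part (model name, URL, and the commented-out alternatives are placeholders; remember that only one variant may appear per part):

```python
# Hypothetical request body for POST /v1/chat/completions.
request = {
    "model": "example-model",  # placeholder
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this document."},
            # Exactly one of the three variants per input_file part:
            {"type": "input_file", "file_url": "https://example.com/report.pdf"},
            # {"type": "input_file", "file_id": "file_abc123"},
            # {"type": "input_file", "filename": "report.pdf",
            #  "file_data": "data:application/pdf;base64,JVBERi0..."},
        ],
    }],
}
```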

Plain-text parts remain { "type": "text", "text": "…" }. image_url (vision) and input_audio parts follow OpenAI's multimodal conventions.

Normalization and MIME

After validation, input_file is normalized in place to the internal representation (historically type: "file"), then PDF preprocessing, mime_type enrichment, and upstream mapping run.
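The in-place normalization step can be sketched as follows (the internal field layout beyond the type rename is not documented here, so this only shows the rename):

```python
def normalize_input_file(part: dict) -> dict:
    """Rewrite a validated `input_file` part to the internal representation.

    A sketch of the in-place normalization described above; downstream
    steps (PDF preprocessing, mime_type enrichment, upstream mapping)
    then operate on the normalized part.
    """
    if part.get("type") == "input_file":
        part["type"] = "file"  # historical internal type name
    return part

part = {"type": "input_file", "file_url": "https://example.com/a.pdf"}
normalize_input_file(part)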

mime_type resolution (summary):

  • file_id: from the DB record and explicit fields.
  • file_url: from the URL extension, then HEAD / Range probing.
  • file_data: from the data-URL MIME.

An unknown MIME or a missing file yields 400 with an error.code (e.g. file_not_found, file_mime_unknown); the full table is in the README.
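The resolution order above can be sketched like this (the `db` dict stands in for the files table, the helper name and extension map are assumptions, and HEAD/Range probing for file_url is omitted; error codes follow this page):

```python
def resolve_mime(part: dict, db: dict) -> str:
    """Sketch of mime_type resolution for a file part.

    Raises ValueError with an error.code-style message on failure.
    """
    if "file_id" in part:
        rec = db.get(part["file_id"])
        if rec is None:
            raise ValueError("file_not_found")
        return rec["mime_type"]
    if "file_url" in part:
        # By extension only; a real implementation would fall back to
        # a HEAD or Range request when the extension is missing.
        if part["file_url"].endswith(".pdf"):
            return "application/pdf"
        raise ValueError("file_mime_unknown")
    if "file_data" in part:
        data = part["file_data"]
        if data.startswith("data:") and ";" in data:
            return data[5:data.index(";")]  # MIME between "data:" and ";"
        raise ValueError("file_mime_unknown")
    raise ValueError("invalid_input_file")
```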

Deprecated content parts

Parts with type: "file", video_url, and other deprecated types are rejected with 400 and an error.code such as unsupported_content_part or invalid_input_file (see the README).
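A minimal sketch of that rejection check (the allowed set is taken from this page; the function name is an assumption):

```python
# Part types this page documents as accepted.
ALLOWED_TYPES = {"text", "image_url", "input_audio", "input_file"}

def check_part_type(part: dict) -> None:
    """Reject deprecated or unknown content-part types.

    In the real service this surfaces as HTTP 400 with an error.code.
    """
    if part.get("type") not in ALLOWED_TYPES:
        raise ValueError("unsupported_content_part")
```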

The @routerbrain/sdk helper fromPaths emits input_file for images and other file-like media; if you hand-write JSON, follow the three-way rule above.
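For hand-written JSON, the three-way rule (exactly one of file_url, file_id, or filename + file_data) can be enforced with a check like this sketch (function name is an assumption; the error code follows this page):

```python
FILE_KEYS = ("file_url", "file_id", "file_data")

def validate_input_file(part: dict) -> None:
    """Enforce mutual exclusivity for input_file parts.

    Exactly one of file_url / file_id / file_data must be present,
    and file_data additionally requires filename.
    """
    present = [k for k in FILE_KEYS if k in part]
    if len(present) != 1:
        raise ValueError("invalid_input_file")
    if present == ["file_data"] and "filename" not in part:
        raise ValueError("invalid_input_file")
```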

PDF and root fields

When a parseable PDF is present, the root pdf_preprocess field controls expansion and engine selection; defaults when it is omitted are listed in PDF preprocessing. With the SDK, put the config in chatRequest.plugins.pdf and it is merged into the root field.
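The plugins-to-root merge could look like the sketch below. The merge semantics (shallow merge, root field taking precedence) and the "engine" key are assumptions, not documented behavior:

```python
def merge_pdf_config(chat_request: dict) -> dict:
    """Sketch: fold chatRequest.plugins.pdf into the root pdf_preprocess.

    Assumes a shallow merge where an explicit root pdf_preprocess wins
    over the plugin config.
    """
    body = dict(chat_request)             # shallow copy; leave input intact
    pdf = body.pop("plugins", {}).get("pdf")
    if pdf is not None:
        body["pdf_preprocess"] = {**pdf, **body.get("pdf_preprocess", {})}
    return body
```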

See also

Back to docs home