
Chat Completions API

The core Jetty endpoint. One API, two modes.

Endpoint

POST /v1/chat/completions

Base URL: https://flows-api.jetty.io

Authentication

Authorization: Bearer $JETTY_API_TOKEN

Two Modes

Passthrough Mode (without jetty block)

A standard OpenAI-compatible LLM proxy that streams tokens from 100+ providers. Every call is automatically recorded as a trajectory for auditing and observability.

{
  "model": "claude-sonnet-4-6",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"}
  ],
  "stream": true
}

Supported models include: OpenAI (GPT-4, GPT-4o, etc.), Anthropic (Claude), Google (Gemini), Mistral, Cohere, Groq, and more. Model selection is a parameter — switch providers by changing the model field.
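Because the provider is selected entirely by the model field, switching backends is a one-line change. A minimal sketch of building otherwise-identical payloads (the model identifiers are illustrative; use whichever models your deployment exposes):

```python
# Build otherwise-identical request payloads that differ only in "model".
def build_request(model: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    }

anthropic_payload = build_request("claude-sonnet-4-6")
openai_payload = build_request("gpt-4o")
```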

Trajectory Recording

All passthrough calls are recorded as lightweight trajectories with a single completion step. By default they are grouped under the _chat task in your collection. You can organize calls into named tasks by including a jetty.task field:

{
  "model": "claude-sonnet-4-6",
  "messages": [{"role": "user", "content": "Summarize this article..."}],
  "jetty": {
    "task": "summarizer"
  }
}

Each trajectory captures:

  • Request: Model, messages (with large base64 content redacted), and LLM parameters
  • Response: Full assistant output, finish reason, and upstream provider ID
  • Usage: Prompt tokens, completion tokens, total tokens
  • Latency: End-to-end response time in milliseconds

Trajectories are browsable via the Jetty UI and queryable via the API, using the same infrastructure as runbook trajectories.

Runbook Mode (with jetty block)

Full agent execution in an isolated sandbox with artifact persistence and trajectory recording.

{
  "model": "claude-sonnet-4-6",
  "messages": [
    {"role": "system", "content": "Your runbook instructions..."},
    {"role": "user", "content": "Execute the task"}
  ],
  "stream": true,
  "jetty": {
    "runbook": true,
    "collection": "my-org",
    "task": "my-task",
    "agent": "claude-code",
    "file_paths": ["uploads/input.csv"]
  }
}

Request Body

Standard Fields (OpenAI-compatible)

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| model | string | Yes | Model identifier (e.g., claude-sonnet-4-6, gpt-4o) |
| messages | array | Yes | Array of message objects with role and content |
| stream | boolean | No | Enable SSE streaming (default: false) |
| temperature | number | No | Sampling temperature |
| max_tokens | number | No | Maximum tokens in response |

Jetty Extension

The jetty block is optional for passthrough mode and required for runbook mode.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| jetty.task | string | No | Task name for grouping trajectories (default: _chat for passthrough) |
| jetty.runbook | boolean | Runbook mode | Enable runbook mode |
| jetty.collection | string | Runbook mode | Namespace for the task (org or project name) |
| jetty.agent | string | Runbook mode | Agent CLI to use: claude-code, codex, or gemini-cli |
| jetty.file_paths | string[] | No | Files to upload into the sandbox workspace |

Message Format

{
  "role": "system" | "user" | "assistant",
  "content": string
}

The system message typically contains the runbook — the full specification of what the agent should do.

Response

Passthrough Mode

Standard OpenAI chat completion response (streaming or non-streaming), plus a jetty_metadata object:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [{ "message": { "role": "assistant", "content": "..." }, "finish_reason": "stop" }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30 },
  "jetty_metadata": {
    "trajectory_id": "abc12345",
    "collection": "my-org",
    "mode": "passthrough"
  }
}
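When calling the endpoint over plain HTTP, jetty_metadata arrives as a top-level key in the response JSON. A small helper for pulling out the trajectory ID (a sketch; field names follow the response shape shown above):

```python
def trajectory_id_from(response_json: dict):
    """Return the Jetty trajectory ID from a passthrough response, or None.

    Reads the top-level "jetty_metadata" object; returns None when the
    field is absent (e.g., a non-Jetty OpenAI-compatible server).
    """
    return response_json.get("jetty_metadata", {}).get("trajectory_id")
```

With the OpenAI SDK, non-standard top-level fields may not be surfaced on the typed response object; reading the raw JSON is the safest way to access them.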

Runbook Mode

Returns a structured response with:

  • Trajectory ID — For polling status and retrieving artifacts
  • Artifact URLs — Files produced by the agent
  • Execution metadata — Timing, token usage, cost

When streaming (stream: true), progress is delivered via SSE events.
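The SSE framing is not spelled out above; assuming the OpenAI-style convention (JSON payloads on data:-prefixed lines, terminated by a data: [DONE] sentinel), a minimal line parser might look like:

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line; return the decoded JSON event, or None.

    Assumes OpenAI-style framing: payload lines look like 'data: {...}'
    and the stream ends with 'data: [DONE]'. Comments, blank lines, and
    keep-alives yield None.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)
```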

Polling Trajectory Status

After launching a runbook, poll for completion:

GET /api/v1/trajectories/{trajectory_id}
Authorization: Bearer $JETTY_API_TOKEN

Response includes:

  • status: pending | running | completed | failed
  • artifacts: Array of file URLs
  • steps: Execution history
  • usage: Token counts and cost
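A polling loop over this endpoint can be sketched as follows (stdlib only; the interval and timeout values are arbitrary choices, and the terminal statuses mirror the list above):

```python
import json
import time
import urllib.request

TERMINAL_STATUSES = {"completed", "failed"}

def is_terminal(status: str) -> bool:
    """True once a trajectory has stopped, successfully or not."""
    return status in TERMINAL_STATUSES

def wait_for_trajectory(base_url: str, token: str, trajectory_id: str,
                        interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll GET /api/v1/trajectories/{id} until it reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{base_url}/api/v1/trajectories/{trajectory_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.load(resp)
        if is_terminal(body["status"]):
            return body
        time.sleep(interval)
    raise TimeoutError(f"trajectory {trajectory_id} did not finish in {timeout}s")
```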

File Upload

Upload files before referencing them in file_paths:

POST /api/v1/files/upload
Authorization: Bearer $JETTY_API_TOKEN
Content-Type: multipart/form-data

file: <binary>
collection: my-org
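In Python, the same upload can be sketched with a hand-rolled multipart/form-data body (stdlib only; the field names mirror the request above, and the encoder is a minimal sketch rather than a full RFC 7578 implementation):

```python
import uuid

def encode_multipart(fields: dict, file_field: str, filename: str, file_bytes: bytes):
    """Build a multipart/form-data body by hand (stdlib only).

    Returns (content_type, body). Text fields are assumed ASCII-safe and
    the file is sent as application/octet-stream.
    """
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            (f'--{boundary}\r\n'
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f'{value}\r\n').encode()
        )
    parts.append(
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="{file_field}"; filename="{filename}"\r\n'
         'Content-Type: application/octet-stream\r\n\r\n').encode()
    )
    parts.append(file_bytes + b"\r\n")
    parts.append(f"--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", b"".join(parts)

content_type, body = encode_multipart(
    {"collection": "my-org"}, "file", "input.csv", b"a,b\n1,2\n"
)
```

Send the returned body with urllib.request (setting Content-Type to the returned content_type plus the Authorization header), or simply use the third-party requests library's files= parameter instead.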

Examples

Python (using OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    base_url="https://flows-api.jetty.io",
    api_key="your-jetty-api-token",
)

# Passthrough mode — standard LLM call
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "user", "content": "Hello"}
    ],
)

# Runbook mode — add extra_body for the jetty block
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "system", "content": "Analyze the uploaded CSV..."},
        {"role": "user", "content": "Run the analysis"},
    ],
    stream=True,
    extra_body={
        "jetty": {
            "runbook": True,
            "collection": "my-org",
            "task": "analyze-data",
            "agent": "claude-code",
            "file_paths": ["uploads/data.csv"],
        }
    },
)

curl

# Passthrough
curl -X POST "https://flows-api.jetty.io/v1/chat/completions" \
  -H "Authorization: Bearer $JETTY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

# Runbook
curl -X POST "https://flows-api.jetty.io/v1/chat/completions" \
  -H "Authorization: Bearer $JETTY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "system", "content": "Analyze the uploaded data..."},
      {"role": "user", "content": "Run analysis"}
    ],
    "stream": true,
    "jetty": {
      "runbook": true,
      "collection": "my-org",
      "task": "analyze-data",
      "agent": "claude-code",
      "file_paths": ["uploads/data.csv"]
    }
  }'

TypeScript

const response = await fetch('https://flows-api.jetty.io/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.JETTY_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'claude-sonnet-4-6',
    messages: [
      { role: 'system', content: 'Your runbook...' },
      { role: 'user', content: 'Execute' },
    ],
    stream: true,
    jetty: {
      runbook: true,
      collection: 'my-org',
      task: 'my-task',
      agent: 'claude-code',
      file_paths: ['uploads/input.csv'],
    },
  }),
});
