Architecture Overview

From your application to persistent storage — the full Jetty stack.

Platform Overview

One API, Two Modes

Jetty exposes a single /v1/chat/completions endpoint that speaks the OpenAI Chat Completions protocol.

Passthrough Mode — Without the jetty block, Jetty behaves as a standard LLM proxy, streaming tokens from Claude, GPT, Gemini, Mistral, or any of 100+ providers. Existing SDKs and integrations work out of the box.

Runbook Mode — With the jetty block, Jetty provisions an isolated sandbox, runs an agent autonomously, persists all artifacts, and records a full trajectory.

The extension is additive. Any integration that talks to OpenAI can talk to Jetty.
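As a sketch, the two modes differ only by one extra key in the request body. The exact schema of the jetty block is an assumption for illustration; the model name and runbook text are hypothetical.

```python
# Illustrative request bodies for Jetty's /v1/chat/completions endpoint.
# The shape of the "jetty" block below is an assumption, not the documented schema.

# Passthrough mode: a standard OpenAI-style chat completion request.
passthrough_request = {
    "model": "claude-sonnet-4",  # illustrative model identifier
    "messages": [{"role": "user", "content": "Summarize this repo."}],
    "stream": True,
}

# Runbook mode: the same request plus a "jetty" block that asks for
# an isolated sandbox and an autonomous agent run.
runbook_request = {
    **passthrough_request,
    "jetty": {  # hypothetical block contents
        "runbook": "Clone the repo, run the tests, report failures.",
        "agent": "claude-code",
    },
}

# The extension is additive: stripping the "jetty" key yields a plain
# passthrough request any OpenAI-compatible client could send.
plain_request = {k: v for k, v in runbook_request.items() if k != "jetty"}
```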

Components

Passthrough (LLM)

Unified provider integration. Model selection is a parameter, not an architecture decision. Switch from Claude to Gemini by changing one field.

Supported: OpenAI, Anthropic, Google, Mistral, Cohere, Groq, and 100+ more.
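In practice the one-field switch looks like this; both model identifiers are illustrative placeholders, not confirmed Jetty model names.

```python
# Model selection is just a field in the request body.
request = {
    "model": "claude-sonnet-4",  # illustrative identifier
    "messages": [{"role": "user", "content": "Hello"}],
}

# Switch from Claude to Gemini by changing one field; nothing else
# about the request or the calling code changes.
request["model"] = "gemini-2.0-flash"  # illustrative identifier
```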

Workflow Engine

Durable execution engine for multi-step DAGs. 47+ step types: LLM calls, evaluation, branching, loops, file transforms, agent runs. Handles sequencing, error recovery, retry logic, and artifact management.

See Workflow Orchestration.

Runbook Engine

Provisions ephemeral sandboxes from pre-built snapshots. Installs the agent CLI, uploads files, injects the runbook, and lets the agent execute with full shell access. Artifacts persist to cloud storage after the sandbox is destroyed.

See Agentic Workflows.

Persistence Layer

Component         Purpose
Relational DB     Workflow definitions, trajectory metadata, user data
Object Storage    Files, artifacts, runbook outputs
Trace Collector   Execution spans, token usage, latency, cost

Core Concepts

Collections

Organizational containers for related workflows, datasets, and models.

  • Scope permissions and access
  • Group related tasks
  • Share resources across tasks

Tasks (Workflows)

JSON configurations that define multi-step processes.

  • init_params: Default input parameters
  • step_configs: Configuration for each step
  • steps: Ordered list of steps to execute
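A minimal task definition might look like the following. Only the three top-level keys come from the description above; the step names, step types, and template syntax inside are illustrative assumptions.

```python
import json

# Sketch of a task (workflow) definition. The top-level keys
# (init_params, step_configs, steps) match the docs; everything
# inside them is an illustrative assumption.
task = {
    "init_params": {"topic": "durable execution"},
    "step_configs": {
        "draft": {
            "type": "llm_call",
            "prompt": "Write a paragraph about {init_params.topic}",
        },
        "review": {
            "type": "llm_call",
            # references the output of the previous step
            "prompt": "Critique this draft: {draft.outputs.text}",
        },
    },
    "steps": ["draft", "review"],  # ordered list of steps to execute
}

serialized = json.dumps(task, indent=2)  # tasks are JSON configurations
```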

Steps

Atomic units of work executed by the workflow engine.

  • Each step has defined inputs and outputs
  • Steps can reference outputs from previous steps
  • Powered by durable execution activities for reliability

Trajectories

Execution records capturing the full history of a workflow run.

  • Input parameters
  • Step-by-step outputs
  • Status and timing information
  • Labels for organization and filtering
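An illustrative trajectory record, covering the four kinds of data listed above; every field name here is an assumption about shape, not the actual Jetty schema.

```python
# Hypothetical trajectory record: inputs, per-step outputs,
# status/timing, and labels. Field names are assumptions.
trajectory = {
    "task": "draft-and-review",
    "init_params": {"topic": "durable execution"},
    "steps": [
        {"name": "draft", "status": "completed",
         "duration_ms": 1840, "outputs": {"text": "a first draft"}},
        {"name": "review", "status": "completed",
         "duration_ms": 960, "outputs": {"text": "a critique"}},
    ],
    "status": "completed",
    "labels": ["docs-example", "v1"],  # for organization and filtering
}
```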

Path Expressions

References to data within the workflow context.

  • init_params.field - Input parameters
  • step_name.outputs.field - Step outputs
  • step_name.inputs.field - Step inputs
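The idea behind these expressions can be sketched as dotted-path lookups against the workflow context. The real engine's resolution rules are richer; this is a minimal illustration only.

```python
# Minimal sketch of resolving path expressions such as
# "init_params.topic" or "draft.outputs.text" against a context dict.
def resolve(path: str, context: dict):
    node = context
    for part in path.split("."):  # walk one dotted segment at a time
        node = node[part]
    return node

context = {
    "init_params": {"topic": "durable execution"},
    "draft": {
        "inputs": {"prompt": "write"},
        "outputs": {"text": "a paragraph"},
    },
}

topic = resolve("init_params.topic", context)   # input parameter
text = resolve("draft.outputs.text", context)   # step output
```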

Open Protocols

  • OpenAI-compatible — Drop-in replacement for /v1/chat/completions. The jetty extension is additive.
  • Bring any model — 100+ providers. Switch models by changing one field.
  • Bring any agent — Claude Code, Codex, Gemini CLI today. Any agent CLI tomorrow.
  • Standard observability — Built-in tracing captures tokens, latency, cost, spans. Compatible with existing monitoring stacks.

Key Properties

  • One API, zero infra — No Docker configs, no container orchestration, no agent lifecycle management
  • Persistent by default — Every execution produces a trajectory with full history and artifacts
  • Production-grade — Durable workflow execution survives crashes; retries, timeouts, and webhooks built in
  • Observable — Built-in tracing, real-time log streaming, structured trajectory data

Data Flow

init_params ──► step_1 ──► step_2 ──► step_3 ──► final_output
                   │          ▲
                   └──────────┘
               (path expressions)
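The flow above can be sketched as a context dict threaded through the steps, with each step reading earlier outputs and writing its own back. The step bodies here are stand-ins, not real step types.

```python
# Sketch of the data flow: each step sees the shared context
# (init_params plus all earlier step outputs) and appends its outputs.
def run(steps, init_params):
    context = {"init_params": init_params}
    for name, fn in steps:
        context[name] = {"outputs": fn(context)}  # earlier outputs visible
    return context

steps = [
    ("step_1", lambda ctx: {"n": ctx["init_params"]["n"] + 1}),
    ("step_2", lambda ctx: {"n": ctx["step_1"]["outputs"]["n"] * 2}),
    ("step_3", lambda ctx: {"n": ctx["step_2"]["outputs"]["n"] - 3}),
]

final = run(steps, {"n": 4})
# final["step_3"]["outputs"]["n"] == 7  (4+1=5, 5*2=10, 10-3=7)
```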

Durable Execution Foundation

Jetty workflows are powered by a durable execution engine, providing:

  • Durability: Workflows survive failures and restarts
  • Scalability: Handle thousands of concurrent workflows
  • Visibility: Full execution history and debugging
  • Reliability: Automatic retries and error handling
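The retry property can be illustrated with a simple backoff loop. Real durable-execution engines also persist state between attempts and across restarts; this sketch shows only the retry policy.

```python
import time

# Sketch of automatic retries with exponential backoff.
def with_retries(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the error
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds, simulating a transient error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky)  # succeeds on the third attempt
```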