Welcome to Jetty!

Jetty is a platform for running AI agents and workflows in production.

Three Use Cases

1. Agentic Workflows

Describe outcomes in English. Jetty provisions a sandbox, runs the agent, persists artifacts, and returns the results. No infrastructure to manage.

Learn more →

2. Workflow Orchestration

Build multi-step AI data pipelines with built-in evaluation. Chain LLM calls, agent runs, quality gates, and control flow into DAGs.

Learn more →
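
The pipeline shape described above (LLM call, then evaluation, then a quality gate deciding control flow) can be sketched in plain Python. This is illustrative only, not the Jetty SDK: the step functions, the score, and the gate threshold are all invented stand-ins.

```python
def summarize(doc: str) -> str:
    # Stand-in for an LLM call step.
    return doc[:50]

def score(summary: str) -> float:
    # Stand-in for an evaluation step; returns a quality score in [0, 1].
    return 0.9 if summary else 0.0

def pipeline(doc: str, threshold: float = 0.8) -> str:
    summary = summarize(doc)   # step 1: LLM call
    quality = score(summary)   # step 2: built-in evaluation
    if quality < threshold:    # quality gate: control flow branches on the score
        raise ValueError(f"quality {quality} below gate {threshold}")
    return summary             # step 3: emit the artifact
```

In a real orchestrator the steps form a DAG rather than a straight line, but the gate idea is the same: a failing evaluation stops the pipeline instead of shipping a bad artifact.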

3. Jetty Agent

Connect your observability telemetry. Jetty analyzes your LLM usage and generates pull requests with optimizations — human-reviewed before anything ships.

Learn more →

Getting Started

Choose your path based on what you need:

Quick Start

Guides

API Reference

For AI Coding Agents

Quick Example

{
  "model": "claude-sonnet-4-6",
  "messages": [
    {"role": "system", "content": "Analyze the uploaded CSV and produce a report."},
    {"role": "user", "content": "Run the analysis"}
  ],
  "stream": true,
  "jetty": {
    "runbook": true,
    "collection": "my-org",
    "task": "analyze-data",
    "agent": "claude-code",
    "file_paths": ["uploads/dataset.csv"]
  }
}

Without the jetty block, the request is a standard LLM passthrough (100+ providers). With it, Jetty runs full agent sandbox execution.