Welcome to Jetty!
Jetty is a platform for running AI agents and workflows in production.
Three Use Cases
1. Agentic Workflows
Describe outcomes in English. Jetty provisions a sandbox, runs the agent, persists artifacts, and returns results. No infrastructure to manage.
2. Workflow Orchestration
Build multi-step AI data pipelines with built-in evaluation. Chain LLM calls, agent runs, quality gates, and control flow into DAGs.
3. Jetty Agent
Connect your observability telemetry. Jetty analyzes your LLM usage and generates pull requests with optimizations — human-reviewed before anything ships.
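The DAG idea in use case 2 can be pictured with a few lines of plain Python. Everything here (`Step`, `run_dag`, the step names) is illustrative only, not Jetty's actual API; it just shows how chained steps and a quality gate compose into a dependency graph.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a multi-step pipeline with a quality gate.
# None of these names come from Jetty's API; they only illustrate
# the "chain steps into a DAG" idea from use case 2.

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]               # transforms the shared context
    deps: list = field(default_factory=list)  # names of upstream steps

def run_dag(steps: list, context: dict) -> dict:
    """Execute steps in dependency order, threading a context dict through."""
    done = set()
    pending = {s.name: s for s in steps}
    while pending:
        ready = [s for s in pending.values() if all(d in done for d in s.deps)]
        if not ready:
            raise RuntimeError("cycle or missing dependency in DAG")
        for step in ready:
            context = step.run(context)
            done.add(step.name)
            del pending[step.name]
    return context

# Example: extract -> score -> gate
pipeline = [
    Step("extract", lambda ctx: {**ctx, "text": ctx["raw"].strip()}),
    Step("score",   lambda ctx: {**ctx, "score": len(ctx["text"]) / 10}, deps=["extract"]),
    Step("gate",    lambda ctx: {**ctx, "passed": ctx["score"] >= 0.5}, deps=["score"]),
]
result = run_dag(pipeline, {"raw": "  hello world  "})
```

In a real pipeline the `run` callables would be LLM calls, agent runs, or evaluations; the gate step is where a quality threshold can halt or branch the flow.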
Getting Started
Choose your path based on what you need:
Quick Start
- Quickstart: Agentic Workflows — Run an AI agent in a sandbox in 5 minutes
- Setup — Environment setup and API keys
- Your First Flow — Build a workflow step by step
Guides
- CI Integration — Trigger Jetty workflows from GitHub Actions
- Langfuse Setup — Connect telemetry for automated analysis
- Evaluating LLMs — Build evaluation pipelines
- All Guides →
API Reference
- Chat Completions — One endpoint, two modes (LLM passthrough + runbook execution)
- Webhook API — Receive workflow completion callbacks
- GitHub PR API — Create pull requests programmatically
- Full API Reference →
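Workflow-completion webhooks are typically authenticated by signing the raw request body. The scheme below (hex HMAC-SHA256 compared in constant time) is a common convention, not Jetty's documented contract; the secret format and header handling are assumptions, so check the Webhook API reference for the real details.

```python
import hmac
import hashlib

# Hypothetical verification for a workflow-completion webhook.
# The signing scheme is an assumption for illustration; consult the
# Webhook API reference for Jetty's actual contract.

def verify_signature(secret: str, body: bytes, signature: str) -> bool:
    """Compare a hex HMAC-SHA256 of the raw body against the header value."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A sender would compute the signature the same way over the raw bytes:
payload = b'{"workflow_id": "wf_123", "status": "completed"}'
sig = hmac.new(b"whsec_example", payload, hashlib.sha256).hexdigest()
```

Verifying against the raw bytes (before any JSON parsing) matters: re-serializing the parsed body can reorder keys and change whitespace, which would break the signature check.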
For AI Coding Agents
- Agent & MCP Overview — Three ways to connect (plugin, MCP, REST)
- Claude Code Plugin — Install and use `/jetty` commands
- MCP Server — Setup for Cursor, Windsurf, VS Code, Zed, and more
Quick Example
```json
{
  "model": "claude-sonnet-4-6",
  "messages": [
    {"role": "system", "content": "Analyze the uploaded CSV and produce a report."},
    {"role": "user", "content": "Run the analysis"}
  ],
  "stream": true,
  "jetty": {
    "runbook": true,
    "collection": "my-org",
    "task": "analyze-data",
    "agent": "claude-code",
    "file_paths": ["uploads/dataset.csv"]
  }
}
```
Without the `jetty` block, the request is a standard LLM passthrough (100+ providers). With it, the same request runs as a full agent sandbox execution.
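One way to see the two modes is to build both payloads side by side: the request shape is identical except for the presence of the `jetty` key. The helper below only assembles dicts; the endpoint URL and auth headers are omitted, and `chat_payload` is an illustrative name, not a Jetty client function.

```python
# Sketch: the same chat-completions payload in both modes.
# Only the request body is shown; endpoint URL and auth are omitted.

def chat_payload(messages, model="claude-sonnet-4-6", jetty=None):
    payload = {"model": model, "messages": messages, "stream": True}
    if jetty is not None:
        payload["jetty"] = jetty  # presence of this key switches to runbook execution
    return payload

messages = [{"role": "user", "content": "Run the analysis"}]

# Mode 1: plain LLM passthrough -- no "jetty" key at all.
passthrough = chat_payload(messages)

# Mode 2: runbook execution in a sandbox.
runbook = chat_payload(messages, jetty={
    "runbook": True,
    "collection": "my-org",
    "task": "analyze-data",
    "agent": "claude-code",
    "file_paths": ["uploads/dataset.csv"],
})
```

Keeping the body shape identical in both modes means existing chat-completions clients can opt into sandbox execution by adding one key, without changing how they send or stream requests.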