Jetty Agent
Connect your telemetry. Improve automatically.
The Idea
You're running LLM calls in production. You have observability — traces, costs, latency — in tools like Langfuse. You can see the inefficiencies. But turning visibility into action means manually analyzing traces, figuring out optimizations, writing the code, and shipping it.
Jetty Agent closes this loop. It connects to your observability stack, analyzes your LLM usage, and generates pull requests with optimized code — with human review before anything ships.
How It Works
Step 1: Connect Your Telemetry
Link your Langfuse account — cloud or self-hosted. Jetty reads your traces with read-only access. Credentials are encrypted at rest with AES-256-GCM.
See Langfuse Setup Guide.
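As a rough sketch of what read-only access looks like: the Langfuse public API authenticates with HTTP Basic auth, using the project's public key as the username and the secret key as the password. The keys and host below are placeholders; this builds the request but does not send it.

```python
import base64
import urllib.request

# Hypothetical credentials -- Langfuse issues a public/secret key pair per project.
PUBLIC_KEY = "pk-lf-example"
SECRET_KEY = "sk-lf-example"
HOST = "https://cloud.langfuse.com"  # or your self-hosted URL

# Basic auth: public key as username, secret key as password.
token = base64.b64encode(f"{PUBLIC_KEY}:{SECRET_KEY}".encode()).decode()

# Build (but do not send) a read-only request for recent traces.
req = urllib.request.Request(
    f"{HOST}/api/public/traces?limit=50",
    headers={"Authorization": f"Basic {token}"},
    method="GET",
)

print(req.full_url)
```

Because the analysis only issues GET requests like this one, nothing in your Langfuse project is ever modified.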
Step 2: Automated Analysis
Jetty triggers an agentic workflow that analyzes your trace data:
- Token usage patterns
- Cost hotspots
- Latency outliers
- Prompt inefficiencies
The agent produces a structured analysis with actionable recommendations and estimated savings.
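To make "structured analysis" concrete, here is an illustrative shape for a recommendation record. The field names and categories are assumptions for the example, not Jetty's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """One actionable finding from an analysis run (illustrative, not Jetty's schema)."""
    category: str                         # e.g. "cost-hotspot", "prompt-inefficiency"
    summary: str
    estimated_monthly_savings_usd: float
    status: str = "pending"               # recommendations start pending until you act

# Hypothetical findings from one analysis run.
findings = [
    Recommendation("cost-hotspot", "Route classification calls to a smaller model", 420.0),
    Recommendation("prompt-inefficiency", "Trim boilerplate from the system prompt", 180.0),
]

total = sum(r.estimated_monthly_savings_usd for r in findings)
print(f"{len(findings)} recommendations, est. ${total:.0f}/mo savings")
```

Structured output like this is what lets the dashboard aggregate savings estimates and track each recommendation's status over time.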
Step 3: Human Review
You see a summary of findings and cost-saving estimates in the dashboard. No code changes happen without your approval. Each recommendation is stored with a pending status until you act on it.
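The approval gate can be pictured as a small state machine: only an explicit human action moves a recommendation out of pending, and only an approved recommendation can trigger PR generation. The state names below are assumptions for illustration.

```python
# Hypothetical status transitions -- only human actions leave "pending".
VALID_TRANSITIONS = {
    "pending": {"approved", "rejected"},
    "approved": {"pr_created"},
}

def act_on(status: str, action: str) -> str:
    """Advance a recommendation's status, rejecting any transition not explicitly allowed."""
    if action not in VALID_TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot go from {status!r} to {action!r}")
    return action
```

The point of the design: "pr_created" is reachable only via "approved", so no workflow can open a PR from an unreviewed recommendation.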
Step 4: PR Generation
Approve a recommendation, and Jetty triggers a second workflow that generates a pull request:
- PR title and description
- File changes with optimized code
- Targeting your connected GitHub repository
The PR is created through a secure GitHub App integration with scoped installation tokens. See GitHub PR API.
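For reference, GitHub's REST API creates a pull request with a POST to `/repos/{owner}/{repo}/pulls`, authenticated with the installation token. The owner, repo, and branch names below are placeholders; this builds the payload without sending it.

```python
import json

# Hypothetical repository and branch names.
owner, repo = "acme", "llm-service"
endpoint = f"https://api.github.com/repos/{owner}/{repo}/pulls"

payload = {
    "title": "Reduce prompt tokens in classification calls",
    "head": "jetty/optimize-classification-prompt",  # branch holding the generated changes
    "base": "main",
    "body": "Generated from an approved Jetty recommendation. Please review before merging.",
}

body = json.dumps(payload)
print(endpoint)
```

Because the installation token is scoped to the repositories you granted, this request can only touch repos you explicitly connected.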
Step 5: Merge and Measure
Merge the PR in GitHub. Jetty tracks PR lifecycle events — created, reviewed, merged, closed — building a feedback loop that improves future recommendations.
The Feedback Loop
This is where the three Jetty use cases converge:
- Agentic Workflows power the analysis and PR generation — agents running autonomously in sandboxes, producing structured output
- Workflow Orchestration chains analysis → evaluation → PR generation into a multi-step pipeline with quality gates
- Jetty Agent closes the loop by connecting production telemetry to these workflows
Every analysis run produces a trajectory. Every recommendation tracks whether it was accepted, modified, or rejected. Every merged PR feeds back. Over time, Jetty Agent learns what works for your codebase.
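One simple signal that feedback loop can extract is the acceptance rate per recommendation category. The outcome log below is invented for illustration; it is not Jetty's actual telemetry.

```python
from collections import Counter

# Hypothetical outcome log: (recommendation category, user decision).
outcomes = [
    ("cost-hotspot", "accepted"),
    ("cost-hotspot", "accepted"),
    ("prompt-inefficiency", "modified"),
    ("latency-outlier", "rejected"),
]

accepted = Counter(cat for cat, decision in outcomes if decision == "accepted")
total = Counter(cat for cat, _ in outcomes)
rates = {cat: accepted[cat] / total[cat] for cat in total}
print(rates)
```

A system tracking rates like these can rank future recommendations by how often similar ones were merged in your codebase.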
Architecture
Key Integration Points
| Component | How It Connects |
|---|---|
| Langfuse | API credentials (public + secret key), project selection |
| Jetty Workflows | Analysis and PR generation run as agentic workflows on Jetty |
| GitHub App | Scoped installation tokens for repository access and PR creation |
| Webhook | Workflow results delivered via HMAC-signed callbacks |
Security
- Read-only Langfuse access — Jetty retrieves traces, never modifies your Langfuse data
- Encrypted credentials — Secret keys stored with AES-256-GCM encryption at rest
- Scoped GitHub tokens — Installation access tokens limited to granted repositories
- HMAC webhook verification — Timing-safe signature comparison on all callbacks
- Human-in-the-loop — No PRs created without explicit user approval
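Timing-safe HMAC verification is a standard pattern; a minimal sketch using Python's stdlib looks like the following. The secret and payload are placeholders, and the hex encoding is an assumption for the example, not Jetty's documented wire format.

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 callback signature with a timing-safe comparison."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking where the strings first differ.
    return hmac.compare_digest(expected, signature_hex)

# Hypothetical webhook secret and callback body.
secret = b"whsec_example"
body = b'{"run_id": "abc", "status": "completed"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_signature(secret, body, sig))          # valid payload
print(verify_signature(secret, b"tampered", sig))   # altered payload fails
```

`hmac.compare_digest` is the key detail: a naive `==` on signatures can leak timing information an attacker could exploit.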
Next Steps
- Langfuse Setup Guide — Connect your account and run your first analysis
- Webhook Reference — How analysis results are delivered
- PR API Reference — How PRs are created