Step Library Overview
The Step Library is the heart of Jetty workflows, providing a comprehensive collection of reusable components that handle everything from AI model interactions to data processing, control flow, and evaluation. Each step is a self-contained function that can be configured, chained together, and orchestrated to create sophisticated AI/ML workflows.
What are Steps?
Steps are atomic units of work in Jetty workflows. They encapsulate specific functionality like:
- Calling AI models (OpenAI, Anthropic, Google Gemini, Replicate)
- Processing data (text manipulation, image processing, file operations)
- Controlling workflow execution (parallel processing, conditionals, loops)
- Evaluating outputs (LLM-as-judge, trajectory analysis, benchmarks)
Each step has:
- Input parameters - Configuration and data inputs
- Output values - Results that can be passed to other steps
- Dependencies - Clear ordering and data flow requirements
- Error handling - Robust failure management and recovery
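Putting those four pieces together, a single step entry in a workflow definition might look like the following. This is a hypothetical sketch only: the `activity` key and the `previous_step.outputs.*` / `init_params.*` reference style match the configuration patterns shown below, but the `id`, `depends_on`, and `on_error` field names are illustrative, not the exact Jetty schema.

```json
{
  "id": "summarize",
  "activity": "litellm_chat",
  "model": "gpt-4o-mini",
  "prompt": "fetch_document.outputs.generated_text",
  "depends_on": ["fetch_document"],
  "on_error": "retry"
}
```

The `depends_on` list makes the ordering explicit, while the `prompt` reference defines the data flow from the upstream step's outputs.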
Step Categories
The Step Library is organized into functional categories:
AI Models (19 steps)
Modern AI model integrations with unified configuration patterns:
- Google Gemini (5 steps) - Text generation, JSON/text reading, file processing, image generation
- LiteLLM (6 steps) - Multi-provider access to 100+ models (OpenAI, Claude, etc.)
- Replicate (8 steps) - Image generation, video generation, text streaming, embeddings
Control Flow (5 steps)
Orchestration and parallel processing:
- List Emit Await - Fan-out parallel workflow processing
- Extract From Trajectories - Aggregate results from child workflows
- Number Sequence Generator - Generate numeric sequences
- Conditional Branch - Conditional logic
- Loop Counter - Iteration state management
Data Processing (7 steps)
File manipulation, downloads, and integrations:
- Tools - Text operations, file I/O, image metadata, webhooks
Evaluation (3 steps)
Assessment and scoring frameworks:
- Simple Judge - LLM-as-judge evaluation
- Select Trajectories - Trajectory filtering and selection
- Visualize Correlation - Statistical analysis and plotting
Development (3 steps)
Testing and development utilities:
- Text Echo - Echo text input (for testing)
- Text Doubler - Double text output (for testing)
- Random Compliance Check - Generate random scores (for testing)
Step Discovery by Use Case
Text Generation
- `gemini_prompt` - Google Gemini text generation
- `litellm_chat` - Multi-provider chat completions
- `replicate_text_stream` - Streaming text generation
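A text-generation step follows the common parameter pattern described under Configuration Patterns below. As a hedged sketch, a `gemini_prompt` step might be configured like this; the `prompt` key is illustrative, while `activity`, `model`, `temperature`, and `max_tokens` mirror the common parameters:

```json
{
  "activity": "gemini_prompt",
  "model": "gemini-1.5-flash",
  "prompt": "Write a one-paragraph summary of container orchestration.",
  "temperature": 0.7,
  "max_tokens": 256
}
```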
Image Generation
- `gemini_image_generator` - Gemini image generation
- `litellm_image_generation` - DALL-E and compatible models
- `replicate_text2image` - Flux, Stable Diffusion, and more
Video Generation
- `replicate_text2video` - Seedance video generation
Image Analysis
- `litellm_vision` - Vision model analysis
- `replicate_extract_embeddings_url` - CLIP embeddings
Document Processing
- `gemini_file_reader` - Multi-format file analysis
- `gemini_json_reader` - JSON data analysis
- `read_text_file` - Text file reading
- `text_concatenate` - Combine multiple texts
Evaluation & Scoring
- `simple_judge` - LLM-as-judge evaluation
- `select_trajectories` - Filter and select trajectories
- `visualize_correlation` - Statistical analysis
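An LLM-as-judge step typically takes the output of an earlier step as the thing to score. A hypothetical `simple_judge` configuration might look like the following; the `criteria` and `candidate` parameter names are assumptions for illustration, while the `previous_step.outputs.*` reference style comes from the data-flow pattern below:

```json
{
  "activity": "simple_judge",
  "model": "gpt-4o",
  "criteria": "Is the answer factually accurate and clearly written?",
  "candidate": "previous_step.outputs.generated_text"
}
```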
Parallel Processing
- `list_emit_await` - Fan-out to child workflows
- `extract_from_trajectories` - Collect results
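These two steps pair naturally: `list_emit_await` fans a list out to child workflows, and `extract_from_trajectories` gathers the results back in. A hedged sketch of such a fan-out/fan-in pair, where the `items`, `child_workflow`, and `path` parameter names are illustrative rather than the exact Jetty schema:

```json
[
  {
    "activity": "list_emit_await",
    "items": "init_params.documents",
    "child_workflow": "summarize_one"
  },
  {
    "activity": "extract_from_trajectories",
    "path": "outputs.summary"
  }
]
```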
Benchmarking
- `harbor_terminal_bench` - Agent terminal benchmarks
- `swe_bench_docker_eval` - SWE-bench evaluation
Configuration Patterns
All steps follow consistent configuration patterns:
Common Parameters
```json
{
  "activity": "step_name",
  "model": "model_identifier",
  "temperature": 0.7,
  "max_tokens": 1000
}
```
Secrets Management
Steps integrate with Jetty's secrets management:
- Environment variables (development)
- Organization-scoped secrets (production)
- Provider-specific API keys
Data Flow
Steps connect through output → input references:
```json
{
  "prompt": "previous_step.outputs.generated_text",
  "input_data": "init_params.user_data"
}
```
Environment Variables
Steps automatically use these environment variables for API authentication:
| Variable | Used By |
|---|---|
| `OPENAI_API_KEY` | LiteLLM steps |
| `ANTHROPIC_API_KEY` | LiteLLM steps |
| `GEMINI_API_KEY` | Gemini steps |
| `REPLICATE_API_TOKEN` | Replicate steps |
| `LITELLM_API_KEY` | LiteLLM proxy |
Quick Links
Getting Started
- Quick Start Guide - Set up your first workflow
- First Flow Tutorial - Build a complete workflow
By Provider
- Google Gemini - Native Google AI integration
- LiteLLM - 100+ model providers
- Replicate - Specialized and community models
By Function
- Control Flow - Orchestration patterns
- Evaluation - LLM-as-judge
Best Practices
- Start Simple - Begin with single-step workflows and gradually add complexity
- Use Path Expressions - Reference outputs from previous steps dynamically
- Leverage Secrets - Store API keys securely using secrets management
- Handle Errors - Check step outputs for success/error indicators
- Monitor Costs - Use appropriate models for task complexity
- Test Incrementally - Validate individual steps before chaining
Next Steps
- Explore AI Models for text and image generation
- Learn about Control Flow for parallel processing
- Review Evaluation for LLM-as-judge patterns
- Check Guides for practical tutorials