
Step Library Overview

The Step Library is the heart of Jetty workflows, providing a comprehensive collection of reusable components that handle everything from AI model interactions to data processing, control flow, and evaluation. Each step is a self-contained function that can be configured, chained together, and orchestrated to create sophisticated AI/ML workflows.

What are Steps?

Steps are atomic units of work in Jetty workflows. They encapsulate specific functionality like:

  • Calling AI models (OpenAI, Anthropic, Google Gemini, Replicate)
  • Processing data (text manipulation, image processing, file operations)
  • Controlling workflow execution (parallel processing, conditionals, loops)
  • Evaluating outputs (LLM-as-judge, trajectory analysis, benchmarks)

Each step has:

  • Input parameters - Configuration and data inputs
  • Output values - Results that can be passed to other steps
  • Dependencies - Clear ordering and data flow requirements
  • Error handling - Robust failure management and recovery

Step Categories

The Step Library is organized into functional categories:

🤖 AI Models (15 steps)

Modern AI model integrations with unified configuration patterns:

  • Google Gemini (5 steps) - Text generation, JSON/text reading, file processing, image generation
  • LiteLLM (6 steps) - Multi-provider access to 100+ models (OpenAI, Claude, etc.)
  • Replicate (8 steps) - Image generation, video generation, text streaming, embeddings

🔀 Control Flow (5 steps)

Orchestration and parallel processing:

🔧 Data Processing (7 steps)

File manipulation, downloads, and integrations:

  • Tools - Text operations, file I/O, image metadata, webhooks

โš–๏ธ Evaluation (3 steps)โ€‹

Assessment and scoring frameworks:

🧪 Development (3 steps)

Testing and development utilities:

  • Text Echo - Echo text input (for testing)
  • Text Doubler - Double text output (for testing)
  • Random Compliance Check - Generate random scores (for testing)

Step Discovery by Use Case

Text Generation

  • gemini_prompt - Google Gemini text generation
  • litellm_chat - Multi-provider chat completions
  • replicate_text_stream - Streaming text generation

Image Generation

  • gemini_image_generator - Gemini image generation
  • litellm_image_generation - DALL-E and compatible models
  • replicate_text2image - Flux, Stable Diffusion, and more

Video Generation

  • replicate_text2video - Seedance video generation

Image Analysis

  • litellm_vision - Vision model analysis
  • replicate_extract_embeddings_url - CLIP embeddings

Document Processing

  • gemini_file_reader - Multi-format file analysis
  • gemini_json_reader - JSON data analysis
  • read_text_file - Text file reading
  • text_concatenate - Combine multiple texts

Evaluation & Scoring

  • simple_judge - LLM-as-judge evaluation
  • select_trajectories - Filter and select trajectories
  • visualize_correlation - Statistical analysis

Parallel Processing

  • list_emit_await - Fan-out to child workflows
  • extract_from_trajectories - Collect results
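The fan-out/collect pattern behind list_emit_await and extract_from_trajectories can be sketched in plain Python (this is an illustration, not Jetty's actual scheduler; the function names are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def run_child_workflow(item: str) -> dict:
    # Stand-in for a child workflow; real children would run full step chains.
    return {"input": item, "result": item.upper()}

def fan_out_and_collect(items: list[str]) -> list[dict]:
    """Emit one child workflow per item, await all of them,
    and collect results in the original item order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_child_workflow, items))
```

`pool.map` preserves input order, mirroring the idea of fanning out over a list and gathering the child results back into one collection.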

Benchmarking

  • harbor_terminal_bench - Agent terminal benchmarks
  • swe_bench_docker_eval - SWE-bench evaluation

Configuration Patterns

All steps follow consistent configuration patterns:

Common Parameters

{
  "activity": "step_name",
  "model": "model_identifier",
  "temperature": 0.7,
  "max_tokens": 1000
}
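One way a runner might apply such shared parameters is to merge per-step overrides over common defaults; this is a hedged sketch with hypothetical defaults, not Jetty's actual configuration code:

```python
# Illustrative shared defaults, matching the common parameters above.
DEFAULTS = {"temperature": 0.7, "max_tokens": 1000}

def build_step_config(activity: str, model: str, **overrides) -> dict:
    """Combine shared defaults with step-specific overrides."""
    config = {"activity": activity, "model": model, **DEFAULTS}
    config.update(overrides)
    return config
```

A step that sets only `temperature` still inherits `max_tokens` from the defaults, which keeps individual step configs short and consistent.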

Secrets Management

Steps integrate with Jetty's secrets management:

  • Environment variables (development)
  • Organization-scoped secrets (production)
  • Provider-specific API keys

Data Flow

Steps connect through output → input references:

{
  "prompt": "previous_step.outputs.generated_text",
  "input_data": "init_params.user_data"
}
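A resolver for these dotted references might look like the following sketch; the reference syntax follows the example above, but the function name and context shape are assumptions for illustration:

```python
def resolve_reference(path: str, context: dict):
    """Walk a dotted path like 'previous_step.outputs.generated_text'
    through a nested dict of step results and init params."""
    value = context
    for part in path.split("."):
        value = value[part]
    return value
```

With step results and `init_params` collected into one nested context dict, the same resolver handles both kinds of reference in the example above.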

Environment Variables

Steps automatically use these environment variables for API authentication:

| Variable              | Used By         |
| --------------------- | --------------- |
| OPENAI_API_KEY        | LiteLLM steps   |
| ANTHROPIC_API_KEY     | LiteLLM steps   |
| GEMINI_API_KEY        | Gemini steps    |
| REPLICATE_API_TOKEN   | Replicate steps |
| LITELLM_API_KEY       | LiteLLM proxy   |

Getting Started

By Provider

By Function

Best Practices

  1. Start Simple - Begin with single-step workflows and gradually add complexity
  2. Use Path Expressions - Reference outputs from previous steps dynamically
  3. Leverage Secrets - Store API keys securely using secrets management
  4. Handle Errors - Check step outputs for success/error indicators
  5. Monitor Costs - Use appropriate models for task complexity
  6. Test Incrementally - Validate individual steps before chaining
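Practice 4, for example, means inspecting each step's result before passing it along; a minimal sketch, assuming results carry `outputs` and `error` fields as in this page's examples (the chaining helper itself is made up):

```python
def run_chain(steps, initial_params):
    """Run steps in order, stopping at the first reported failure."""
    params = initial_params
    for step in steps:
        result = step(params)
        if result.get("error"):
            # Surface the failure instead of feeding bad data downstream.
            raise RuntimeError(f"step failed: {result['error']}")
        params = result["outputs"]
    return params
```

Failing fast like this keeps a bad intermediate result from silently propagating through later (and possibly expensive) model calls.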

Next Steps