# Workflow Examples

Complete, copy-paste-ready workflows based on production use cases from the Jetty platform.
## Quick Reference
| Example | Use Case | Complexity |
|---|---|---|
| Hello World | First workflow | Beginner |
| LLM Chat | Basic LLM interaction | Beginner |
| Text-to-Image | Image generation | Beginner |
| Image Evaluation | Judge generated images | Intermediate |
| Batch Processing | Process multiple items | Intermediate |
| Multi-Model Comparison | A/B testing | Advanced |
| Document Translation | Translate and validate documents | Advanced |
| Image to Video | Text-to-image-to-video pipeline | Advanced |
## Hello World

The simplest possible workflow, using the `text_doubler` activity.

Based on: `hello_flow`
```json
{
  "init_params": {
    "text": "Hello, Jetty!"
  },
  "step_configs": {
    "doubler": {
      "activity": "text_doubler",
      "text_path": "init_params.text"
    }
  },
  "steps": ["doubler"]
}
```
Output: `Hello, Jetty!Hello, Jetty!`
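Keys ending in `_path` (like `text_path` above) take a dotted path into `init_params` or a prior step's outputs rather than a literal value. A minimal sketch of how such a path resolves — the `resolve_path` helper is hypothetical, for illustration only, and handles plain dotted keys but not list indices like `images[0]`:

```python
def resolve_path(state: dict, path: str):
    """Resolve a dotted path such as 'init_params.text' against workflow state.

    Hypothetical helper illustrating the *_path convention; Jetty's real
    resolver may differ (e.g. it also supports indices like images[0]).
    """
    value = state
    for part in path.split("."):
        value = value[part]
    return value

state = {"init_params": {"text": "Hello, Jetty!"}}
text = resolve_path(state, "init_params.text")

# The text_doubler activity concatenates the resolved text with itself:
doubled = text + text  # "Hello, Jetty!Hello, Jetty!"
```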
## LLM Chat

Basic LLM interaction with any supported provider.

Based on: `litellm-chat`
```json
{
  "init_params": {
    "user_question": "What are the key differences between Python and JavaScript?",
    "system_message": "You are a helpful programming tutor."
  },
  "step_configs": {
    "chat": {
      "model": "gemini/gemini-1.5-flash",
      "activity": "litellm_chat",
      "prompt": "init_params.user_question",
      "max_tokens": 500,
      "temperature": 0.7,
      "system_prompt": "init_params.system_message"
    }
  },
  "steps": ["chat"]
}
```
## Text-to-Image

Generate images from text prompts.

Based on: `text2image_with_metadata`
```json
{
  "init_params": {
    "prompt": "A beautiful landscape with mountains and a lake"
  },
  "step_configs": {
    "generate": {
      "model": "black-forest-labs/flux-schnell",
      "activity": "replicate_text2image",
      "prompt_path": "init_params.prompt",
      "output_format": "jpg"
    },
    "add_metadata": {
      "activity": "add_image_metadata"
    }
  },
  "steps": ["generate", "add_metadata"]
}
```
## Image Evaluation

Generate an image, then evaluate it against multiple criteria.

Based on: `llm-judge`
```json
{
  "init_params": {
    "prompt": "A cow eating a shrimp",
    "model": "black-forest-labs/flux-schnell",
    "evaluation_criteria": "Are there animals in this picture?"
  },
  "step_configs": {
    "generate": {
      "activity": "replicate_text2image",
      "model_path": "init_params.model",
      "prompt_path": "init_params.prompt",
      "aspect_ratio": "16:9",
      "output_format": "jpg"
    },
    "describe": {
      "model": "gpt-4o",
      "activity": "simple_judge",
      "items_path": "generate.outputs.images[0].path",
      "judge_type": "scale",
      "scale_range": [1, 5],
      "model_provider": "openai",
      "instruction_path": "init_params.evaluation_criteria"
    },
    "ip_risk": {
      "model": "gpt-4o",
      "activity": "simple_judge",
      "items_path": "generate.outputs.images[0].path",
      "judge_type": "scale",
      "instruction": "Rate the potential for IP infringement in this image.",
      "scale_range": [0, 1],
      "model_provider": "openai"
    }
  },
  "steps": ["generate", "describe", "ip_risk"]
}
```
## Batch Processing

Process multiple items in parallel with fan-out/fan-in.

Based on: `emit-doubler-collector`
```json
{
  "init_params": {
    "start": 1,
    "end": 5
  },
  "step_configs": {
    "generate_numbers": {
      "mode": "range",
      "step": 1,
      "activity": "number_sequence_generator",
      "end_path": "init_params.end",
      "start_path": "init_params.start"
    },
    "process_each": {
      "activity": "list_emit_await",
      "items_path": "generate_numbers.outputs.sequence",
      "task_reference": {
        "task_name": "hello_flow"
      },
      "data_mapping": {
        "text": "Processing number {{ $item }}"
      }
    },
    "collect": {
      "activity": "extract_from_trajectories",
      "trajectory_list_path": "process_each.outputs.trajectory_references",
      "extract_keys": {
        "result": "doubler.outputs.text"
      }
    }
  },
  "steps": ["generate_numbers", "process_each", "collect"]
}
```
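In the fan-out step, `{{ $item }}` in `data_mapping` is substituted with each emitted item, producing one set of `init_params` per child `hello_flow` run. A rough sketch of that substitution — the `render_mapping` helper is hypothetical, and the real template engine may support richer expressions:

```python
def render_mapping(mapping: dict, item) -> dict:
    """Substitute the {{ $item }} placeholder for one emitted item.

    Hypothetical illustration of list_emit_await's data_mapping; shown
    only to clarify the fan-out, not Jetty's actual template engine.
    """
    return {key: value.replace("{{ $item }}", str(item))
            for key, value in mapping.items()}

sequence = [1, 2, 3, 4, 5]  # what number_sequence_generator emits for 1..5
mapping = {"text": "Processing number {{ $item }}"}

# One child hello_flow run is launched per item, each with its own init_params:
child_init_params = [render_mapping(mapping, n) for n in sequence]
# child_init_params[0] == {"text": "Processing number 1"}
```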
## Multi-Model Comparison

Compare outputs from different image models.

Based on: `t2i-evals`
```json
{
  "init_params": {
    "prompt": "A beautiful landscape with mountains and a lake"
  },
  "step_configs": {
    "flux": {
      "model": "black-forest-labs/flux-schnell",
      "activity": "replicate_text2image",
      "prompt_path": "init_params.prompt"
    },
    "midjourney": {
      "model": "tstramer/midjourney-diffusion:436b051ebd8f68d23e83d22de5e198e0995357afef113768c20f0b6fcef23c8b",
      "activity": "replicate_text2image",
      "prompt_path": "init_params.prompt",
      "guidance_scale": 7.5,
      "num_inference_steps": 50
    }
  },
  "steps": ["flux", "midjourney"]
}
```
## Document Translation

Translate documents with validation. The steps below reference `init_params.file_paths[0]`, which is not part of the JSON shown; it is expected to be supplied at run time (for example, from a file uploaded with the request).

Based on: `sacred-translate-v1`
```json
{
  "init_params": {
    "target_language": "french"
  },
  "step_configs": {
    "translate": {
      "model": "gemini-2.5-pro",
      "activity": "gemini_file_reader",
      "asset_path": "init_params.file_paths[0]",
      "prompt": "Translate this document. Preserve formatting."
    },
    "save": {
      "activity": "save_text_file",
      "file_text_path": "translate.outputs.text"
    },
    "read_original": {
      "activity": "read_text_file",
      "text_path": "init_params.file_paths[0]"
    },
    "combine": {
      "activity": "text_concatenate",
      "text_paths": [
        "translate.outputs.text",
        "read_original.outputs.text"
      ]
    },
    "validate": {
      "model": "gemini-2.5-pro",
      "activity": "gemini_text_reader",
      "text_path": "combine.outputs.json",
      "prompt": "Compare translation with original. List any issues."
    }
  },
  "steps": ["translate", "save", "read_original", "combine", "validate"]
}
```
## Image to Video

Expand a prompt with an LLM, generate an image from it, then animate the image into a short video.

Based on: `text2image2video`
```json
{
  "init_params": {
    "prompt": "A scientist holding up a balloon",
    "motion": "zoom out to space"
  },
  "step_configs": {
    "expand": {
      "model": "openai/gpt-4",
      "activity": "litellm_chat",
      "messages_path": "init_params.prompt",
      "system_prompt": "Elaborate the prompt with cyberpunk themes."
    },
    "generate_image": {
      "model": "black-forest-labs/flux-schnell",
      "activity": "replicate_text2image",
      "prompt_path": "expand.outputs.text",
      "output_format": "jpg"
    },
    "animate": {
      "activity": "replicate_text2video",
      "duration": 3,
      "image_path": "generate_image.outputs.images[0].path",
      "resolution": "480p",
      "prompt_path": "init_params.motion"
    }
  },
  "steps": ["expand", "generate_image", "animate"]
}
```
## Running Examples

All examples can be run via the API:
```bash
# Sync execution (wait for result)
curl -X POST "https://flows-api.jetty.io/api/v1/run-sync/YOUR_COLLECTION/TASK_NAME" \
  -H "Authorization: Bearer $JETTY_API_TOKEN" \
  -F "bakery_host=https://dock.jetty.io" \
  -F 'init_params={"prompt": "your prompt here"}'

# Async execution (get tracking ID)
curl -X POST "https://flows-api.jetty.io/api/v1/run/YOUR_COLLECTION/TASK_NAME" \
  -H "Authorization: Bearer $JETTY_API_TOKEN" \
  -F "bakery_host=https://dock.jetty.io" \
  -F 'init_params={"prompt": "your prompt here"}'
```
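The same sync call can be made from Python with only the standard library. This is a sketch under a few assumptions: the `encode_multipart` helper is our own (it mimics what curl's `-F` sends for plain text fields), and the response is assumed to be JSON.

```python
import json
import os
import urllib.request
import uuid

API_BASE = "https://flows-api.jetty.io/api/v1"


def encode_multipart(fields: dict) -> tuple[bytes, str]:
    """Encode text-only form fields as multipart/form-data (like curl -F)."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += [
            f"--{boundary}",
            f'Content-Disposition: form-data; name="{name}"',
            "",
            value,
        ]
    lines += [f"--{boundary}--", ""]
    return "\r\n".join(lines).encode(), f"multipart/form-data; boundary={boundary}"


def run_sync(collection: str, task: str, init_params: dict) -> dict:
    """POST to the run-sync endpoint and return the parsed response."""
    body, content_type = encode_multipart({
        "bakery_host": "https://dock.jetty.io",
        "init_params": json.dumps(init_params),
    })
    req = urllib.request.Request(
        f"{API_BASE}/run-sync/{collection}/{task}",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['JETTY_API_TOKEN']}",
            "Content-Type": content_type,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a valid JETTY_API_TOKEN and collection):
# result = run_sync("YOUR_COLLECTION", "hello_flow", {"text": "Hello, Jetty!"})
```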