Basic Step Structure
Simple steps
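At its simplest, a step is just a name plus a shell command. A minimal sketch, assuming the SDK's chainable `Workflow` DSL (the import path and the `.step(name, command)` signature follow common usage but may differ in your SDK version):

```python
from kubiya_workflow_sdk.dsl import Workflow  # assumed import path

wf = (
    Workflow("hello-world")
    .description("Smallest possible workflow")
    # A bare step: a name plus the command to run
    .step("say-hello", "echo 'Hello from Kubiya!'")
)
```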
Steps with callbacks
For richer configuration, use the callback pattern. The callback receives a step builder that lets you choose an executor, set environment variables, timeouts, outputs, and more.
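A hedged sketch of the callback style; `callback=`, `.shell(...)`, `.env(...)`, and `.timeout(...)` are assumed builder methods, while `.output(...)` is the helper described later on this page:

```python
wf.step(
    "fetch-data",
    callback=lambda s: (
        s.shell("curl -s https://api.example.com/data.json")  # hypothetical endpoint
        .env(REGION="us-east-1")  # assumed: per-step environment variable
        .timeout(120)             # assumed: seconds before the step is killed
        .output("RAW_DATA")       # capture the result for downstream steps
    ),
)
```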
Using executor helpers

In addition to configuring steps via callbacks, you can use executor helper functions to build reusable, well-typed steps and then attach them to workflows. These helpers produce Step objects you can reuse across workflows, while .add_step(...) attaches them to a specific workflow.
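A sketch of the reuse pattern; `shell_step` is a hypothetical helper name standing in for whatever executor factories your SDK version exposes:

```python
from kubiya_workflow_sdk.dsl import Workflow
from kubiya_workflow_sdk.dsl import shell_step  # hypothetical executor helper

ci_wf = Workflow("ci")
nightly_wf = Workflow("nightly")

lint = shell_step("lint", "ruff check .")  # reusable, well-typed Step object

ci_wf.add_step(lint)       # attach the step to one workflow...
nightly_wf.add_step(lint)  # ...and reuse it in another
```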
Step Types
1. Shell commands
Execute shell commands and scripts:
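A minimal sketch, reusing the chainable `.step(name, command)` pattern from above:

```python
from kubiya_workflow_sdk.dsl import Workflow  # assumed import path

wf = Workflow("shell-demo")
wf.step("disk-usage", "df -h /")                   # one-off command
wf.step("cleanup", "find /tmp -mtime +7 -delete")  # small inline script
```

2. Shell scripts with environment variables

Environment variables can be set per step; `.env(...)` is the same assumed builder method used in the callback example earlier:

```python
wf.step(
    "deploy",
    callback=lambda s: (
        s.shell("./deploy.sh")  # hypothetical script
        .env(ENVIRONMENT="staging", LOG_LEVEL="debug")  # assumed: injected into the step's shell
    ),
)
```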
3. Python code
Execute Python code directly:
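A sketch assuming a `.python(...)` executor method on the step builder:

```python
wf.step(
    "transform",
    callback=lambda s: s.python(
        """
import json

data = {"records": 42, "status": "ok"}
print(json.dumps(data))  # stdout becomes the step's result
"""
    ).output("TRANSFORM_RESULT"),
)
```

4. Docker containers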
Run steps in Docker containers when you want clean, isolated environments for builds, tests, or tooling.
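A sketch assuming a `.docker(...)` executor that takes an image and the content to run inside it:

```python
wf.step(
    "build",
    callback=lambda s: s.docker(
        image="node:20-alpine",             # any OCI image
        content="npm ci && npm run build",  # runs inside the container
    ),
)
```

5. Kubiya API calls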
Interact with Kubiya platform APIs as part of a workflow:
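The exact executor for platform APIs depends on your SDK version; as a hedged placeholder, a plain HTTP call against the Kubiya API illustrates the shape (`.http(...)` is an assumed method, shown again in the HTTP section below):

```python
wf.step(
    "list-runners",
    callback=lambda s: (
        s.http(url="https://api.kubiya.ai/api/v1/runners", method="GET")  # assumed
        .output("RUNNERS")
    ),
)
```

6. Tools and bounded services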
Define custom tools inline and, when needed, attach temporary services like databases or caches for more realistic environments.
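A heavily hedged sketch; the `.tool(...)` method and its field names are assumptions modeled on common Kubiya tool schemas:

```python
wf.step(
    "db-migration",
    callback=lambda s: s.tool(
        name="migrate",  # inline tool definition
        type="docker",
        image="migrate/migrate:latest",
        content="migrate -path /migrations -database $DB_URL up",
        with_services=["postgres:16"],  # assumed: temporary service for this run
    ),
)
```

HTTP, SSH, and agent steps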
Beyond shell, Python, Docker, and tools, the DSL includes executors for HTTP calls, SSH commands, and AI agents. These are especially useful when you want workflows to orchestrate external systems or intelligent automation.

HTTP and SSH
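Hedged sketches of both executors; `.http(...)` and `.ssh(...)` are assumed method names:

```python
# HTTP call against an external service
wf.step(
    "ping-service",
    callback=lambda s: s.http(url="https://status.example.com/health", method="GET"),
)

# SSH command on a remote host
wf.step(
    "rotate-logs",
    callback=lambda s: s.ssh(
        host="web-01.internal",  # hypothetical host
        user="ops",
        command="logrotate -f /etc/logrotate.conf",
    ),
)
```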
Inline agents and LLM completion
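A heavily hedged sketch; `.inline_agent(...)` is an assumed executor name standing in for the SDK's agent step:

```python
wf.step(
    "triage",
    callback=lambda s: (
        s.inline_agent(  # assumed executor
            message="Summarize the failures in {{TEST_REPORT}} and suggest owners.",
            ai_instructions="You are a CI triage assistant.",
        ).output("TRIAGE_SUMMARY")
    ),
)
```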
Step dependencies and parallelism
Sequential dependencies
Control the execution order in a workflow:
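A minimal sketch; `.depends(...)` is an assumed chainable method used throughout the examples below:

```python
wf.step("checkout", "git clone https://github.com/acme/app.git .")  # hypothetical repo
wf.step("build", "make build").depends("checkout")  # waits for checkout
wf.step("test", "make test").depends("build")       # waits for build
```

Multiple dependencies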
A step can depend on multiple previous steps:
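Under the same assumed `.depends(...)` method, pass several step names to fan in:

```python
wf.step("report", "python make_report.py").depends("lint", "unit-tests", "e2e-tests")
```

Parallel steps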
You can also run a single step across many items in parallel. Use .parallel(...) on an individual step to specify a list of items or a reference to a variable containing them.
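A hedged sketch; the `$ITEM` placeholder for the current item is an assumption:

```python
# Fan a single step out over a static list of items...
wf.step("migrate-tenant", "python migrate.py --tenant $ITEM").parallel(
    ["acme", "globex", "initech"]
)

# ...or over a variable captured by an earlier step.
wf.step("process-file", "python process.py --file $ITEM").parallel("{{FILE_LIST}}")
```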
Step outputs and variables
Steps can expose parts of their result as named outputs. Later steps can then reference those outputs instead of re-running the same work or scraping logs. This is how you pass values through the workflow graph in a controlled way.

Capturing outputs
Use .output(NAME) on a step to capture its primary output under a
descriptive name. Any downstream step can then interpolate that value using
the {{NAME}} syntax.
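For example:

```python
wf.step("get-version", "git describe --tags").output("VERSION")

# Any later step can interpolate the captured value.
wf.step("tag-image", "docker tag app:latest app:{{VERSION}}")
```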
Complex data flow
For richer scenarios, you can emit structured data (for example JSON) from one step and parse it in another. This keeps complex logic in regular Python while still using the DSL to orchestrate when and how each piece runs.
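A sketch using the assumed `.python(...)` executor from earlier:

```python
# Emit structured JSON from one step...
wf.step(
    "inspect",
    callback=lambda s: s.python(
        "import json; print(json.dumps({'count': 3, 'ok': True}))"
    ).output("REPORT"),
)

# ...and parse it in another.
wf.step(
    "act-on-report",
    callback=lambda s: s.python(
        """
import json

report = json.loads('''{{REPORT}}''')  # interpolated output of 'inspect'
if not report["ok"]:
    raise SystemExit(1)
print(f"processed {report['count']} items")
"""
    ),
)
```

Step configuration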
Step descriptions
Add descriptions for documentation and observability:
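A sketch assuming a step-level `.description(...)` method:

```python
wf.step("backup-db", "pg_dump mydb > /backups/mydb.sql").description(
    "Nightly logical backup of the primary Postgres database"
)
```

Output variables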
Name output variables for use in subsequent steps:
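For example:

```python
wf.step("count-users", "psql -tAc 'select count(*) from users'").output("USER_COUNT")
wf.step("notify", "echo 'Active users: {{USER_COUNT}}'")
```

Variable interpolation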
Kubiya workflows support two main kinds of interpolation:

- ${PARAM} pulls in workflow parameters or environment variables that are defined at the workflow level.
- {{OUTPUT}} pulls in values produced by earlier steps via .output(...).
Using workflow parameters
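A sketch assuming a workflow-level `.params(...)` helper for declaring parameters with defaults:

```python
wf = (
    Workflow("deploy")
    .params(ENVIRONMENT="staging", REPLICAS="2")  # assumed helper
)

# ${...} interpolates workflow-level parameters into the command.
wf.step("scale", "kubectl scale deploy/app --replicas=${REPLICAS} -n ${ENVIRONMENT}")
```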
Using step outputs
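For example (the config path is hypothetical):

```python
wf.step("load-config", "cat config/app.yaml").output("CONFIG")
wf.step("check-config", "echo '{{CONFIG}}' | grep -c 'enabled'")
```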
Here the first step saves the raw configuration as CONFIG, and the second
step injects that value directly into a shell pipeline using the {{CONFIG}}
placeholder.
Control flow and reliability
Steps support a rich set of controls for retries, timeouts, and “continue even on failure” behavior.

Retry policies
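For example, using the `.retries(...)` helper mentioned below (the endpoint is hypothetical):

```python
wf.step("warmup", "curl -f https://api.example.com/warmup").retries(3)
```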
Repeat / polling
repeat is useful when you want to poll an external system until it reaches a desired state.
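A hedged sketch; the `.repeat(...)` name and its `interval_sec`/`limit` knobs are assumptions:

```python
# Re-run the readiness probe every 30 seconds, up to 20 times.
wf.step("wait-for-ready", "curl -sf https://svc.internal/ready").repeat(
    interval_sec=30, limit=20
)
```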
Continue-on and timeouts
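A hedged sketch; `.continue_on(...)` and `.timeout(...)` are assumed builder methods:

```python
wf.step(
    "optional-lint",
    callback=lambda s: (
        s.shell("ruff check .")
        .continue_on(failure=True)  # assumed: graph proceeds even if this fails
        .timeout(600)               # assumed: bound the runtime in seconds
    ),
)
```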
There are also helpers such as .signal_on_stop(...), .mail_on_error(...), and .retries(...) for smaller adjustments to how a step behaves at runtime.
Complete examples
Example 1: Multi-step data processing
This example models a simple batch ETL pipeline. It shows how to use workflow parameters, a mix of shell and Python steps, and explicit dependencies to coordinate a multi-stage data flow.
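A hedged sketch of the pipeline, using the assumed DSL methods from earlier (`.params(...)`, `.depends(...)`, `.python(...)`):

```python
from kubiya_workflow_sdk.dsl import Workflow  # assumed import path

wf = (
    Workflow("csv-etl")
    .description("Batch ETL over a directory of CSV files")
    .params(INPUT_DIR="/data/incoming", OUTPUT_DIR="/data/processed")
)

wf.step("setup", "mkdir -p ${OUTPUT_DIR}")

wf.step("list-files", "ls ${INPUT_DIR}/*.csv").depends("setup").output("FILE_LIST")

wf.step(
    "process",
    callback=lambda s: s.python(
        """
import glob, os
import pandas as pd

# ${...} parameters are assumed to be interpolated before the code runs.
for path in glob.glob("${INPUT_DIR}/*.csv"):
    df = pd.read_csv(path).dropna().drop_duplicates()  # minimal cleaning
    df.to_csv(os.path.join("${OUTPUT_DIR}", os.path.basename(path)), index=False)
"""
    ).depends("list-files"),
)

wf.step("summarize", "ls -lh ${OUTPUT_DIR}").depends("process")
```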
- The workflow takes INPUT_DIR and OUTPUT_DIR parameters so you can reuse the same definition across environments or datasets.
- setup prepares the target directory up front so later steps can assume it exists.
- list-files discovers input CSV files and exposes the list through an output (FILE_LIST), which is a typical pattern when you want to inspect or log what is about to be processed.
- process uses a Python step with pandas to perform the actual data cleaning and transformation. Because it depends on list-files, it runs only after discovery has completed.
- summarize is a lightweight shell step that gives operators a quick view of the generated artifacts, making the pipeline easier to debug.
Example 2: Complex dependencies
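A hedged sketch of the pipeline (the walkthrough below explains each stage; `.depends(...)` and `.docker(...)` are the same assumed methods used earlier):

```python
from kubiya_workflow_sdk.dsl import Workflow  # assumed import path

wf = Workflow("ci-cd").description("Parallel builds, fan-out tests, gated deploy")

# Builds have no dependencies, so they run concurrently.
wf.step("build-api", "docker build -t acme/api:latest services/api")
wf.step("build-web", "docker build -t acme/web:latest services/web")
wf.step("build-worker", "docker build -t acme/worker:latest services/worker")

# Tests fan out per service.
wf.step("test-api", "docker run --rm acme/api:latest pytest").depends("build-api")
wf.step("test-web", "docker run --rm acme/web:latest npm test").depends("build-web")

# Only tested images are published; the worker has no tests.
wf.step("push-api", "docker push acme/api:latest").depends("test-api")
wf.step("push-web", "docker push acme/web:latest").depends("test-web")
wf.step("push-worker", "docker push acme/worker:latest").depends("build-worker")

# Deployment gates on every push.
wf.step("deploy-all", "kubectl apply -f k8s/").depends(
    "push-api", "push-web", "push-worker"
)

# Final health check from a dedicated container.
wf.step(
    "health-check",
    callback=lambda s: s.docker(
        image="curlimages/curl:latest",
        content="curl -sf https://app.example.com/health",  # hypothetical endpoint
    ).depends("deploy-all"),
)
```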
This example demonstrates why graph workflows are useful for non-trivial CI/CD pipelines. Multiple services are built in parallel, tests fan out, artifacts are pushed only after successful validation, and a final deployment plus health check gates the end of the workflow.

- build-* steps can run concurrently because they have no dependencies.
- test-* steps depend on their respective builds, so a broken build stops that service’s tests from running unnecessarily.
- push-* steps ensure only tested images are published; the worker image is pushed directly after build when there are no tests.
- deploy-all waits on all pushes to complete, which is where you would also typically add approvals, notifications, or stricter retry/timeout policies.
- health-check runs from a dedicated container to validate the deployed endpoints and fail fast if something is wrong, giving a clear last point in the graph to attach alerts or rollbacks.
Best Practices
Keep Steps Atomic
Each step should have a single, clear purpose:
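A hedged illustration of the difference:

```python
# Avoid: one step doing three unrelated things
wf.step("do-everything", "make build && make test && make deploy")

# Prefer: one clear purpose per step, wired together with dependencies
wf.step("build", "make build")
wf.step("test", "make test").depends("build")
wf.step("deploy", "make deploy").depends("test")
```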
Use Dependencies Wisely
Only add dependencies when truly needed:
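For example:

```python
# Avoid: forcing a chain between unrelated steps
wf.step("lint", "ruff check .")
wf.step("docs", "mkdocs build").depends("lint")  # docs don't need lint results

# Prefer: leave independent steps free to run in parallel
wf.step("lint", "ruff check .")
wf.step("docs", "mkdocs build")
```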
Name Outputs Clearly
Use descriptive names for output variables:
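For example:

```python
# Avoid: opaque names
wf.step("query", "psql -tAc 'select count(*) from users'").output("OUT1")

# Prefer: names that say what the value is
wf.step("query", "psql -tAc 'select count(*) from users'").output("ACTIVE_USER_COUNT")
```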
Add Descriptions
Document complex steps:
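Using the assumed step-level `.description(...)` from earlier:

```python
wf.step("reindex", "python manage.py search_index --rebuild").description(
    "Rebuilds the search index; safe to re-run, takes ~10 minutes"
)
```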
Next Steps
Examples
Browse real-world workflow patterns