MCP Integration

Connect Kubiya to any AI assistant (Claude, ChatGPT, Cursor) using the Model Context Protocol (MCP). Write simple workflows and let AI execute them on behalf of authenticated users.

Identity-Aware Workflow Execution

When users interact with Kubiya through the Composer UI or other interfaces, the MCP server ensures workflows execute with their identity. This provides:

  • User Attribution - Every workflow run is tied to the user who initiated it
  • Permission Enforcement - Users can only execute workflows they have access to
  • Audit Compliance - Complete traceability of who did what and when
  • Multi-tenant Isolation - Users’ workflows and data remain separate

Quick Start (Just a Few Lines of Code!)

import asyncio
from mcp_use import MCPClient
from mcp_use.adapters.langchain_adapter import LangChainAdapter

async def main():
    # Configure and connect to Kubiya
    client = MCPClient.from_dict({
        "mcpServers": {
            "kubiya": {
                "command": "python3",
                "args": ["-m", "kubiya_workflow_sdk.mcp.server"]
            }
        }
    })

    # That's it! Now use the tools
    adapter = LangChainAdapter()
    tools = await adapter.create_tools(client)

asyncio.run(main())

What Can You Do?

The Kubiya MCP server exposes a set of powerful tools to any AI. The most commonly used are:

🔧 Available Tools

| Tool | What it does | Example |
| --- | --- | --- |
| `define_workflow` | Create workflows from simple Python code | Create deployment pipelines |
| `execute_workflow` | Run workflows with parameters | Deploy to production |
| `list_workflows` | See all your workflows | Check available automations |
| `validate_workflow` | Check workflow syntax | Validate before running |
| `export_workflow` | Export as YAML/JSON | Share with team |
| `get_execution` | Track workflow status | Monitor deployments |

Creating Workflows - It’s Simple!

No decorators, no complexity. Just simple, clean Python:

Hello World

from kubiya_workflow_sdk.dsl import Workflow

wf = Workflow("hello-world")
wf.description("My first workflow!")
wf.step("greet", "echo 'Hello, Kubiya!'")
wf.step("date", "date")
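Under the hood, each call simply records another step in order. A toy stand-in class (not the real SDK implementation) makes the builder pattern concrete:

```python
# Toy model of the DSL's builder pattern -- NOT the real SDK class.
# Each call appends a step in order; returning self also allows
# chained calls, though the docs above use one statement per step.
class MiniWorkflow:
    def __init__(self, name: str):
        self.name = name
        self._description = ""
        self.steps = []

    def description(self, text: str):
        self._description = text
        return self

    def step(self, name: str, command: str):
        self.steps.append({"name": name, "command": command})
        return self

wf = MiniWorkflow("hello-world")
wf.description("My first workflow!")
wf.step("greet", "echo 'Hello, Kubiya!'")
wf.step("date", "date")
print([s["name"] for s in wf.steps])  # ['greet', 'date']
```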

Real-World Example: Deploy Pipeline

from kubiya_workflow_sdk.dsl import Workflow

wf = Workflow("deploy-app")
wf.description("Deploy application to production")

# Add parameters
wf.params(app="myapp", env="staging", version="latest")

# Build and test
wf.step("build", "docker build -t {{app}}:{{version}} .")
wf.step("test", "docker run {{app}}:{{version}} npm test")

# Deploy based on environment
wf.step("deploy-staging", 
    "kubectl apply -f k8s/staging/",
    condition="{{env}} == 'staging'"
)
wf.step("deploy-prod",
    "kubectl apply -f k8s/production/", 
    condition="{{env}} == 'production'"
)

# Verify deployment
wf.step("verify", "curl https://{{app}}-{{env}}.example.com/health")
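The `{{param}}` placeholders are resolved against the workflow's parameters at execution time. Conceptually, the substitution works like this (an illustrative sketch of the templating idea, not the engine's actual implementation):

```python
import re

# Illustrative only: a minimal sketch of how {{param}} placeholders
# in step commands can be filled in from workflow parameters.
# The real Kubiya engine performs this substitution for you.
def render_command(command: str, params: dict) -> str:
    """Replace each {{name}} placeholder with its parameter value."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(params[m.group(1)]),
        command,
    )

params = {"app": "myapp", "version": "1.2.0"}
print(render_command("docker build -t {{app}}:{{version}} .", params))
# docker build -t myapp:1.2.0 .
```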

Connect Your AI Assistant

Option 1: Use with Any LangChain LLM

from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

# Setup
client = MCPClient.from_dict({
    "mcpServers": {
        "kubiya": {
            "command": "python3",
            "args": ["-m", "kubiya_workflow_sdk.mcp.server"],
            "env": {"KUBIYA_API_KEY": "your-key"}  # For execution
        }
    }
})

# Create AI agent
llm = ChatOpenAI(model="gpt-4")
agent = MCPAgent(llm=llm, client=client)

# Let AI create workflows! (run this inside an async function)
result = await agent.run(
    "Create a workflow that backs up my database every night"
)

Option 2: Direct Tool Usage (No LLM)

# Get tools
adapter = LangChainAdapter()
tools = await adapter.create_tools(client)

# Find the tool you need
define_tool = next(t for t in tools if t.name == "define_workflow")

# Create workflow directly
result = await define_tool.ainvoke({
    "name": "backup-db",
    "code": '''
from kubiya_workflow_sdk.dsl import Workflow

wf = Workflow("backup-db")
wf.step("dump", "pg_dump mydb > backup.sql")
wf.step("compress", "gzip backup.sql")
wf.step("upload", "aws s3 cp backup.sql.gz s3://backups/")
'''
})

Real Examples That Work

1. Data Processing Pipeline

wf = Workflow("process-data")
wf.params(input="/data/raw", output="/data/processed")

# Process files in parallel
wf.parallel_steps(
    "process-files",
    items=["file1.csv", "file2.csv", "file3.csv"],
    command="python process.py {{input}}/{{item}} {{output}}/{{item}}"
)
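Conceptually, `parallel_steps` fans the command template out over the `items` list, with `{{item}}` bound to each element in turn. A sketch of that expansion (the real engine runs the resulting commands concurrently):

```python
# Sketch of how a parallel step's command template expands over its
# items list; the actual SDK handles scheduling and concurrency.
def expand_parallel(command: str, items: list, params: dict) -> list:
    rendered = []
    for item in items:
        values = {**params, "item": item}
        out = command
        for name, value in values.items():
            out = out.replace("{{" + name + "}}", str(value))
        rendered.append(out)
    return rendered

commands = expand_parallel(
    "python process.py {{input}}/{{item}} {{output}}/{{item}}",
    items=["file1.csv", "file2.csv"],
    params={"input": "/data/raw", "output": "/data/processed"},
)
print(commands[0])
# python process.py /data/raw/file1.csv /data/processed/file1.csv
```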

2. Infrastructure Automation

wf = Workflow("setup-infra")
wf.params(provider="aws", region="us-east-1")

wf.step("terraform-init", "terraform init")
wf.step("terraform-plan", "terraform plan -out=tfplan")
wf.step("terraform-apply", "terraform apply tfplan")

3. CI/CD Pipeline

wf = Workflow("ci-pipeline")
wf.step("checkout", "git checkout main")
wf.step("install", "npm install")
wf.step("lint", "npm run lint")
wf.step("test", "npm test")
wf.step("build", "npm run build")
wf.step("deploy", "npm run deploy:prod", condition="{{branch}} == 'main'")
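A `condition` string gates its step: parameters are substituted first, then the comparison is evaluated. Here is a simplified sketch of equality conditions like the one above (the engine's actual condition syntax may be richer):

```python
# Simplified sketch of evaluating an equality condition such as
# "{{branch}} == 'main'". Not the engine's implementation -- just the
# idea: substitute parameters, then compare the two sides.
def should_run(condition: str, params: dict) -> bool:
    for name, value in params.items():
        condition = condition.replace("{{" + name + "}}", str(value))
    left, _, right = condition.partition("==")
    return left.strip().strip("'\"") == right.strip().strip("'\"")

print(should_run("{{branch}} == 'main'", {"branch": "main"}))     # True
print(should_run("{{env}} == 'production'", {"env": "staging"}))  # False
```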

Installation

# Install the SDK and mcp-use
pip install kubiya-workflow-sdk mcp-use

# For AI agents, also install your preferred LLM
pip install langchain-openai  # For OpenAI
pip install langchain-anthropic  # For Claude

Running the MCP Server

The MCP server starts automatically when you connect, but you can also run it standalone:

# Run the server
python -m kubiya_workflow_sdk.mcp.server

Authentication Options

The MCP server supports two authentication approaches:

1. Simple API Key (Getting Started)

Perfect for development and simple integrations:

# Locate the execution tool, then pass the API key as a parameter
execute_tool = next(t for t in tools if t.name == "execute_workflow")
result = await execute_tool.ainvoke({
    "name": "my-workflow",
    "params": {"env": "production"},
    "api_key": "your-kubiya-api-key"
})

# Or use environment variable
export KUBIYA_API_KEY="your-api-key"
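In your own client code, a small helper can resolve the key from either source. This helper is hypothetical (not part of the SDK); `KUBIYA_API_KEY` is the variable name used above:

```python
import os

# Hypothetical helper (not part of the SDK): prefer an explicitly
# passed key, fall back to the KUBIYA_API_KEY environment variable,
# and fail fast when neither is set.
def resolve_api_key(explicit=None):
    key = explicit or os.environ.get("KUBIYA_API_KEY")
    if not key:
        raise RuntimeError(
            "Set KUBIYA_API_KEY or pass api_key when executing workflows"
        )
    return key
```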

2. OAuth/OIDC Authentication (Production)

For enterprise deployments with proper authentication:

# Run server with OAuth/OIDC
python -m kubiya_workflow_sdk.mcp.server_auth \
  --auth-server https://your-auth-server.com \
  --auth-type oidc

Learn more in the Authentication Guide.

Tool Overview

The server exposes 8 powerful tools:

| Tool | Purpose | Requires API Key |
| --- | --- | --- |
| `define_workflow` | Create workflows from code | No |
| `create_workflow_from_json` | Create from JSON/YAML | No |
| `execute_workflow` | Run workflows | Yes (via parameter) |
| `get_execution` | Check execution status | No |
| `list_workflows` | List all workflows | No |
| `validate_workflow` | Validate workflow syntax | No |
| `export_workflow` | Export to YAML/JSON | No |
| `validate_workflow_code` | Validate Python code | No |

FAQ

Q: Do I need decorators?
A: No! Just use the simple DSL: wf = Workflow("name") and add steps.

Q: Can I use this with Claude/ChatGPT/Cursor?
A: Yes! Any tool that supports MCP or LangChain can connect.

Q: Do I need an API key?
A: Only for executing workflows. Creating and validating workflows works without a key.

Q: Is it production-ready?
A: Yes! The MCP server is battle-tested and ready for production use.