Kubiya MCP Server

Connect any AI assistant (Claude, ChatGPT, Cursor, or custom LLMs) to the full power of Kubiya using the Model Context Protocol (MCP). The Kubiya CLI includes a comprehensive MCP server that requires zero dependencies and runs anywhere with just a single KUBIYA_API_KEY.

🚀 Why Kubiya MCP?

🏢 Enterprise-Ready AI

Serverless Agents with production-grade execution, policy enforcement, and audit trails

🛠️ Serverless Tools

Container-based tools that run anywhere - from simple scripts to complex applications

🏃 Local & Cloud Runners

Execute on your infrastructure or use Kubiya-hosted runners for instant scalability

🛡️ Policy Enforcement

OPA-based policies with pre-execution validation and comprehensive access control

✨ Zero Setup Required

Unlike other MCP servers, Kubiya requires no additional dependencies:

# That's it! Just the Kubiya CLI
curl -fsSL https://raw.githubusercontent.com/kubiyabot/cli/main/install.sh | bash

# Set your API key
export KUBIYA_API_KEY="kb-..."

# Start MCP server (runs anywhere!)
kubiya mcp serve

🎯 Key Capabilities for LLMs

1. Serverless AI Agents

  • Conversational Agents: Multi-turn conversations with memory and context
  • Tool-Calling Agents: Agents that can execute workflows and tools autonomously
  • Custom Agent Logic: Define agent behavior, personality, and capabilities
  • Identity-Aware: Execute with proper user attribution and permissions

2. 🏠 Execution on Your Infrastructure

Critical difference: Your data and workloads never leave your environment.

🔒 Security & Compliance:

  • 🏠 Data Locality: Everything executes in your environment
  • 🛡️ Zero Trust: Policy validation before execution
  • 📋 Compliance Ready: GDPR, SOC2 in your infrastructure
  • 🔐 Air-Gap Capable: Works completely offline

3. 🧠 LLM-Native Design

Every component designed for AI agent interaction:

🔧 21+ LLM-Optimized MCP Tools

Every tool designed for AI agent understanding and execution:

Core Execution Tools

| Tool | Description | Use Case |
|------|-------------|----------|
| execute_tool | Run any tool with live streaming | Execute Docker containers, scripts, APIs |
| create_on_demand_tool | Create and run tools from definitions | Build custom automation on the fly |
| execute_workflow | Run complete workflows | Complex multi-step automation |
| execute_whitelisted_tool | Run pre-approved tools | Secure, controlled tool execution |
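For example, create_on_demand_tool accepts a tool definition and runs it immediately. A minimal sketch of such a definition, modeled on the whitelisted-tool entries in the MCP configuration file format (name, description, type, content, image, args); treat the exact field set as an assumption and check list_tools() on a live server for the authoritative schema:

```python
# Sketch of a tool definition for create_on_demand_tool.
# Field names mirror the whitelisted-tool entries in the MCP config
# file format; the precise schema is an assumption for illustration.
def disk_usage_tool() -> dict:
    return {
        "name": "disk-usage",
        "description": "Report disk usage on the runner",
        "type": "docker",
        "image": "alpine:latest",   # any container image the runner can pull
        "content": "df -h",        # script executed inside the container
        "args": [],                 # no parameters for this simple tool
    }

tool_def = disk_usage_tool()
print(tool_def["name"])
```

An AI agent would pass this dictionary as the tool definition in a create_on_demand_tool call, then receive the streamed output of `df -h` from the container.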

Platform Management

| Tool | Description | Use Case |
|------|-------------|----------|
| check_runner_health | Health status of runners | Monitor system health |
| find_available_runner | Auto-select best runner | Optimal execution placement |
| list_agents | List AI agents | Discover available agents |
| chat_with_agent | Conversational agent interaction | Multi-turn AI conversations |
| list_secrets | List available secrets | Check available credentials |
| list_integrations | List available integrations | Gather system-wide info |

Tool & Source Management

| Tool | Description | Use Case |
|------|-------------|----------|
| list_sources | List tool repositories | Discover available tools |
| execute_tool_from_source | Run tools from specific sources | Execute from GitHub/GitLab repos |
| discover_source | Explore source contents | Preview tools before execution |
| list_integrations | List available integrations | See AWS, K8s, DB connections |

Knowledge & Security

| Tool | Description | Use Case |
|------|-------------|----------|
| search_kb | Search knowledge base | Find documentation, procedures |
| list_kb | Browse knowledge entries | Explore organizational knowledge |
| list_secrets | List available secrets | Check available credentials |
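Like the execution tools, these are invoked with plain JSON arguments over MCP. A sketch of plausible call payloads follows; the parameter names ("query", "limit") are illustrative assumptions, not a documented schema, and list_tools() on a live server reports the real input schemas:

```python
# Illustrative MCP call payloads for the knowledge and security tools.
# Parameter names are assumptions for illustration only.
calls = [
    ("search_kb", {"query": "incident response runbook", "limit": 5}),
    ("list_kb", {}),        # browse all knowledge entries
    ("list_secrets", {}),   # list secret names, not their values
]

for tool_name, arguments in calls:
    print(f"session.call_tool({tool_name!r}, {arguments!r})")
```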

📋 Quick Setup Examples

Claude Desktop Integration

{
  "mcpServers": {
    "kubiya": {
      "command": "kubiya",
      "args": ["mcp", "serve"],
      "env": {
        "KUBIYA_API_KEY": "kb-your-api-key-here"
      }
    }
  }
}

Cursor Integration

{
  "mcp.servers": {
    "kubiya": {
      "command": "kubiya",
      "args": ["mcp", "serve"],
      "env": {
        "KUBIYA_API_KEY": "kb-your-api-key-here"
      }
    }
  }
}

Custom LLM Integration (Python)

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def use_kubiya_tools():
    # Connect to the Kubiya MCP server over stdio
    server_params = StdioServerParameters(
        command="kubiya",
        args=["mcp", "serve"],
        env={"KUBIYA_API_KEY": "kb-your-key"}
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
        async with ClientSession(read, write) as session:
            # Initialize
            await session.initialize()
            
            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")
            
            # Execute a tool
            result = await session.call_tool(
                "execute_tool",
                {
                    "tool_name": "kubectl",
                    "args": {"command": "get pods -A"},
                    "runner": "auto"
                }
            )
            print(result.content)

# Run it
asyncio.run(use_kubiya_tools())

🛠️ Real-World Examples

1. Infrastructure Automation

# In Claude/ChatGPT, just say:
"Create a tool that checks our Kubernetes cluster health and restarts any failed pods"

# Kubiya will create and execute:
# - kubectl get pods --all-namespaces --field-selector=status.phase=Failed
# - kubectl delete pod <failed-pods> --grace-period=0
# - kubectl get pods --watch (to verify restart)

2. DevOps Workflows

# Ask your AI:
"Deploy our application to staging with version 2.1.0, run tests, and promote to production if successful"

# Kubiya executes the complete workflow:
# - docker build -t app:2.1.0
# - kubectl apply -f k8s/staging/ 
# - run integration tests
# - if tests pass: kubectl apply -f k8s/production/
# - send notification to Slack

3. Data Engineering

# Natural language request:
"Process the daily sales data, validate it, transform it to our schema, and load it into the warehouse"

# Kubiya handles the entire pipeline:
# - Download data from S3
# - Python/pandas data validation
# - ETL transformations  
# - Load to Snowflake/BigQuery
# - Data quality checks
# - Alerting on failures

🚀 Advanced Features

Policy-Based Access Control

# Enable policy enforcement
export KUBIYA_OPA_ENFORCE=true

# Create policies via CLI
kubiya policy create --name "prod-access" --file policy.rego

# Test permissions
kubiya policy test-tool --tool kubectl --args '{"command": "delete pod"}' --runner prod

Runner Auto-Selection

# Automatic runner selection based on:
# - Health status
# - Current load  
# - Geographic location
# - Resource requirements
# "runner": "auto" lets Kubiya pick the best runner
{
  "tool_name": "heavy-computation",
  "runner": "auto",
  "args": {"dataset": "large"}
}

Platform API Access

# Enable full platform capabilities  
kubiya mcp serve --allow-platform-apis

# Now AI can manage:
# - Create/delete runners
# - Manage integrations
# - Control agent deployments
# - Administer knowledge base

🔧 Configuration Options

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| KUBIYA_API_KEY | Required - Your Kubiya API key | None |
| KUBIYA_API_URL | Kubiya API endpoint | https://api.kubiya.ai |
| KUBIYA_OPA_ENFORCE | Enable policy enforcement | false |
| KUBIYA_DEFAULT_RUNNER | Default runner for execution | auto |
| KUBIYA_MCP_ALLOW_PLATFORM_APIS | Enable platform management tools | false |

MCP Server Options

# Basic server
kubiya mcp serve

# With platform APIs enabled
kubiya mcp serve --allow-platform-apis

# With policy enforcement  
KUBIYA_OPA_ENFORCE=true kubiya mcp serve

# Custom configuration file
kubiya mcp serve --config ~/.kubiya/mcp-config.json

Configuration File Example (~/.kubiya/mcp-server.json)

{
  "enable_runners": true,
  "allow_platform_apis": false,
  "enable_opa_policies": false,
  "allow_dynamic_tools": false,
  "verbose_logging": false,
  "whitelisted_tools": [
    {
      "name": "kubectl",
      "alias": "",
      "description": "Executes kubectl commands. For namespace-scoped resources, include '-n <namespace>' in the command. Use '--all-namespaces' for cluster-wide queries. Some resources like nodes and persistent volumes are cluster-scoped and don't require a namespace.",
      "type": "docker",
      "content": "\nset -eu\nTOKEN_LOCATION=\"/tmp/kubernetes_context_token\"\nCERT_LOCATION=\"/tmp/kubernetes_context_cert\"\n# Inject in-cluster context using the temporary token file\nif [ -f $TOKEN_LOCATION ] && [ -f $CERT_LOCATION ]; then\n    KUBE_TOKEN=$(cat $TOKEN_LOCATION)\n    kubectl config set-cluster in-cluster --server=https://kubernetes.default.svc --certificate-authority=$CERT_LOCATION > /dev/null 2>&1\n    kubectl config set-credentials in-cluster --token=$KUBE_TOKEN > /dev/null 2>&1\n    kubectl config set-context in-cluster --cluster=in-cluster --user=in-cluster > /dev/null 2>&1\n    kubectl config use-context in-cluster > /dev/null 2>&1\nelse\n    echo \"Error: Kubernetes context token or cert file not found at $TOKEN_LOCATION or $CERT_LOCATION respectively.\"\n    exit 1\nfi\n\n\n    #!/bin/bash\n    set -e\n\n    # Show the command being executed\n    echo \"🔧 Executing: kubectl $command\"\n\n    # Run the kubectl command\n    if eval \"kubectl $command\"; then\n        echo \"✅ Command executed successfully\"\n    else\n        echo \"❌ Command failed: kubectl $command\"\n        exit 1\n    fi\n    ",
      "args": [
        {
          "name": "command",
          "type": "string",
          "description": "The full kubectl command to execute. Examples include (but are not limited to):\n- 'get pods -n default'\n- 'create namespace test'\n- 'get pods --all-namespaces'\n- 'get nodes'  # cluster-scoped resource, no namespace needed\n- 'describe node my-node-1'",
          "required": true
        }
      ],
      "env": null,
      "with_files": [
        {
          "source": "/var/run/secrets/kubernetes.io/serviceaccount/token",
          "destination": "/tmp/kubernetes_context_token"
        },
        {
          "source": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
          "destination": "/tmp/kubernetes_context_cert"
        }
      ],
      "with_volumes": null,
      "icon_url": "https://kubernetes.io/icons/icon-128x128.png",
      "image": "kubiya/kubectl-light:latest",
      "mermaid": "graph TD\n    %% Styles\n    classDef triggerClass fill:#3498db,color:#fff,stroke:#2980b9,stroke-width:2px,font-weight:bold\n    classDef paramClass fill:#2ecc71,color:#fff,stroke:#27ae60,stroke-width:2px\n    classDef execClass fill:#e74c3c,color:#fff,stroke:#c0392b,stroke-width:2px,font-weight:bold\n    classDef envClass fill:#f39c12,color:#fff,stroke:#f1c40f,stroke-width:2px\n\n    %% Main Components\n    Trigger(\"Trigger\"):::triggerClass\n    Params(\"Parameters\"):::paramClass\n    Exec(\"kubectl\"):::execClass\n    Env(\"Environment\"):::envClass\n\n    %% Flow\n    Trigger --> Params --> Exec\n    Env --> Exec\n\n    %% Trigger Options\n    User(\"User\")\n    API(\"API\")\n    Webhook(\"Webhook\")\n    Cron(\"Scheduled\")\n    User --> Trigger\n    API --> Trigger\n    Webhook --> Trigger\n    Cron --> Trigger\n\n    %% Parameters\n    subgraph Parameters[\"Parameters\"]\n        direction TB\n        Param0(\"command (Required)<br/>The full kubectl command to execute. Examples include (but are not limited to):<br/>- 'get pods -n default'<br/>- 'create namespace test'<br/>- 'get pods --all-namespaces'<br/>- 'get nodes'  # cluster-scoped resource, no namespace needed<br/>- 'describe node my-node-1'<br/>Type: string\"):::paramClass\n    end\n    Parameters --- Params\n\n    %% Execution\n    subgraph Execution[\"Execution\"]\n        direction TB\n        Code(\"Script: <br/>set -eu<br/>TOKEN_LOCATION=\\\"/tmp/kubernetes_context_t...\")\n        Type(\"Type: Docker\")\n        Image(\"Docker Image: kubiya/kubectl-light:latest\")\n    end\n    Execution --- Exec\n\n    %% Environment\n    subgraph Environment[\"Environment\"]\n        direction TB\n    end\n    Environment --- Env\n\n    %% Context Note\n    ContextNote(\"Parameter values can be<br/>fetched from context<br/>based on the trigger\")\n    ContextNote -.-> Params",
      "runner": "core-testing-2"
    }
  ]
}

This configuration file makes the MCP server expose a single kubectl tool and hides internal Kubiya operations from the MCP client.

🎯 Use Cases for AI Applications

1. Enterprise Automation Assistant

  • User: “Please backup our production database and notify the team”
  • AI + Kubiya: Executes secure backup workflow with proper credentials and notifications

2. DevOps Copilot

  • User: “The app is down in production, please investigate and fix”
  • AI + Kubiya: Checks logs, identifies issues, applies fixes, and reports back

3. Data Analysis Agent

  • User: “Analyze last month’s sales trends and create a report”
  • AI + Kubiya: Queries databases, runs analysis scripts, generates visualizations

4. Infrastructure Management

  • User: “Scale up our Kubernetes cluster for the upcoming traffic spike”
  • AI + Kubiya: Safely scales infrastructure with proper validation and monitoring

🔒 Security & Compliance

Identity-Aware Execution

  • Every action is tied to the authenticated user
  • Granular permissions via OPA policies
  • Complete audit trails for compliance

Secure by Default

  • Tools run in isolated containers
  • Secrets are encrypted and managed securely
  • Network policies control access

Enterprise Features

  • SSO/OIDC integration
  • Role-based access control (RBAC)
  • SOC2 compliant infrastructure
  • Air-gapped deployment options

📚 Next Steps

🆘 Support & Community


Ready to supercharge your AI with enterprise-grade automation? The Kubiya MCP server brings the full power of the Kubiya platform to any AI assistant with zero setup complexity.