
Local Runners

Deploy and configure local runners in your Kubernetes environment

Local runners enable you to execute agent tools directly in your own infrastructure. By deploying runners in your Kubernetes environment, you maintain complete control over security, networking, and resource allocation while still leveraging the power of Kubiya's agents.

Prerequisites

Before setting up a local runner, you'll need:

  • A Kubernetes cluster (v1.16+)
  • kubectl or Helm installed and configured to access your cluster
  • Cluster admin permissions (for initial setup)
  • Outbound network access from your cluster to Kubiya's platform

Installation Options

Kubiya supports two installation methods for local runners: raw Kubernetes manifests and a Helm chart.

Using Kubernetes Manifests

This method uses standard Kubernetes manifests to deploy the runner directly:

Create a Namespace

First, create a dedicated namespace for Kubiya components:

kubectl create namespace kubiya

Apply the Runner Manifest

Deploy the runner using the manifest below. Replace [RUNNER_ID] and [RUNNER_SECRET] with your unique values.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubiya-service-account
  namespace: kubiya
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubiya-runner
  namespace: kubiya
  labels:
    app: kubiya-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubiya-runner
  template:
    metadata:
      labels:
        app: kubiya-runner
    spec:
      serviceAccountName: kubiya-service-account
      containers:
        - name: kubiya-runner
          image: kubiya/runner:latest
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "0.5"
              memory: "512Mi"
          env:
            - name: RUNNER_ID
              value: "[RUNNER_ID]"
            - name: RUNNER_SECRET
              value: "[RUNNER_SECRET]"
            - name: KUBIYA_API_URL
              value: "https://api.kubiya.ai"
      imagePullSecrets:
        - name: kubiya-registry-secret

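
The manifest above stores the runner credentials as plain-text environment values. As a minimal sketch of a more secure alternative, you could keep them in a Kubernetes Secret and reference it with secretKeyRef. The Secret name kubiya-runner-credentials and its keys are illustrative, not a Kubiya convention:

```yaml
# Hypothetical Secret holding the runner credentials
apiVersion: v1
kind: Secret
metadata:
  name: kubiya-runner-credentials
  namespace: kubiya
type: Opaque
stringData:
  runner-id: "[RUNNER_ID]"
  runner-secret: "[RUNNER_SECRET]"
---
# In the Deployment's container spec, the plain env values
# could then be replaced with references like:
#   env:
#     - name: RUNNER_ID
#       valueFrom:
#         secretKeyRef:
#           name: kubiya-runner-credentials
#           key: runner-id
#     - name: RUNNER_SECRET
#       valueFrom:
#         secretKeyRef:
#           name: kubiya-runner-credentials
#           key: runner-secret
```

This keeps the credentials out of the Deployment manifest itself, so the manifest can be versioned without exposing secrets.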
Create Registry Secret

Create a secret to pull images from Kubiya's registry:

kubectl create secret docker-registry kubiya-registry-secret \
  --namespace kubiya \
  --docker-server=registry.kubiya.ai \
  --docker-username=[PROVIDED_USERNAME] \
  --docker-password=[PROVIDED_PASSWORD]

Verify Deployment

Confirm that the runner pod is in the Running state:

kubectl get pods -n kubiya
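
To wait for the rollout to finish and inspect the deployment's status in one pass, a sketch using standard kubectl commands:

```shell
# Block until the rollout completes (or fail after 2 minutes)
kubectl rollout status deployment/kubiya-runner -n kubiya --timeout=120s

# Show replica status and recent events for the deployment
kubectl describe deployment kubiya-runner -n kubiya
```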

Obtaining Runner Credentials

To set up your runner, you'll need unique credentials (ID and secret) for your environment. These can be obtained through the Kubiya platform:

Access the Runners Page

Log in to the Kubiya platform and navigate to the Runners section.

Add a New Local Runner

Click "Add Local Runner" and follow the on-screen instructions.

Assign a Nickname

Give your runner a memorable name that identifies its purpose or environment.

Copy Credentials

The system will generate a unique ID and secret. Copy these values for use in your installation.

Complete the Setup

Use the credentials in either the Kubernetes manifest or Helm chart installation methods.

Runner credentials are sensitive. Store them securely and avoid checking them into source control.
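
One way to keep the credentials out of manifests and source control is to create them directly as a Kubernetes Secret from the command line. The secret and key names below are illustrative:

```shell
kubectl create secret generic kubiya-runner-credentials \
  --namespace kubiya \
  --from-literal=runner-id=[RUNNER_ID] \
  --from-literal=runner-secret=[RUNNER_SECRET]
```

The Deployment can then consume these values via secretKeyRef environment references instead of inline strings.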

Resource Requirements

Local runners have the following minimum resource requirements:

| Resource | Minimum | Recommended |
|----------|---------|-------------|
| CPU      | 0.5     | 1+          |
| Memory   | 512Mi   | 1Gi+        |
| Disk     | 1Gi     | 5Gi         |

For high-load environments, adjust these values based on your agent tool requirements.

Additional Configuration

Scaling Runners

For high-availability or high-load environments, you can scale the number of runner instances:

# Using kubectl
kubectl scale deployment kubiya-runner -n kubiya --replicas=3
 
# Using Helm
helm upgrade kubiya-runner kubiya/runner \
  --namespace kubiya \
  --set replicaCount=3
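
If your cluster runs the metrics server, autoscaling is another option. A minimal HorizontalPodAutoscaler sketch, where the replica bounds and CPU threshold are illustrative values rather than Kubiya recommendations:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kubiya-runner
  namespace: kubiya
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kubiya-runner
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```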

Enhanced Permissions

Some agent tools require additional permissions to access cluster resources. For example, the Kubernetes Crew agent requires broader cluster access:

kubectl create clusterrolebinding kubiya-sa-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubiya:kubiya-service-account

Granting cluster-admin permissions gives the runner broad access to your cluster. Consider using more granular permissions for production environments.
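
As a sketch of a more granular alternative to cluster-admin, you could bind the service account to a read-only ClusterRole. The resource list below is an illustrative example; the actual permissions your agent tools need will vary:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubiya-runner-readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubiya-runner-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubiya-runner-readonly
subjects:
  - kind: ServiceAccount
    name: kubiya-service-account
    namespace: kubiya
```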

Network Configuration

Local runners need outbound network access to:

  • api.kubiya.ai (TCP port 443)
  • Any services that your agent tools need to interact with

No inbound connectivity is required to the runner.
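
If your cluster enforces NetworkPolicies, the runner's egress can be restricted accordingly. Note that Kubernetes NetworkPolicies match IP blocks rather than hostnames, so this sketch allows outbound HTTPS and DNS to any destination instead of api.kubiya.ai specifically; tighten the ipBlock if you know the platform's address ranges:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kubiya-runner-egress
  namespace: kubiya
spec:
  podSelector:
    matchLabels:
      app: kubiya-runner
  policyTypes:
    - Egress
  egress:
    # Outbound HTTPS (Kubiya platform and any agent tool targets)
    - ports:
        - protocol: TCP
          port: 443
    # DNS resolution
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```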

Private Registry Support

To use a private container registry for your agent tools:

# For Kubernetes manifest
imagePullSecrets:
  - name: your-registry-secret
 
# For Helm values
imagePullSecrets:
  - your-registry-secret

Enforcer Service

For advanced policy enforcement capabilities, you can deploy the Enforcer service alongside your runners. This service:

  • Enforces attribute-based access control (ABAC) policies
  • Manages JIT (Just-In-Time) permissions for AWS and other services
  • Integrates with your existing governance frameworks

The Enforcer service is required for the AWS JIT Permission Crew use case and similar scenarios requiring fine-grained permission management.

Common Issues & Troubleshooting

Runner Not Connecting

If your runner doesn't appear as "Connected" in the Kubiya platform:

  1. Check that the pod is running:

    kubectl get pods -n kubiya
  2. Examine the runner logs:

    kubectl logs -n kubiya deployment/kubiya-runner
  3. Verify network connectivity to api.kubiya.ai:

    kubectl exec -it -n kubiya deploy/kubiya-runner -- curl -v https://api.kubiya.ai/health

Image Pull Errors

For ImagePullBackOff errors:

  1. Verify your registry secret:

    kubectl get secret -n kubiya kubiya-registry-secret -o yaml
  2. Check if your cluster can reach the registry:

    kubectl run -it --rm --restart=Never -n kubiya test-image \
      --image=busybox -- nslookup registry.kubiya.ai
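
  3. Inspect the pod's events, which usually state the exact pull failure (bad credentials vs. an unreachable registry):

    kubectl describe pod -n kubiya -l app=kubiya-runner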

Next Steps