# Local Runners
Deploy and configure local runners in your Kubernetes environment
Local runners enable you to execute agent tools directly in your own infrastructure. By deploying runners in your Kubernetes environment, you maintain complete control over security, networking, and resource allocation while still leveraging the power of Kubiya's agents.
## Prerequisites
Before setting up a local runner, you'll need:
- A Kubernetes cluster (v1.16+)
- `kubectl` or Helm installed and configured to access your cluster
- Cluster admin permissions (for initial setup)
- Outbound network access from your cluster to Kubiya's platform
## Installation Options
Kubiya supports two installation methods for local runners:
### Using Kubernetes Manifests
This method uses standard Kubernetes manifests to deploy the runner directly:
**1. Create a Namespace**
First, create a dedicated namespace for Kubiya components:
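For example, using `kubiya` as the namespace name (any dedicated namespace works; later commands in this guide assume this name):

```bash
kubectl create namespace kubiya
```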
**2. Apply the Runner Manifest**

Deploy the runner using the manifest below, replacing `[RUNNER_ID]` and `[RUNNER_SECRET]` with your unique values.
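The exact manifest is provided by the Kubiya platform when you register the runner (see "Obtaining Runner Credentials" below). The sketch here only illustrates the general shape: a Deployment that receives the credentials as environment variables. The image reference, labels, and environment variable names are assumptions, not the official manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubiya-runner
  namespace: kubiya
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubiya-runner
  template:
    metadata:
      labels:
        app: kubiya-runner
    spec:
      imagePullSecrets:
        - name: kubiya-registry          # created in the next step
      containers:
        - name: runner
          # Illustrative image reference; use the one from your generated manifest
          image: registry.kubiya.ai/runner:latest
          env:
            - name: RUNNER_ID
              value: "[RUNNER_ID]"       # your runner's unique ID
            - name: RUNNER_SECRET
              value: "[RUNNER_SECRET]"   # your runner's secret
```

For anything beyond a quick test, source `RUNNER_SECRET` from a Kubernetes Secret rather than inlining it in the manifest.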
**3. Create Registry Secret**
Create a secret to pull images from Kubiya's registry:
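A typical invocation looks like the following; the registry host and credential placeholders are illustrative, so use the values supplied with your runner credentials:

```bash
kubectl create secret docker-registry kubiya-registry \
  --namespace kubiya \
  --docker-server=registry.kubiya.ai \
  --docker-username="[REGISTRY_USER]" \
  --docker-password="[REGISTRY_TOKEN]"
```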
**4. Verify Deployment**

Confirm that the runner pod is running:
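Assuming the `kubiya` namespace from step 1:

```bash
kubectl get pods -n kubiya
```

The runner pod should show `Running` status with all containers ready.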
## Obtaining Runner Credentials
To set up your runner, you'll need unique credentials (ID and secret) for your environment. These can be obtained through the Kubiya platform:
**1. Access the Runners Page**
Log in to the Kubiya platform and navigate to the Runners section.
**2. Add a New Local Runner**
Click "Add Local Runner" and follow the on-screen instructions.
**3. Assign a Nickname**
Give your runner a memorable name that identifies its purpose or environment.
**4. Copy Credentials**
The system will generate a unique ID and secret. Copy these values for use in your installation.
**5. Complete the Setup**
Use the credentials in either the Kubernetes manifest or Helm chart installation methods.
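If you go the Helm route, the flow is typically a repo add followed by an install that passes the credentials as chart values. The repository URL, chart name, and value keys below are assumptions for illustration; check Kubiya's published Helm chart for the real ones:

```bash
# Hypothetical repo URL, chart name, and value keys; verify them
# against Kubiya's published Helm chart before use.
helm repo add kubiya https://charts.kubiya.ai
helm repo update
helm install kubiya-runner kubiya/runner \
  --namespace kubiya --create-namespace \
  --set runnerId="[RUNNER_ID]" \
  --set runnerSecret="[RUNNER_SECRET]"
```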
> **Note:** Runner credentials are sensitive. Store them securely and avoid checking them into source control.
## Resource Requirements
Local runners have the following minimum resource requirements:
| Resource | Minimum | Recommended |
|----------|---------|-------------|
| CPU      | 0.5     | 1+          |
| Memory   | 512Mi   | 1Gi+        |
| Disk     | 1Gi     | 5Gi         |
For high-load environments, adjust these values based on your agent tool requirements.
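In the runner's Deployment, these map to standard Kubernetes resource requests and limits, with the minimums as requests and the recommended values as limits. For example, under the runner container in the manifest sketch above:

```yaml
resources:
  requests:
    cpu: "500m"                # 0.5 CPU
    memory: "512Mi"
    ephemeral-storage: "1Gi"
  limits:
    cpu: "1"
    memory: "1Gi"
    ephemeral-storage: "5Gi"
```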
## Additional Configuration
### Scaling Runners
For high-availability or high-load environments, you can scale the number of runner instances:
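For example, with `kubectl scale`, assuming the Deployment name and namespace used earlier in this guide:

```bash
kubectl scale deployment kubiya-runner --replicas=3 -n kubiya
```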
### Enhanced Permissions
Some agent tools require additional permissions to access cluster resources. For example, the Kubernetes Crew agent requires broader cluster access:
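A common way to grant this is a `ClusterRoleBinding` on the runner's service account. The service account name here is an assumption; match it to whatever your runner pods actually run as:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubiya-runner-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubiya-runner    # assumed service account; match your pod spec
    namespace: kubiya
```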
> **Warning:** Granting cluster-admin permissions gives the runner broad access to your cluster. Consider using more granular permissions for production environments.
### Network Configuration
Local runners need outbound network access to:
- `api.kubiya.ai` (TCP port 443)
- Any services that your agent tools need to interact with
No inbound connectivity is required to the runner.
### Private Registry Support
To use a private container registry for your agent tools:
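A sketch of the usual two-step pattern, assuming tool pods run under the `default` service account in the `kubiya` namespace (adjust both to your setup; the registry host and credentials are placeholders):

```bash
# 1. Create a pull secret for the private registry (placeholder values)
kubectl create secret docker-registry my-private-registry \
  --namespace kubiya \
  --docker-server=registry.example.com \
  --docker-username="[USER]" \
  --docker-password="[PASSWORD]"

# 2. Attach it to the service account that tool pods run under
kubectl patch serviceaccount default -n kubiya \
  -p '{"imagePullSecrets": [{"name": "my-private-registry"}]}'
```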
### Enforcer Service
For advanced policy enforcement capabilities, you can deploy the Enforcer service alongside your runners. This service:
- Enforces attribute-based access control (ABAC) policies
- Manages JIT (Just-In-Time) permissions for AWS and other services
- Integrates with your existing governance frameworks
The Enforcer service is required for the AWS JIT Permission Crew use case and similar scenarios requiring fine-grained permission management.
## Common Issues & Troubleshooting
### Runner Not Connecting
If your runner doesn't appear as "Connected" in the Kubiya platform:
1. Check that the pod is running.
2. Examine the runner logs.
3. Verify network connectivity to `api.kubiya.ai`.
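The commands below cover all three checks, using the Deployment name and namespace assumed earlier in this guide:

```bash
# 1. Check that the pod is running
kubectl get pods -n kubiya

# 2. Examine the runner logs
kubectl logs deployment/kubiya-runner -n kubiya

# 3. Verify outbound connectivity to api.kubiya.ai from inside the cluster
kubectl run net-test --rm -it --restart=Never -n kubiya \
  --image=curlimages/curl -- curl -sv https://api.kubiya.ai
```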
### Image Pull Errors
For `ImagePullBackOff` errors:
1. Verify your registry secret.
2. Check if your cluster can reach the registry.
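For example (the secret name and registry host are the ones assumed earlier in this guide; substitute yours):

```bash
# 1. Inspect the pull secret's contents
kubectl get secret kubiya-registry -n kubiya \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d

# 2. Confirm the registry is reachable from inside the cluster
kubectl run registry-test --rm -it --restart=Never -n kubiya \
  --image=curlimages/curl -- curl -sv https://registry.kubiya.ai/v2/
```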
## Next Steps
- Configure permissions for your runners
- Deploy agents that use your local runners
- Set up the Enforcer service for policy management