
Tools Commands

Manage built-in and custom Tools using the Kubiya CLI.

kubiya tools

Manage the Tools that provide capabilities to your AI agents.

Available Commands

Command          Description
list             List all available tools (built-in and custom)
get              Get details about a specific tool
execute          Execute a specific tool with arguments
browse           Browse and interactively select tools
apply            Create or update a custom tool (upsert operation)
generate         Generate a basic tool scaffold/template
test             Test a tool locally with specific input
delete           Delete a custom tool
validate         Validate a tool definition
bundle           Bundle a Python function into a tool definition
import-openapi   Import OpenAPI spec as a tool

Command Examples

# List all available tools
kubiya tools list
 
# List by category
kubiya tools list --category aws
 
# Show only custom tools
kubiya tools list --custom
 
# Browse tools interactively
kubiya tools browse
 
# Browse a specific category
kubiya tools browse --category kubernetes
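
The other subcommands follow the same pattern. A quick sketch (assuming get and delete take the tool name as a positional argument, as execute does later on this page):

# Show details for a single tool
kubiya tools get database-backup
 
# Validate a definition before applying it
kubiya tools validate --file kubiya.yaml
 
# Remove a custom tool
kubiya tools delete database-backup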

Tool Definition Structure

Example Tool Definition

name: database-backup
description: Create database backups
version: 1.0.0
 
# Docker image that implements the tool logic
image: your-registry/database-backup:latest
 
# Input and output schema defined using JSON Schema format
schema:
  input:
    type: object
    properties:
      database:
        type: string
        description: Database name to backup
      retention_days:
        type: integer
        default: 7
        description: Days to retain the backup
    required:
      - database
  output:
    type: object
    properties:
      backup_id:
        type: string
        description: The ID of the created backup
      success:
        type: boolean
        description: Indicates if the backup was successful
 
# Optional labels for categorization
labels:
  type: database
  team: infrastructure
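
With the definition above saved as kubiya.yaml, a typical round trip looks like this (using the same commands shown later on this page; the --args payload must satisfy the input schema, and the database value here is just an example):

# Validate the definition before applying it
kubiya tools validate --file kubiya.yaml
 
# Create or update the tool
kubiya tools apply --file kubiya.yaml
 
# Run it with schema-conformant arguments
kubiya tools execute database-backup --args '{"database": "orders", "retention_days": 14}'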

Tool Development Guide

1. Generate a tool scaffold

The first step is to generate a scaffold for your new tool:

# Generate a Python-based tool scaffold
kubiya tools generate --name aws-resource-manager --template python
 
# Or use interactive generation
kubiya tools generate --interactive

This creates a directory structure with all necessary files:

aws-resource-manager/
├── Dockerfile                # Container definition
├── README.md                 # Documentation
├── kubiya.yaml               # Tool definition
├── src/
│   ├── main.py               # Tool implementation
│   └── requirements.txt      # Dependencies
└── tests/
    ├── test_tool.py          # Unit tests
    └── sample_input.json     # Sample inputs for testing

2. Implement your tool logic

Edit the main implementation file:

import boto3
import json
import sys
 
def main():
    # Read the JSON input document from stdin
    input_data = json.loads(sys.stdin.read())
    
    # Extract the input parameters
    action = input_data.get("action")
    resource_type = input_data.get("resource_type")
    region = input_data.get("region", "us-east-1")
    
    result = None
    
    # Dispatch on resource type and action
    if resource_type == "s3":
        client = boto3.client('s3', region_name=region)
        if action == "list":
            response = client.list_buckets()
            result = [bucket['Name'] for bucket in response['Buckets']]
        # ... other actions
    # ... other resource types
    
    # Fail loudly on unsupported combinations instead of raising NameError
    if result is None:
        print(json.dumps({"error": f"unsupported action '{action}' for resource type '{resource_type}'"}))
        sys.exit(1)
    
    # Return the result as JSON on stdout
    print(json.dumps({"result": result}))
 
if __name__ == "__main__":
    main()
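
Because the contract is JSON on stdin and JSON on stdout, you can exercise the script directly from a shell before building a container (assuming boto3 is installed locally and AWS credentials are available):

# Pipe a sample request straight into the implementation
echo '{"action": "list", "resource_type": "s3"}' | python src/main.py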

3. Define your tool schema

Update the tool definition in kubiya.yaml:

name: aws-resource-manager
description: Manage AWS resources like S3, EC2, and more
version: 1.0.0
 
# Docker image (will be built from Dockerfile)
image: kubiya/aws-resource-manager:latest
 
# Define the input/output schema
schema:
  input:
    type: object
    properties:
      action:
        type: string
        enum: ["list", "create", "delete"]
        description: "Action to perform on the resource"
      resource_type:
        type: string
        enum: ["s3", "ec2", "rds"]
        description: "Type of AWS resource to manage"
      region:
        type: string
        default: "us-east-1"
        description: "AWS region (default: us-east-1)"
    required: ["action", "resource_type"]
  output:
    type: object
    properties:
      result:
        description: "Operation result"
 
# Environment variables required by the tool
environment:
  - AWS_REGION
 
# Secrets required by the tool (will be injected at runtime)
secrets:
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
 
# Labels for organization
labels:
  service: aws
  type: infrastructure
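
The scaffold's Dockerfile ties the definition to the implementation. A minimal sketch (the generated file may differ; the base image and layout here are illustrative):

FROM python:3.11-slim
WORKDIR /app
 
# Install dependencies first so the layer caches between builds
COPY src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
 
COPY src/ .
 
# The container reads JSON on stdin and writes JSON to stdout
ENTRYPOINT ["python", "main.py"]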

4. Create test cases

Define sample inputs for testing:

{
  "action": "list",
  "resource_type": "s3",
  "region": "us-east-1"
}

And implement tests:

import unittest
import json
import sys
import os
from unittest.mock import patch, MagicMock
 
# Add the src directory to the path
sys.path.append(os.path.join(os.path.dirname(__file__), '../src'))
 
import main
 
class TestAwsResourceManager(unittest.TestCase):
    @patch('boto3.client')
    def test_list_s3_buckets(self, mock_client):
        # Mock the boto3 response
        mock_s3 = MagicMock()
        mock_s3.list_buckets.return_value = {
            'Buckets': [{'Name': 'bucket1'}, {'Name': 'bucket2'}]
        }
        mock_client.return_value = mock_s3
        
        # Test the function
        with patch('sys.stdin.read', return_value='{"action":"list","resource_type":"s3"}'):
            with patch('builtins.print') as mock_print:
                main.main()
                mock_print.assert_called_with(json.dumps({"result": ["bucket1", "bucket2"]}))
 
if __name__ == '__main__':
    unittest.main()

5. Test locally

Test your tool implementation before deploying:

# Run unit tests
cd aws-resource-manager
python -m unittest tests/test_tool.py
 
# Test with the CLI
kubiya tools test --local --dir ./ --input tests/sample_input.json

6. Build and deploy

Build the container and deploy the tool:

# Build the Docker image
docker build -t kubiya/aws-resource-manager:latest ./aws-resource-manager
 
# Push to registry (if needed)
docker push kubiya/aws-resource-manager:latest
 
# Apply the tool definition
kubiya tools apply --file ./aws-resource-manager/kubiya.yaml

7. Test the deployed tool

Verify that your tool works after deployment:

# Execute the tool
kubiya tools execute aws-resource-manager --args '{"action":"list","resource_type":"s3"}'

Python SDK Approach

For simpler tools, you can use the Python SDK with the convenient decorator syntax:

Python SDK Tool Example

from kubiya import tool
 
@tool
def aws_s3_list_buckets(region: str = "us-east-1"):
    """
    Lists all S3 buckets in the specified AWS region.
    
    Args:
        region: AWS region to check (default: us-east-1)
        
    Returns:
        List of S3 bucket names
    """
    import boto3
    client = boto3.client('s3', region_name=region)
    response = client.list_buckets()
    return [bucket['Name'] for bucket in response['Buckets']]
 
@tool
def aws_s3_create_bucket(
    bucket_name: str,
    region: str = "us-east-1",
    public: bool = False
):
    """
    Creates an S3 bucket with the specified name.
    
    Args:
        bucket_name: Name for the new bucket
        region: AWS region to create in (default: us-east-1)
        public: Whether the bucket should be public (default: False)
        
    Returns:
        Dict with creation status and bucket URL
    """
    import boto3
    client = boto3.client('s3', region_name=region)
    # Implementation...
    return {"status": "created", "url": f"https://{bucket_name}.s3.amazonaws.com"}

To bundle and deploy this tool:

# Bundle the tool definition
kubiya tools bundle aws_tools.py --output aws-s3-tools.yaml
 
# Apply the definition
kubiya tools apply --file aws-s3-tools.yaml

OpenAPI Import

You can turn any REST API with an OpenAPI specification into a tool:

# Import an OpenAPI spec
kubiya tools import-openapi \
  --file petstore-openapi.yaml \
  --name pet-store-api \
  --description "Manage pets in the pet store" \
  --base-url https://api.petstore.example \
  --headers "Authorization=Bearer $API_KEY"
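
After the import, it is worth inspecting the generated definition before wiring it into an agent (output flags as listed under Common Options below):

# Review the generated tool definition
kubiya tools get pet-store-api --output yaml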

Tool Integration with CI/CD

Example GitLab CI pipeline for tool development:

GitLab CI Example

stages:
  - validate
  - test
  - build
  - deploy
 
variables:
  DOCKER_REGISTRY: registry.example.com
  TOOL_NAME: aws-resource-manager
  IMAGE_TAG: $DOCKER_REGISTRY/$TOOL_NAME:$CI_COMMIT_SHORT_SHA
 
validate:
  stage: validate
  script:
    - kubiya tools validate --file kubiya.yaml
 
unit-tests:
  stage: test
  script:
    - cd tests
    - python -m unittest test_tool.py
 
build-image:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
    # Update the image reference in kubiya.yaml
    - sed -i "s|image:.*|image: $IMAGE_TAG|g" kubiya.yaml
  # Hand the rewritten kubiya.yaml to the deploy stage; without this,
  # deploy-tool would apply the unmodified file from the repository
  artifacts:
    paths:
      - kubiya.yaml
 
deploy-tool:
  stage: deploy
  script:
    - kubiya tools apply --file kubiya.yaml
  only:
    - main

Tool Sources for Development

For creating and managing tools in a structured, source-controlled way, we recommend using Tool Sources.

Tool Sources allow you to manage agentic tools predictably, with deep integration with version control systems. You can host your tools securely in Git repositories, and the tool manager will automatically sync with them.

# Add a Git repository as a tool source
kubiya source add --name "infra-tools" --url "https://github.com/org/infra-tools"
 
# List all tool sources
kubiya source list
 
# Sync changes from a tool source
kubiya source sync infra-tools
 
# Get details about a source
kubiya source get infra-tools
 
# Remove a tool source
kubiya source remove infra-tools

Common Options

These options apply to most tools commands:

Option              Description
--file, -f          Path to a YAML file with a tool definition
--dir, -d           Directory containing tool files
--output, -o        Output format: json, yaml, table (default: table)
--output-file       Save output to a file instead of stdout
--label             Filter or set labels (key=value format)
--interactive, -i   Interactive mode with prompts
--debug             Enable debug mode for additional information
--local             For testing: use local files without deploying
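
These options compose as you would expect. For example:

# List custom tools as JSON and write the result to a file
kubiya tools list --custom --output json --output-file tools.json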

Explore our community tools repository on GitHub for examples and ready-to-use tools across various domains.