Cognitive memory enables your organization to store and recall contextual knowledge using AI-powered semantic search. Store runbooks, configurations, best practices, and any organizational knowledge in datasets, then retrieve relevant information using natural language queries.

Quick Start

# Create a dataset for your knowledge
kubiya memory dataset create \
  --name "production-runbooks" \
  --scope org \
  --description "Production operational runbooks"
Output:
✓ Dataset created successfully
  ID: 12229faa-89e2-5a54-a451-4971b2f04b37
  Name: production-runbooks
  Scope: org
# Store a memory
kubiya memory store \
  --title "AWS Production Setup" \
  --content "Region: us-east-1, VPC: vpc-0a1b2c3d4e5f, Subnets: subnet-abc, subnet-def" \
  --dataset-id 12229faa-89e2-5a54-a451-4971b2f04b37 \
  --tags aws,production,infrastructure
Output:
✓ Memory stored successfully
  Memory ID: mem_org123_user456_1734567890
  Status: processing
# Recall memories using semantic search
kubiya memory recall "AWS configuration" --tags production
Output:
🔍 Memory Recall Results (2 matches)

1. AWS Production Setup (score: 0.95)
   Memory ID: mem_org123_user456_1734567890
   Tags: aws, production, infrastructure
   Created: 2024-12-15

   Region: us-east-1, VPC: vpc-0a1b2c3d4e5f...

2. AWS Network Configuration (score: 0.87)
   ...

Datasets

Datasets are containers for organizing memories with different access scopes. Each memory must be stored in a dataset.

Dataset Scopes

  • user: Private to your user account
  • org: Shared across your entire organization
  • role: Accessible to specific roles (requires --allowed-roles)

Create Dataset

# Organization-wide dataset
kubiya memory dataset create \
  --name "team-knowledge" \
  --scope org \
  --description "Shared team knowledge base"

# User-private dataset
kubiya memory dataset create \
  --name "personal-notes" \
  --scope user

# Role-based dataset
kubiya memory dataset create \
  --name "ops-runbooks" \
  --scope role \
  --allowed-roles devops,sre

List Datasets

kubiya memory dataset list
Output:
📁 Datasets (3)

NAME                 ID                             SCOPE    CREATED
team-knowledge       abc123-def456...               org      2024-12-10
personal-notes       xyz789-uvw012...               user     2024-12-12
ops-runbooks         mno345-pqr678...               role     2024-12-13
# JSON output
kubiya memory dataset list --output json
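In scripts you often need a dataset's ID rather than its name. A minimal sketch using jq, assuming the JSON output is an array of objects with `name` and `id` fields (the field names are an assumption about the payload shape):

```shell
# Resolve a dataset name to its ID from JSON on stdin.
# Assumes `kubiya memory dataset list --output json` emits an array of
# objects with "name" and "id" fields (field names are an assumption).
dataset_id_by_name() {
  jq -r --arg n "$1" '.[] | select(.name == $n) | .id'
}

# Usage:
#   kubiya memory dataset list --output json | dataset_id_by_name "team-knowledge"
```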

Get Dataset Details

kubiya memory dataset get abc123-def456-789
Output:
📁 Dataset Details

  ID: abc123-def456-789
  Name: team-knowledge
  Scope: org
  Description: Shared team knowledge base
  Created: 2024-12-10 15:30:00

Get Dataset Data

View all data entries in a dataset:
kubiya memory dataset get-data abc123-def456-789

Delete Dataset

Deleting a dataset removes all associated memories. This action cannot be undone.
kubiya memory dataset delete abc123-def456-789

Memory Operations

Store Memory

Store contextual knowledge with semantic embeddings for later retrieval.
# Basic memory storage
kubiya memory store \
  --title "Database Connection String" \
  --content "Production PostgreSQL: postgres://prod-db.example.com:5432/mydb" \
  --dataset-id abc123-def456-789 \
  --tags database,production,postgresql
# Store from file
kubiya memory store \
  --title "Kubernetes Deployment Guide" \
  --content-file ./docs/k8s-deployment.md \
  --dataset-id abc123-def456-789 \
  --tags kubernetes,deployment,production
# Store with structured metadata
kubiya memory store \
  --title "API Configuration" \
  --content "API endpoint: https://api.example.com" \
  --dataset-id abc123-def456-789 \
  --tags api,configuration \
  --metadata-json '{"env":"production","version":"2.0","owner":"platform-team"}'
Supported Flags:
  • --title (required) - Descriptive title for the memory
  • --content - Direct content input (or use --content-file)
  • --content-file - Read content from a file
  • --dataset-id (required) - Target dataset identifier
  • --tags - Comma-separated tags for categorization
  • --metadata-json - Additional structured metadata as JSON
  • --output - Output format (text, json, yaml)

Recall Memories

Search stored memories using natural language queries with semantic understanding.
# Simple recall
kubiya memory recall "database configuration"
# Recall with filters and precision control
kubiya memory recall "kubernetes deployment" \
  --tags production,kubernetes \
  --top-k 5 \
  --min-score 0.7
# JSON output for automation
kubiya memory recall "incident response" \
  --tags critical \
  --output json
Query Tips:
  • Use natural language descriptions
  • Be specific: “production database failover” vs “database”
  • Combine with tags for precision
  • Adjust --min-score to filter by relevance (0.0-1.0)
  • Use --top-k to limit results (default: 10)
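The iteration advice above can be automated: start strict and relax the threshold until something comes back. A sketch, assuming empty JSON results render as `[]` (an assumption about the CLI's JSON output):

```shell
# Progressively relax --min-score until a query returns results.
# Assumes empty JSON results render as "[]" (an assumption).
KUBIYA=${KUBIYA:-kubiya}   # override to stub the CLI in dry runs

recall_with_fallback() {
  local query="$1" res min
  for min in 0.8 0.6 0.4; do
    res=$($KUBIYA memory recall "$query" --min-score "$min" --output json)
    if [ -n "$res" ] && [ "$res" != "[]" ]; then
      printf '%s\n' "$res"
      return 0
    fi
  done
  return 1
}
```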

List Memories

View all stored memories:
# List all memories
kubiya memory list
Output:
🧠 Memories (5)

TITLE                          MEMORY ID                    TAGS                    CREATED
Database Connection String     mem_org123_user456_...       database,production     2024-12-15
Kubernetes Deployment Guide    mem_org123_user456_...       kubernetes,deployment   2024-12-14
API Configuration              mem_org123_user456_...       api,configuration       2024-12-13
# JSON output
kubiya memory list --output json

# YAML output
kubiya memory list --output yaml

Check Job Status

Some memory operations are asynchronous. Check their status:
kubiya memory status job_abc123def456
Output:
⚙️  Memory Job Status

  Job ID: job_abc123def456
  Status: completed
  Progress: 100.0%
  Completed: 2024-12-15 14:30:00
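In automation it is handy to block until an async job finishes. A minimal polling sketch, assuming `--output json` exposes a top-level `status` field (an assumption about the JSON shape); the field is pulled with `sed` to avoid extra dependencies, though `jq -r '.status'` is more robust if jq is available:

```shell
# Poll an asynchronous memory job until it completes or fails.
# Assumes the JSON output carries a top-level "status" field (assumption).
KUBIYA=${KUBIYA:-kubiya}   # override to stub the CLI in dry runs

job_status() {
  $KUBIYA memory status "$1" --output json |
    sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

wait_for_job() {
  local job_id="$1" status i
  for i in $(seq 1 60); do           # give up after ~5 minutes
    status=$(job_status "$job_id")
    case "$status" in
      completed) return 0 ;;
      failed)    return 1 ;;
      *)         sleep 5 ;;
    esac
  done
  return 1
}
```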

Output Formats

All memory commands support multiple output formats for different use cases:
FORMAT           DESCRIPTION                                 USE CASE
text (default)   Human-readable with colors and formatting   Interactive CLI usage
json             Machine-readable JSON                       Automation, scripts, CI/CD
yaml             YAML format                                 Configuration management
table            Tabular format                              List commands (automatic)
# Text output (default)
kubiya memory dataset list

# JSON for scripting
kubiya memory dataset list --output json | jq '.[] | .name'

# YAML for configs
kubiya memory recall "config" --output yaml > memory.yaml

Best Practices

Dataset Organization

Scope Strategy:
  • Use org scope for shared team knowledge (runbooks, documentation)
  • Use user scope for personal notes and drafts
  • Use role scope for sensitive information (credentials, SRE procedures)
Naming Conventions:
# Good: Descriptive, specific names
production-runbooks
api-documentation
incident-response-procedures

# Avoid: Generic, unclear names
data
stuff
notes
Tagging Taxonomy: Establish consistent tags across your organization:
# Environment tags
production, staging, development

# Component tags
database, api, frontend, infrastructure

# Team tags
backend-team, devops-team, data-team

# Priority tags
critical, important, nice-to-have

Memory Storage

Craft Effective Titles:
# Good: Specific and searchable
"PostgreSQL Production Failover Procedure"
"API Rate Limiting Configuration"
"Kubernetes Node Scaling Policy"

# Avoid: Vague or generic
"Database Stuff"
"Config"
"Notes"
Provide Rich Context:
# Good: Detailed, actionable content
kubiya memory store \
  --title "Database Backup Procedure" \
  --content "1. Stop application writes
2. Run pg_dump with --no-owner flag
3. Upload to S3 bucket: s3://backups/prod/
4. Verify backup integrity with pg_restore --list
5. Resume application writes" \
  --dataset-id <id> \
  --tags database,backup,postgresql,production

# Avoid: Minimal context
kubiya memory store \
  --title "Backup" \
  --content "Use pg_dump" \
  --dataset-id <id>
Use Multiple Tags:
# Multiple tags improve discoverability
--tags database,postgresql,production,backup,critical
Structure Metadata:
# Add searchable structured data
--metadata-json '{
  "environment": "production",
  "owner": "platform-team",
  "last_updated": "2024-12-15",
  "version": "2.0",
  "severity": "critical"
}'
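Hand-writing JSON inside a flag invites quoting mistakes. One way to build the payload safely is with `jq -n`; a sketch assuming jq is available, where the keys mirror the example above and are not a required schema:

```shell
# Build a --metadata-json payload with jq so values are safely escaped.
# The keys are illustrative, not a required schema.
build_metadata() {
  jq -n \
    --arg env   "$1" \
    --arg owner "$2" \
    --arg date  "$(date +%Y-%m-%d)" \
    '{environment: $env, owner: $owner, last_updated: $date}'
}

# Usage:
#   kubiya memory store ... --metadata-json "$(build_metadata production platform-team)"
```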
Query Clarity:
# Good: Descriptive natural language
kubiya memory recall "how to perform database failover in production"
kubiya memory recall "steps for kubernetes node replacement"

# Less effective: Keyword stuffing
kubiya memory recall "database failover production steps procedure"
Combine Filters:
# Precision through filters
kubiya memory recall "deployment issues" \
  --tags production,kubernetes \
  --min-score 0.8 \
  --top-k 3
Iterate on Queries:
  1. Start broad: "deployment"
  2. Add specificity: "kubernetes deployment"
  3. Add filters: --tags production
  4. Adjust threshold: --min-score 0.7

Use Cases

Runbook Storage

Store operational procedures and incident response playbooks:
# Create runbook dataset
kubiya memory dataset create \
  --name "incident-runbooks" \
  --scope org \
  --description "Incident response and operational procedures"

# Store runbook from file
kubiya memory store \
  --title "Database Failover Procedure" \
  --content-file ./runbooks/db-failover.md \
  --dataset-id <runbook-dataset-id> \
  --tags incident-response,database,critical,postgresql \
  --metadata-json '{"severity":"high","owner":"dba-team"}'

# Recall during incident
kubiya memory recall "database is down how to failover" \
  --tags critical,database \
  --top-k 3
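When runbooks already live in a directory of Markdown files, the store step can be looped. A sketch, assuming only the documented `memory store` flags; the directory layout and title derivation are illustrative:

```shell
# Bulk-load a directory of Markdown runbooks into a dataset.
# Directory layout and title derivation are illustrative assumptions.
KUBIYA=${KUBIYA:-kubiya}   # override to stub the CLI in dry runs

store_runbooks() {
  local dir="$1" dataset_id="$2" f title
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue
    # Derive a human-readable title from the filename:
    # "db-failover.md" -> "db failover"
    title=$(basename "$f" .md | tr '-' ' ')
    $KUBIYA memory store \
      --title "$title" \
      --content-file "$f" \
      --dataset-id "$dataset_id" \
      --tags incident-response,runbook
  done
}

# Usage:
#   store_runbooks ./runbooks <runbook-dataset-id>
```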

Configuration Management

Centralize configuration documentation:
# Store infrastructure configs
kubiya memory store \
  --title "Production AWS Configuration" \
  --content "Region: us-east-1
VPC: vpc-0a1b2c3d4e5f
Subnets: subnet-123 (private), subnet-456 (public)
NAT Gateway: nat-789
Load Balancer: alb-prod-001" \
  --dataset-id <config-dataset-id> \
  --tags aws,production,infrastructure,networking \
  --metadata-json '{"environment":"production","region":"us-east-1"}'

# Retrieve when needed
kubiya memory recall "AWS VPC configuration" --tags production

Knowledge Sharing

Build a team knowledge base:
# Document best practices
kubiya memory store \
  --title "Deployment Best Practices" \
  --content "1. Always run tests in CI before deploying
2. Use blue-green deployments for zero downtime
3. Tag all releases with semantic versioning
4. Monitor error rates for 10 minutes post-deploy
5. Keep deployment size small and frequent" \
  --dataset-id <team-knowledge-id> \
  --tags best-practices,deployment,ci-cd

# Document troubleshooting steps
kubiya memory store \
  --title "Debugging High API Latency" \
  --content-file ./docs/api-latency-debug.md \
  --dataset-id <team-knowledge-id> \
  --tags troubleshooting,api,performance

Onboarding Documentation

Create searchable onboarding materials:
# Store onboarding guides
kubiya memory store \
  --title "Setting Up Development Environment" \
  --content-file ./docs/dev-setup.md \
  --dataset-id <onboarding-dataset-id> \
  --tags onboarding,development,getting-started

# New team members can search
kubiya memory recall "how to setup development environment"

Integration with Agents

Cognitive memory enhances agent capabilities by providing contextual knowledge.
Agents can automatically access organization-wide datasets to recall relevant information when executing tasks. Configure your agents to use memory for more intelligent automation.
# Agents can use stored knowledge
# Example: Agent recalls deployment procedures before executing deployment
kubiya exec "deploy the api to production" --agent production-agent

# The agent automatically recalls relevant memories:
# - Deployment best practices
# - Production configuration
# - Rollback procedures

Command Reference

memory store

Store new contextual memory with semantic embeddings. Syntax:
kubiya memory store [flags]
Required Flags:
  • --title - Memory title (descriptive and searchable)
  • --dataset-id - Target dataset identifier
  • --content OR --content-file - Memory content
Optional Flags:
  • --tags - Comma-separated tags for categorization
  • --metadata-json - Additional structured metadata as JSON
  • --output - Output format: text, json, yaml

memory recall

Search memories using semantic understanding. Syntax:
kubiya memory recall <query> [flags]
kubiya memory recall --query <query> [flags]
Arguments:
  • query - Natural language search query (positional or --query flag)
Optional Flags:
  • --tags - Filter results by tags (comma-separated)
  • --top-k - Number of results to return (default: 10)
  • --min-score - Minimum similarity score: 0.0-1.0 (default: 0.0)
  • --output - Output format: text, json, yaml
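Beyond `--min-score`, results can also be post-filtered in a pipeline. A sketch, assuming each result object in the JSON output carries a numeric `score` field (an assumption about the JSON shape):

```shell
# Filter JSON recall results down to strong matches only.
# Assumes each result object has a numeric "score" field (assumption).
strong_matches() {
  jq --argjson min "$1" '[.[] | select(.score >= $min)]'
}

# Usage:
#   kubiya memory recall "incident response" --output json | strong_matches 0.8
```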

memory list

List all stored memories. Syntax:
kubiya memory list [flags]
Optional Flags:
  • --output - Output format: text, json, yaml, table

memory status

Check the status of an asynchronous memory processing job. Syntax:
kubiya memory status <job-id> [flags]
Arguments:
  • job-id - Job identifier (returned from async operations)
Optional Flags:
  • --output - Output format: text, json, yaml

memory dataset create

Create a new dataset for organizing memories. Syntax:
kubiya memory dataset create [flags]
Required Flags:
  • --name - Dataset name (descriptive and unique)
  • --scope - Access scope: user, org, or role
Optional Flags:
  • --description - Dataset description
  • --allowed-roles - Comma-separated roles (required if scope is role)
  • --output - Output format: text, json, yaml
Examples:
# Organization dataset
kubiya memory dataset create --name "team-docs" --scope org

# Role-based dataset
kubiya memory dataset create \
  --name "sre-runbooks" \
  --scope role \
  --allowed-roles sre,devops

memory dataset list

List all accessible datasets. Syntax:
kubiya memory dataset list [flags]
Optional Flags:
  • --output - Output format: text, json, yaml, table

memory dataset get

Get detailed information about a specific dataset. Syntax:
kubiya memory dataset get <dataset-id> [flags]
Arguments:
  • dataset-id - Dataset identifier
Optional Flags:
  • --output - Output format: text, json, yaml

memory dataset delete

Delete a dataset and all its associated memories. Syntax:
kubiya memory dataset delete <dataset-id>
Arguments:
  • dataset-id - Dataset identifier
This action is irreversible. All memories in the dataset will be permanently deleted.

memory dataset get-data

Retrieve all data entries from a dataset. Syntax:
kubiya memory dataset get-data <dataset-id> [flags]
Arguments:
  • dataset-id - Dataset identifier
Optional Flags:
  • --output - Output format: text, json, yaml

Next Steps