Full-Stack Agent Quickstart
Get a working AI agent up and running in minutes. This guide will walk you through creating an agent that can understand natural language and execute workflows.
Prerequisites
Python 3.8+
Kubiya API key (get one at app.kubiya.ai)
LLM provider API key (OpenAI, Anthropic, or Together AI)
Step 1: Install the SDK
pip install kubiya-workflow-sdk
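To confirm the install, you can query the installed package metadata from Python (a minimal sketch; it checks the distribution name used by pip rather than importing any specific module):

# verify_install.py - sanity-check that the SDK package is installed
from importlib.metadata import PackageNotFoundError, version

try:
    print("kubiya-workflow-sdk", version("kubiya-workflow-sdk"))
except PackageNotFoundError:
    raise SystemExit("Not installed - run: pip install kubiya-workflow-sdk")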
Step 2: Set Up Environment
# Required: Kubiya API key
export KUBIYA_API_KEY="your-kubiya-api-key"

# Choose one LLM provider:
export OPENAI_API_KEY="your-openai-key"        # For OpenAI
# OR
export ANTHROPIC_API_KEY="your-anthropic-key"  # For Anthropic
# OR
export TOGETHER_API_KEY="your-together-key"    # For Together AI
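Before starting the server, you can sanity-check the environment from Python (a minimal sketch; it only verifies that KUBIYA_API_KEY plus at least one provider key is set, not that the keys are valid):

# check_env.py - confirm the required environment variables are set
import os

providers = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "TOGETHER_API_KEY"]

if not os.environ.get("KUBIYA_API_KEY"):
    raise SystemExit("KUBIYA_API_KEY is not set")
if not any(os.environ.get(name) for name in providers):
    raise SystemExit("Set at least one of: " + ", ".join(providers))

print("Environment looks good.")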
Step 3: Start the Agent Server
Run the command for the provider you configured in Step 2:

OpenAI (GPT-4)
kubiya mcp agent --provider openai --model gpt-4o --port 8765

Anthropic (Claude)
kubiya mcp agent --provider anthropic --model claude-3-5-sonnet-20241022 --port 8765

Together AI (DeepSeek)
kubiya mcp agent --provider together --model deepseek-ai/DeepSeek-V3 --port 8765
You should see:
╭───────────── Starting Agent Server ─────────────╮
│ Kubiya MCP Agent Server │
│ │
│ Provider: openai │
│ Model: gpt-4o │
│ Endpoint: http://0.0.0.0:8765 │
│ Kubiya API: ✅ Configured │
╰─────────────────────────────────────────────────╯
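Once the banner appears, you can confirm the server is reachable from Python (a minimal sketch using the requests library and the /health endpoint mentioned in Troubleshooting below):

# health_check.py - confirm the agent server is up (pip install requests)
import requests

resp = requests.get("http://localhost:8765/health", timeout=5)
print(resp.status_code, resp.text)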
Step 4: Test Your Agent
Simple Test
curl -X POST http://localhost:8765/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "Create a simple hello world workflow"}
]
}'
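The same request from Python, if you prefer it to curl (a minimal sketch using the requests library; the parsing assumes an OpenAI-style chat-completions response body, so adjust it if your server returns a different shape):

# simple_test.py - non-streaming chat request (pip install requests)
import requests

resp = requests.post(
    "http://localhost:8765/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Create a simple hello world workflow"}]},
    timeout=120,
)
resp.raise_for_status()
data = resp.json()
# Assumes an OpenAI-style response; falls back to printing the raw body if the shape differs.
print(data.get("choices", [{}])[0].get("message", {}).get("content", data))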
Streaming Test
curl -X POST http://localhost:8765/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "Create and execute a workflow that counts to 5"}
],
"stream": true
}'
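And the streaming variant in Python, mirroring the SSE parsing the browser UI below uses (data: lines, a [DONE] sentinel, and content deltas under choices[0].delta):

# streaming_test.py - stream tokens from the agent (pip install requests)
import json
import requests

with requests.post(
    "http://localhost:8765/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Create and execute a workflow that counts to 5"}],
        "stream": True,
    },
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        try:
            chunk = json.loads(payload)
        except json.JSONDecodeError:
            continue  # skip partial or non-JSON lines
        delta = chunk.get("choices", [{}])[0].get("delta", {}).get("content")
        if delta:
            print(delta, end="", flush=True)
    print()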
Step 5: Build a Simple UI
Create agent-ui.html:
<!DOCTYPE html>
<html>
<head>
  <title>Kubiya Agent</title>
  <style>
    body { font-family: Arial, sans-serif; max-width: 800px; margin: 0 auto; padding: 20px; }
    #messages { border: 1px solid #ccc; height: 400px; overflow-y: auto; padding: 10px; margin-bottom: 10px; }
    .message { margin: 10px 0; padding: 10px; border-radius: 5px; }
    .user { background: #e3f2fd; text-align: right; }
    .assistant { background: #f5f5f5; }
    .error { background: #ffebee; color: #c62828; }
    #input-form { display: flex; gap: 10px; }
    #user-input { flex: 1; padding: 10px; }
    button { padding: 10px 20px; background: #1976d2; color: white; border: none; border-radius: 5px; cursor: pointer; }
    button:disabled { background: #ccc; }
  </style>
</head>
<body>
  <h1>Kubiya AI Agent</h1>
  <div id="messages"></div>
  <form id="input-form">
    <input type="text" id="user-input" placeholder="Describe a workflow to create..." />
    <button type="submit" id="send-btn">Send</button>
  </form>
  <script>
    const messagesDiv = document.getElementById('messages');
    const userInput = document.getElementById('user-input');
    const sendBtn = document.getElementById('send-btn');
    const form = document.getElementById('input-form');

    // Append a chat bubble and keep the newest message in view.
    function addMessage(content, role) {
      const messageDiv = document.createElement('div');
      messageDiv.className = `message ${role}`;
      messageDiv.textContent = content;
      messagesDiv.appendChild(messageDiv);
      messagesDiv.scrollTop = messagesDiv.scrollHeight;
    }

    async function sendMessage() {
      const message = userInput.value.trim();
      if (!message) return;

      addMessage(message, 'user');
      userInput.value = '';
      sendBtn.disabled = true;

      try {
        const response = await fetch('http://localhost:8765/v1/chat/completions', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            messages: [{ role: 'user', content: message }],
            stream: true
          })
        });

        // Read the SSE stream and accumulate the assistant's reply.
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let assistantMessage = '';

        while (true) {
          const { done, value } = await reader.read();
          if (done) break;

          const chunk = decoder.decode(value);
          const lines = chunk.split('\n');

          for (const line of lines) {
            if (line.startsWith('data: ')) {
              const data = line.slice(6);
              if (data === '[DONE]') continue;
              try {
                const json = JSON.parse(data);
                if (json.choices?.[0]?.delta?.content) {
                  assistantMessage += json.choices[0].delta.content;
                }
              } catch (e) {
                // Skip invalid JSON
              }
            }
          }
        }

        if (assistantMessage) {
          addMessage(assistantMessage, 'assistant');
        }
      } catch (error) {
        addMessage(`Error: ${error.message}`, 'error');
      } finally {
        sendBtn.disabled = false;
      }
    }

    form.addEventListener('submit', (e) => {
      e.preventDefault();
      sendMessage();
    });
  </script>
</body>
</html>
Open the file in your browser and start chatting with your agent!
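If your browser refuses the fetch call when the page is opened straight from disk (this depends on the browser and on the agent server's CORS configuration, which this guide does not cover), serving the file over HTTP usually helps. A minimal sketch using Python's built-in static server, run from the directory containing agent-ui.html:

# serve_ui.py - serve agent-ui.html at http://localhost:8000
import http.server
import socketserver

with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
    print("Serving on http://localhost:8000/agent-ui.html")
    httpd.serve_forever()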
Example Conversations
Basic Workflow
You: Create a workflow that prints system information
Agent: I'll create a workflow that displays system information for you...
[Creates and executes workflow showing OS, CPU, memory info]
Docker-Based Workflow
You: Use Python to analyze a CSV file and create a chart
Agent: I'll create a data analysis workflow using Python and pandas...
[Creates workflow with python:3.11-slim, installs dependencies, processes data]
Complex Automation
You: Set up a CI/CD pipeline that builds, tests, and deploys my app
Agent: I'll create a comprehensive CI/CD workflow for you...
[Creates multi-stage workflow with parallel testing and deployment]
What’s Next?
Explore Advanced Features
Troubleshooting
Agent server won't start
Check that all required environment variables are set
Verify your API keys are valid
Ensure port 8765 is not already in use
Check Python version (3.8+ required)
Workflows fail to execute
Verify your Kubiya API key is valid
Check that runners are available: curl http://localhost:8765/health
Ensure you have permissions to execute workflows
Check the agent server logs for detailed errors
The agent responds with LLM errors
Verify the LLM provider API key is correct
Check your internet connection
Try a different model or provider
Look for rate limiting errors in logs
Get Help