Kubernetes Crew
Put Your Kubernetes Cluster On Autopilot
Whether you're an expert Kubernetes operator or a newcomer, Kubernetes Crew can help you.
Work smarter, not harder. Detect, understand, and resolve Kubernetes issues before they start costing your org.
Rest easy knowing you always have someone manning the Kubernetes helm.
Maximize resource efficiency, minimize costs, and automate routine tasks—all with precision.
Get your AI Teammates up and running in minutes.
Before you begin, you'll need:
A Kubernetes cluster
A Kubiya runner (must be a local runner)
Slack, with the Kubiya Slack app installed
Select Kubernetes Crew and click Continue
Follow the on-screen instructions
If you haven't created a runner yet, no problem. In the Select Runner drop-down, choose Create a Runner and follow the on-screen instructions.
Click Save and Continue. Behind the scenes, this runs a Terraform plan.
If the plan is successful, you'll be brought to a screen showing a summary of the resources that will be created. To finish setup, click Delegate. This runs a Terraform apply.
Refresh the screen and check that the use case's status is Active. If so, the Terraform apply succeeded and your use case is ready to use.
By default, Kubiya local runners have access only to the kubiya namespace. For this use case, your local runner will need access to your cluster's other namespaces.
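One way to broaden the runner's access is a cluster-wide RBAC binding. This is a minimal sketch, not the official setup procedure: it assumes the runner runs under a service account named kubiya-runner in the kubiya namespace (substitute the names your runner actually uses) and binds it to Kubernetes' built-in read-only view ClusterRole.

```shell
# Sketch: grant the runner's service account read access across all
# namespaces by binding it to the built-in "view" ClusterRole.
# "kubiya-runner" is an assumed service-account name -- substitute yours.
kubectl create clusterrolebinding kubiya-runner-view \
  --clusterrole=view \
  --serviceaccount=kubiya:kubiya-runner

# Verify the binding took effect:
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:kubiya:kubiya-runner
```

If the Crew also needs to perform operations (restarting pods, editing resources), a broader role such as edit may be required; grant the narrowest role that covers your use.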
There are two ways to use the Kubernetes Crew:
Automatically, via webhooks
Proactively, with requests
By default, the Kubernetes Crew setup contains webhooks that detect events you should know about. Whenever one of those events occurs, the Kubernetes Crew will automatically look into it and update you in the Slack channel you designated during configuration.
At any time, you can also go to the Kubi Jr. app in Slack and send a message asking the Kubernetes Crew questions or to perform any Kubernetes operations.
For example, you can ask questions like:
Which pods are consuming the most CPU and memory in the last hour?
Are there any pods running with privileged containers?
Can you help me understand traffic routing to pods in the kubiya namespace?
Check the services defined in the kubiya namespace to see which pods they route to.
Can you validate all the CA certs within this cluster and let me know the expiration date?
Can you analyze the reason for the CrashLoopBackOff pod in the default namespace?
Can you send me the list of nodes that have events?
Can you enable a debug container for pod/agent-manager-5b85f7f6d8-n92sc?
Can you send me the list of all pods with more than 5 restarts across all namespaces?
Can you get me the list of pods where resource requests and limits are not defined?
Can you get me the list of incorrect configurations in this Kubernetes cluster?
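To get a feel for what the Crew is doing on your behalf, a few of the requests above map roughly onto kubectl queries. This is an illustrative sketch, not the Crew's actual implementation; it assumes kubectl is configured against your cluster and, for the first command, that metrics-server is installed.

```shell
# Which pods are consuming the most CPU? (requires metrics-server)
kubectl top pods --all-namespaces --sort-by=cpu

# Pods with more than 5 restarts, across all namespaces:
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.status.containerStatuses[0].restartCount}{"\n"}{end}' \
  | awk '$3 > 5'

# Pods running privileged containers:
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.spec.containers[*].securityContext.privileged}{"\n"}{end}' \
  | grep 'true$'
```

The advantage of asking the Crew in Slack is that it composes and interprets queries like these for you, so you don't need to remember jsonpath expressions.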
Congrats, you're ready to go! Now, go get to know your new teammates for Kubernetes operations 😃
For a full breakdown of setting it up, see our .
Make sure to . Otherwise, the Kubernetes Crew will not have access.