AI Agent Fleets
AI Agent Fleets is Okteto’s orchestration layer for running multiple AI agents inside isolated, ephemeral development environments. Each agent operates independently, with its own container, filesystem, logs, and access policies. This ensures safe, observable, and reproducible workflows for modern software development.
Whether you’re adding a new feature to your application, spinning up a new project, or exploring multiple features in parallel, AI Agent Fleets lets you move faster without compromising on safety or control.
Beta Notice: AI Agent Fleets is currently in beta. If you don’t yet have access, you can request it by visiting okteto.com/ai.
🧠 What Are AI Agent Fleets?
An AI Agent is a large language model (LLM)-powered assistant that can write, modify, and test code inside an Okteto development environment.
A Fleet is a set of one or more agents working independently on tasks such as:
- Adding features or fixing bugs
- Refactoring legacy services
- Bootstrapping new applications
- Running code experiments in parallel
Each agent runs inside its own Kubernetes namespace, backed by Okteto’s dev environment stack. This gives you:
- Full isolation: Agents don’t share state or files unless explicitly configured
- Production-like environment: Access to the same runtime, secrets, and configuration as your real dev setup
- Test-first execution: Agents run tests, validate changes, and generate logs before making pull requests
- No local dependencies: Nothing is run or installed on your machine
🚀 Getting Started
1. Enable AI Agent Fleets
To use AI Agent Fleets, make sure the feature is enabled in your organization. Your Okteto admin can enable it under the Admin > AI Agent Fleets section.
2. Start an Agent
Once enabled, you’ll see the Agents tab in your Okteto Dashboard.
To launch an agent:
- Choose whether you want to:
  - Work from an existing repository
  - Start a brand-new project
- Describe your task in natural language (e.g. “Add a health check endpoint to the movies API”); see the sketch after this list for the kind of change such a prompt asks for
- Hit Enter or click Launch Agent
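To make the flow concrete, here is the kind of change an agent might produce for the example prompt above. This is a hypothetical sketch assuming an Express-based Node.js service; the route name and app structure are illustrative assumptions, and the actual code an agent writes depends on your repository and the task you describe.

```typescript
// Hypothetical sketch: a health check endpoint an agent might add to an
// Express-based "movies" API. Nothing here is generated by Okteto itself.
import express, { Request, Response } from "express";

const app = express();

// Simple liveness endpoint reporting service status and uptime
app.get("/healthz", (_req: Request, res: Response) => {
  res.status(200).json({
    status: "ok",
    uptimeSeconds: process.uptime(),
  });
});

app.listen(8080, () => {
  console.log("movies API listening on port 8080");
});
```

In practice, the agent would add the route to your existing server, run the test suite, and surface the diff and logs for review before you accept the change or ask it to open a pull request.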
3. Wait for Provisioning
Okteto will provision a dev environment and start the agent. This takes about 30–60 seconds. During this time:
- A new Development Environment is spun up
- Dependencies are installed
- The agent prepares to begin work
You'll see status updates throughout the process.
4. Review Results
Once the agent completes:
- Preview the application at a live URL by clicking one of the endpoints listed on the right side of the page
- View logs, test results, and the full diff
- Accept or discard the changes
- Ask the agent to create a pull request
🔐 Security and Isolation
Each AI Agent runs in a sandboxed Namespace with limited access to only the resources it needs. All actions occur inside an ephemeral environment, with observability built in. This ensures safe experimentation and protects production environments.
🧪 Example Use Cases
- Spin up a new service from a prompt like: “Create a new TypeScript REST API with an /alive endpoint and a README”
- Refactor legacy code with: “Update this repo to use async/await instead of callbacks” (a sketch of this kind of refactor appears after this list)
- Add a feature in parallel to existing work: “Add a banner to the homepage announcing the beta launch”
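For illustration, here is a minimal before/after sketch of the callbacks-to-async/await refactor described above. The config file path and function names are hypothetical, and the code is not agent output; it simply shows the shape of change such a prompt requests.

```typescript
// Illustrative before/after for the callbacks-to-async/await use case.
// The config file path and function names are hypothetical examples.
import { readFile } from "fs";
import { readFile as readFileAsync } from "fs/promises";

// Before: callback style
function loadConfigWithCallback(
  cb: (err: Error | null, config?: unknown) => void
): void {
  readFile("config.json", "utf8", (err, data) => {
    if (err) return cb(err);
    cb(null, JSON.parse(data));
  });
}

// After: the async/await equivalent an agent might propose
async function loadConfig(): Promise<unknown> {
  const data = await readFileAsync("config.json", "utf8");
  return JSON.parse(data);
}
```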
📘 FAQs
Can I run multiple agents at once?
Yes! You can launch multiple agents in parallel, each working independently.
Do I need to install anything locally?
No. Okteto handles everything in the cloud. You don’t need Docker or Kubernetes installed.
Can I bring my own LLM key?
Yes. You can optionally provide your own Anthropic API key in the agent settings if you want to use your own quota or a different provider.
Do you plan on adding support for additional LLM providers?
Yes! We plan to add support for additional models based on your feedback in an upcoming release.
📣 Feedback
AI Agent Fleets is currently in beta. To request access, visit okteto.com/ai. We welcome your feedback as we continue to improve the experience.