As large language models (LLMs) and harnesses like OpenCode and Claude Code become increasingly capable, we see more users kicking off sandboxed agents in response to chat messages, Kanban updates, vibe coding UIs, terminal sessions, GitHub comments, and more.
The sandbox is an important step beyond simple containers, because it gives you a few things:
Security: Any untrusted end user (or a rogue LLM) can run in the sandbox and not compromise the host machine or other sandboxes running alongside it. This is traditionally (but not always) accomplished with a microVM.
Speed: An end user should be able to pick up a new sandbox quickly and restore the state from a previously used one quickly.
Control: The trusted platform needs to be able to take actions within the untrusted domain of the sandbox. This might mean mounting files into the sandbox, controlling which requests reach it, or executing specific commands.
Today, we’re excited to add another key component of control to our Sandboxes and all Containers: outbound Workers. These are programmatic egress proxies that allow users running sandboxes to easily connect to different services, add observability, and, importantly for agents, add flexible and safe authentication.
Here’s a quick look at adding a secret key to a header using an outbound Worker:
class OpenCodeInABox extends Sandbox {
  static outboundByHost = {
    "github.com": (request, env, ctx) => {
      const headersWithAuth = new Headers(request.headers);
      headersWithAuth.set("x-auth-token", env.SECRET);
      return fetch(request, { headers: headersWithAuth });
    }
  }
}
Any time code running in the sandbox makes a request to “github.com”, the request is proxied through the handler. This allows you to do anything you want on each request, including logging, modifying, or cancelling it. In this case, we’re safely injecting a secret (more on this later). The proxy runs on the same machine as any sandbox, has access to distributed state, and can be easily modified with simple JavaScript.
We’re excited about all the possibilities this adds to Sandboxes, especially around authentication for agents. Before going into details, let’s back up and take a quick tour of traditional forms of auth, and why we think there’s something better.
Common auth for agentic workloads
The core issue with agentic auth is that we can’t fully trust the workload. While our LLMs aren’t nefarious (at least not yet), we still need to be able to apply protections to ensure they don’t use data inappropriately or take actions they shouldn’t.
A few common methods exist to provide auth to agents, and each has downsides:
Standard API tokens are the most basic method of authentication, typically injected into applications via environment variables or mounted secrets files. This is arguably the simplest method, but also the least secure. You have to trust that the sandbox won’t somehow be compromised or accidentally exfiltrate the token while making a request. Since you can’t fully trust the agent, you’ll need to set up token expiry and rotation, which can be a hassle.
Workload identity tokens, such as OIDC tokens, can solve some of this pain. Rather than granting the agent a token with general permissions, you can grant it a token that attests its identity. Now, rather than the agent having direct access to some service with a token, it can exchange an identity token for a very short-lived access token. The OIDC token can be invalidated after a specific agent’s workflow completes, and expiry is easier to manage. One of the biggest downsides of workload identity tokens is the potential inflexibility of integrations. Many services don’t have first-class support for OIDC, so in order to get working integrations with upstream services, platforms will need to roll their own token-exchanging services. This makes adoption difficult.
Custom proxies provide maximum flexibility, and can be paired with workload identity tokens. If you can pass some or all of your sandbox egress through a trusted piece of code, you can insert whatever rules you need. Maybe the upstream service your agent is communicating with has a bad RBAC story, and it can’t provide granular permissions. No problem, just write the controls and permissions yourself! This is a great option for agents that you need to lock down with granular controls. However, how do you intercept all of a sandbox’s traffic? How do you set up a proxy that is dynamic and easily programmable? How do you proxy traffic efficiently? These aren’t easy problems to solve.
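To make the workload-identity flow above concrete, here’s a rough sketch of the exchange step, using the OAuth 2.0 token exchange grant (RFC 8693). The endpoint URL and audience below are hypothetical placeholders, not any particular provider’s API:

```javascript
// Build the body for an OAuth 2.0 token exchange request (RFC 8693).
// The audience value is a hypothetical placeholder.
function buildTokenExchangeBody(oidcToken, audience) {
  return new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: oidcToken,
    subject_token_type: "urn:ietf:params:oauth:token-type:id_token",
    audience,
  });
}

// Exchange the workload's identity token for a short-lived access token.
// "auth.example.com" stands in for whatever token-exchange service you run.
async function exchangeForAccessToken(oidcToken) {
  const res = await fetch("https://auth.example.com/token", {
    method: "POST",
    headers: { "content-type": "application/x-www-form-urlencoded" },
    body: buildTokenExchangeBody(oidcToken, "my-internal-vcs"),
  });
  const { access_token } = await res.json();
  return access_token;
}
```

Note that if the upstream service doesn’t support this exchange natively, the platform has to operate a service like this itself, which is exactly the adoption friction described above.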
With those imperfect methods in mind, what does an ideal auth mechanism look like?
Ideally, it is:
Zero trust. No token is ever granted to an untrusted user for any amount of time.
Simple. Easy to author. Doesn’t involve a complex system of minting, rotating, and decrypting tokens.
Flexible. We don’t rely on the upstream system to provide the granular access we need. We can apply whatever rules we want.
Identity-aware. We can identify the sandbox making the call and apply specific rules for it.
Observable. We can easily gather information about what calls are being made.
Performant. We aren’t round-tripping to a centralized or slow source of truth.
Transparent. The sandboxed workload doesn’t have to know about it. Things just work.
Dynamic. We can change rules on the fly.
We believe outbound Workers for Sandboxes fit the bill on all of these. Let’s see how.
Outbound Workers in practice
Basics: restriction and observability
First, we’ll look at a very basic example: logging requests and denying specific actions.
In this case, we’ll use the outbound function, which intercepts all outgoing HTTP requests from the sandbox. With a few lines of JavaScript, it’s easy to ensure only GET requests are made, logging and denying any disallowed method.
class MySandboxedApp extends Sandbox {
  static outbound = (req, env, ctx) => {
    // Deny any non-GET action and log
    if (req.method !== 'GET') {
      console.log(`Container making ${req.method} request to: ${req.url}`);
      return new Response('Not Allowed', { status: 405, statusText: 'Method Not Allowed' });
    }
    // Proceed if it is a GET request
    return fetch(req);
  };
}
This proxy runs on Workers, on the same machine as the sandboxed VM. Workers were built for quick response times, often sitting in front of cached CDN traffic, so the additional latency is extremely minimal.
Because this is running on Workers, we get observability out of the box. You can view logs and outbound requests in the Workers dashboard or export them to your application performance monitoring tool of choice.
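For instance, a handler can emit one structured log line per request before forwarding it. The record shape below is a minimal sketch of our own, not a required schema:

```javascript
// Build a structured log record for an outbound request.
// Field names here are illustrative choices, not a fixed format.
function logRecord(req, containerId) {
  const url = new URL(req.url);
  return {
    containerId,           // which sandbox made the call
    method: req.method,
    host: url.hostname,
    path: url.pathname,
    ts: new Date().toISOString(),
  };
}

// A sketch of an outbound handler that logs, then forwards, every request.
const outbound = (req, env, ctx) => {
  console.log(JSON.stringify(logRecord(req, ctx.containerId)));
  return fetch(req);
};
```

Anything written with console.log here shows up alongside your other Workers logs, so no extra plumbing is needed to get it into your monitoring pipeline.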
Zero trust credential injection
How would we use this to enforce a zero trust environment for our agent? Let’s imagine we want to make a request to a private GitHub instance, but we never want our LLM to access a private token.
We can use outboundByHost to define functions for specific domains or IPs. In this case, we’ll inject a protected credential if the domain is “my-internal-vcs.dev”. The sandboxed agent never has access to these credentials.
class OpenCodeInABox extends Sandbox {
  static outboundByHost = {
    "my-internal-vcs.dev": (request, env, ctx) => {
      const headersWithAuth = new Headers(request.headers);
      headersWithAuth.set("x-auth-token", env.SECRET);
      return fetch(request, { headers: headersWithAuth });
    }
  }
}
It is also easy to conditionalize the response based on the identity of the container. You don’t have to inject the same tokens for every sandbox instance.
static outboundByHost = {
  "my-internal-vcs.dev": async (request, env, ctx) => {
    // note: KV is encrypted at rest and in transit
    const authKey = await env.KEYS.get(ctx.containerId);
    const requestWithAuth = new Request(request);
    requestWithAuth.headers.set("x-auth-token", authKey);
    return fetch(requestWithAuth);
  }
}
As you may have noticed in the last example, another major advantage of using outbound Workers is that it makes integration into the Workers ecosystem easier. Previously, if a user wanted to access R2, they would have to inject an R2 credential, then make a call from their container to the public R2 API. Same for KV, Agents, other Containers, other Worker services, etc.
Now, you just call any binding from your outbound Workers.
class MySandboxedApp extends Sandbox {
  static outboundByHost = {
    "my.kv": async (req, env, ctx) => {
      const key = keyFromReq(req);
      const myResult = await env.KV.get(key);
      return new Response(myResult);
    },
    "objects.cf": async (req, env, ctx) => {
      const prefix = ctx.containerId;
      const path = pathFromRequest(req);
      const object = await env.R2.get(`${prefix}/${path}`);
      return new Response(object?.body);
    },
  };
}
Rather than parsing tokens and setting up policies, we can easily conditionalize access with code and whatever logic we want. In the R2 example, we were also able to use the sandbox’s ID to further scope access with ease.
Networking control should also be dynamic. On many platforms, config for Container and VM networking is static, looking something like this:
{
  defaultEgress: "block",
  allowedDomains: ["github.com", "npmjs.org"]
}
This is better than nothing, but we can do better. For many sandboxes, we might want to apply a policy on start, but then override it with another once specific operations have been performed.
For instance, we can boot a sandbox, grab our dependencies from npm and GitHub, and then lock down egress. This ensures that the network stays open for as little time as possible.
To achieve this, we can use outboundHandlers, which allows us to define arbitrary outbound handlers that can be applied programmatically using the setOutboundHandler method. Each of these also takes params, allowing you to customize behavior from code. In this case, we will allow some hostnames with the custom “allowHosts” policy, then turn off HTTP.
class MySandboxedApp extends Sandbox {
  static outboundHandlers = {
    async allowHosts(req, env, { params }) {
      const url = new URL(req.url);
      const allowedHostname = params.allowedHostnames.includes(url.hostname);
      if (allowedHostname) {
        return await fetch(req);
      } else {
        return new Response(null, { status: 403, statusText: "Forbidden" });
      }
    },
    async noHttp(req) {
      return new Response(null, { status: 403, statusText: "Forbidden" });
    }
  }
}
async setUpSandboxes(req, env) {
  const sandbox = await env.SANDBOX.getByName(userId);
  await sandbox.setOutboundHandler("allowHosts", {
    allowedHostnames: ["github.com", "npmjs.org"]
  });
  await sandbox.gitClone(userRepoURL);
  await sandbox.exec("npm install");
  await sandbox.setOutboundHandler("noHttp");
}
This could be extended even further. Your agent might ask the end user a question like “Do you want to allow POST requests to cloudflare.com?” based on whatever tools it needs at that time. With dynamic outbound Workers, you can easily modify the sandbox rules on the fly to provide this level of control.
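A sketch of what the policy check behind such a prompt might look like. The approvals shape and the isApproved helper are hypothetical; the handler would be defined in outboundHandlers and activated with setOutboundHandler once the end user says yes:

```javascript
// Check a request against a list of user-granted approvals.
// The approvals shape ({ hostname, methods }) is our own invention.
function isApproved(req, approvals) {
  const { hostname } = new URL(req.url);
  return approvals.some(
    (a) => a.hostname === hostname && a.methods.includes(req.method)
  );
}

// An outboundHandlers-style entry: deny anything the user hasn't approved.
const approveRequests = async (req, env, { params }) => {
  if (!isApproved(req, params.approvals)) {
    return new Response(null, { status: 403, statusText: "Forbidden" });
  }
  return fetch(req);
};
```

Once the user confirms, the platform could call something like sandbox.setOutboundHandler("approveRequests", { approvals: [{ hostname: "cloudflare.com", methods: ["GET", "POST"] }] }) to put the new rule into effect immediately.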
TLS support with MITM Proxying
To do anything useful with requests beyond allowing or denying them, you need to have access to the content. This means that if you’re making HTTPS requests, they need to be decrypted by the Workers proxy.
To achieve this, a unique ephemeral certificate authority (CA) and private key are created for each Sandbox instance, and the CA is placed into the sandbox. By default, sandbox instances will trust this CA, while standard container instances can opt into trusting it, for instance by calling sudo update-ca-certificates.
export class MyContainer extends Container {
  interceptHttps = true;
}

MyContainer.outbound = (req, env, ctx) => {
  // All HTTP(S) requests will trigger this hook.
  return fetch(req);
};
TLS traffic is proxied by an isolated Cloudflare network process, which performs the TLS handshake itself. It mints a leaf certificate from the ephemeral, unique private key, using the SNI extracted from the ClientHello, and then invokes the configured Worker on the same machine to handle the HTTPS request.
The ephemeral private key and CA never leave our container runtime sidecar process, and are never shared across other container sidecar processes.
With this in place, outbound Workers act as a truly transparent proxy. The sandbox doesn't need any awareness of specific protocols or domains — all HTTP and HTTPS traffic flows through the outbound handler for filtering or modification.
To enable the functionality shown above in both Container and Sandbox, we added new methods to the ctx.container object: interceptOutboundHttp and interceptOutboundHttps. These intercept outgoing requests to specific hostnames (with basic glob matching) or IP ranges, or can be used to intercept all outbound requests. Each method is called with a WorkerEntrypoint, which is set up as the front door to the outbound Worker.
export class MyWorker extends WorkerEntrypoint {
  fetch() {
    return new Response(this.ctx.props.message);
  }
}

// ... inside your container DurableObject ...
this.ctx.container.start({ enableInternet: false });
const outboundWorker = this.ctx.exports.MyWorker({ props: { message: 'hello' } });
await this.ctx.container.interceptOutboundHttp('15.0.0.1:80', outboundWorker);
// From now on, all HTTP requests to 15.0.0.1:80 return "hello"
await this.waitForContainerToBeHealthy();

// You can decide to return another message now...
const secondOutboundWorker = this.ctx.exports.MyWorker({ props: { message: 'switcheroo' } });
await this.ctx.container.interceptOutboundHttp('15.0.0.1:80', secondOutboundWorker);
// all HTTP requests to 15.0.0.1 now show "switcheroo", even on connections that were
// open before this interceptOutboundHttp

// You can even set hostnames and CIDRs, for both IPv4 and IPv6
await this.ctx.container.interceptOutboundHttp('example.com', secondOutboundWorker);
await this.ctx.container.interceptOutboundHttp('*.example.com', secondOutboundWorker);
await this.ctx.container.interceptOutboundHttp('123.123.123.123/23', secondOutboundWorker);
All proxying to Workers happens locally on the same machine that runs the sandbox VM. Even though communication between container and Worker is “authless”, it is secure.
These methods can be called at any time, before or after starting the container, even while connections are still open. Connections that send multiple HTTP requests will automatically pick up a new entrypoint, so updating outbound Workers will not break existing TCP connections or interrupt HTTP requests.
Local development with wrangler dev also supports egress interception. To make this possible, we automatically spawn a sidecar process inside the local container’s network namespace, a component we call proxy-everything. Once attached, proxy-everything applies the appropriate TPROXY nftables rules, routing matching traffic from the local Container to workerd, Cloudflare’s open source JavaScript runtime, which runs the outbound Worker. This lets the local development experience mirror what happens in prod, so testing and development remain simple.
Giving outbound Workers a try
If you haven’t tried Cloudflare Sandboxes, check out the Getting Started guide. If you are a current user of Containers or Sandboxes, start using outbound Workers now by reading the documentation and upgrading to @cloudflare/containers@0.3.0 or @cloudflare/sandbox@0.8.9.