
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/rss/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 06 May 2026 23:37:15 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building the agentic cloud: everything we launched during Agents Week 2026]]></title>
            <link>https://blog.cloudflare.com/agents-week-in-review/</link>
            <pubDate>Mon, 20 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Agents Week 2026 is a wrap. Let’s take a look at everything we announced, from compute and security to the agent toolbox, platform tools, and the emerging agentic web. Everything we shipped for the agentic cloud.
 ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today marks the end of our first Agents Week, an innovation week dedicated entirely to the age of agents. It couldn’t have been more timely: over the past year, agents have swiftly changed how people work. Coding agents are helping developers ship faster than ever. Support agents resolve tickets end-to-end. Research agents validate hypotheses across hundreds of sources in minutes. And people aren't just running one agent: they're running several in parallel and around the clock.</p><p>As Cloudflare's CTO Dane Knecht and VP of Product Rita Kozlov noted in our <a href="https://blog.cloudflare.com/welcome-to-agents-week/"><u>welcome to Agents Week post</u></a>, the potential scale of agents is staggering: If even a fraction of the world's knowledge workers each run a few agents in parallel, you need compute capacity for tens of millions of simultaneous sessions. The one-app-serves-many-users model the cloud was built on doesn't work for that. But that's exactly what developers and businesses want to do: build agents, deploy them to users, and run them at scale.</p><p>Getting there means solving problems across the entire stack. Agents need <b>compute</b> that scales from full operating systems to lightweight isolates. They need <b>security</b> and identity built into how they run.  They need an <b>agent toolbox</b>: the right models, tools, and context to do real work. All the code that agents generate needs a clear path from afternoon <b>prototype to production</b> app. And finally, as agents drive a growing share of Internet traffic, the web itself needs to adapt for the emerging <b>agentic web</b>. Turns out, the containerless, serverless compute platform we launched eight years ago with Workers was ready-made for this moment. 
Since then, we've grown it into a full platform, and this week we shipped the next wave of primitives purpose-built for agents, organized around exactly those problems.</p><p>We are here to create Cloud 2.0 — the agentic cloud. Infrastructure designed for a world where agents are a primary workload. </p><p>Here's a list of everything we announced this week — we wouldn’t want you to miss a thing.</p>
    <div>
      <h2>Compute</h2>
      <a href="#compute">
        
      </a>
    </div>
    <p>It starts with compute. Agents need somewhere to run, and somewhere to store and run the code they write. Not all agents need the same thing: some need a full operating system to install packages and run terminal commands, while most need something lightweight that starts in milliseconds and scales to millions. This week we shipped the environments to run them, as well as a new Git-compatible workspace for agents:</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/artifacts-git-for-agents-beta"><u>Artifacts: Versioned storage that speaks Git</u></a></p></td><td><p>Give your agents, developers, and automations a home for code and data. We’ve just launched Artifacts: Git-compatible versioned storage built for agents. Create tens of millions of repos, fork from any remote, and hand off a URL to any Git client.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/sandbox-ga/"><u>Agents have their own computers with Sandboxes GA</u></a></p></td><td><p>Cloudflare Sandboxes give AI agents a persistent, isolated environment: a real computer with a shell, a filesystem, and background processes that starts on demand and picks up exactly where it left off.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/sandbox-auth/"><u>Dynamic, identity-aware, and secure: egress controls for Sandboxes</u></a></p></td><td><p>Outbound Workers for Sandboxes provide a programmable, zero-trust egress proxy for AI agents. This allows developers to inject credentials and enforce dynamic security policies without exposing sensitive tokens to untrusted code.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/durable-object-facets-dynamic-workers/"><u>Durable Objects in Dynamic Workers: Give each AI-generated app its own database</u></a></p></td><td><p>Durable Object Facets allows Dynamic Workers to instantiate Durable Objects with their own isolated SQLite databases. 
This enables developers to build platforms that run persistent, stateful code generated on the fly.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workflows-v2/"><u>Rearchitecting the Workflows control plane for the agentic era</u></a></p></td><td><p>Cloudflare Workflows, a durable execution engine for multi-step applications, now supports concurrency limits of 50,000 and creation rate limits of 300 through a rearchitected control plane, helping it scale to the demands of durable background agents.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GiG5cEdgwoLU94AuUuLfS/f9022abc8ce823ef2468860474598b6f/BLOG-3239_2.png" />
          </figure>
    <div>
      <h2>Security</h2>
      <a href="#security">
        
      </a>
    </div>
    <p>Running agents and their code is only half the challenge. Agents connect to private networks, access internal services, and take autonomous actions on behalf of users. When anyone in an organization can spin up their own agents, security can't be an afterthought. It has to be the default. This week, we launched the tools to make that easy.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/mesh/"><u>Secure private networking for everyone: users, nodes, agents, Workers — introducing Cloudflare Mesh</u></a></p></td><td><p>Cloudflare Mesh provides secure, private network access for users, nodes, and autonomous AI agents. By integrating with Workers VPC, developers can now grant agents scoped access to private databases and APIs without manual tunnels.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/managed-oauth-for-access/"><u>Managed OAuth for Access: make internal apps agent-ready in one click</u></a></p></td><td><p>Managed OAuth for Cloudflare Access helps AI agents securely navigate internal applications. By adopting RFC 9728, agents can authenticate on behalf of users without using insecure service accounts.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/improved-developer-security/"><u>Securing non-human identities: automated revocation, OAuth, and scoped permissions</u></a></p></td><td><p>Cloudflare is introducing scannable API tokens, enhanced OAuth visibility, and GA for resource-scoped permissions. These tools help developers implement a true least-privilege architecture while protecting against credential leakage.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/enterprise-mcp/"><u>Scaling MCP adoption: our reference architecture for enterprise MCP deployments</u></a></p></td><td><p>We share Cloudflare's internal strategy for governing MCP using Access, AI Gateway, and MCP server portals. 
We also launch Code Mode to slash token costs and recommend new rules for detecting Shadow MCP in Cloudflare Gateway.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6dT2Kw2vS0AJvk6Nw1oAg2/c1b9ca10314f85664ce6a939339f1247/BLOG-3239_3.png" />
          </figure>
    <div>
      <h2>Agent Toolbox</h2>
      <a href="#agent-toolbox">
        
      </a>
    </div>
    <p>A capable agent needs to be able to think and remember, communicate, and see. This means being powered with the right models, with access to the right tools and the right context for their task at hand. This week we shipped the primitives — inference, search, memory, voice, email, and a browser — that turn an agent into something that actually gets work done.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/project-think/"><u>Project Think: building the next generation of AI agents on Cloudflare</u></a></p></td><td><p>Announcing a preview of the next edition of the Agents SDK — from lightweight primitives to a batteries-included platform for AI agents that think, act, and persist.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/voice-agents/"><u>Add voice to your agent</u></a></p></td><td><p>An experimental voice pipeline for the Agents SDK enables real-time voice interactions over WebSockets. Developers can now build agents with continuous STT and TTS in just ~30 lines of server-side code.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/email-for-agents/"><u>Cloudflare Email Service: now in public beta. Ready for your agents</u></a></p></td><td><p>Agents are becoming multi-channel. That means making them available wherever your users already are — including the inbox. Cloudflare Email Service enters public beta with the infrastructure layer to make that easy: send, receive, and process email natively from your agents.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-platform/"><u>Cloudflare's AI platform: an inference layer designed for agents </u></a></p></td><td><p>We're building Cloudflare into a unified inference layer for agents, letting developers call models from 14+ providers. 
New features include Workers binding for running third-party models and an expanded catalog with multimodal models.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/high-performance-llms/"><u>Building the foundation for running extra-large language models</u></a></p></td><td><p>We built a custom technology stack to run fast large language models on Cloudflare’s infrastructure. This post explores the engineering trade-offs and technical optimizations required to make high-performance AI inference accessible.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/unweight-tensor-compression/"><u>Unweight: how we compressed an LLM 22% without sacrificing quality</u></a></p></td><td><p>Running large LLMs across Cloudflare’s network requires us to be smarter and more efficient about GPU memory bandwidth. That’s why we developed Unweight, a lossless inference-time compression system that achieves up to a 22% model footprint reduction, so that we can deliver faster and cheaper inference than ever before. </p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-agent-memory/"><u>Agents that remember: introducing Agent Memory</u></a></p></td><td><p>Cloudflare Agent Memory is a managed service that gives AI agents persistent memory, allowing them to recall what matters, forget what doesn't, and get smarter over time.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-search-agent-primitive/"><u>AI Search: the search primitive for your agents</u></a></p></td><td><p>AI Search is the search primitive for your agents. Create instances dynamically, upload files, and search across instances with hybrid retrieval and relevance boosting. 
Just create a search instance, upload, and search.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/browser-run-for-ai-agents/"><u>Browser Run: give your agents a browser</u></a></p></td><td><p>Browser Rendering is now Browser Run, with Live View, Human in the Loop, CDP access, session recordings, and 4x higher concurrency limits for AI agents.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3sTmO0Pt0PabTR2g8pZ963/a3f523a17db3a13301072232664b11c2/BLOG-3239_4.png" />
          </figure>
    <div>
      <h2>Prototype to production</h2>
      <a href="#prototype-to-production">
        
      </a>
    </div>
    <p>The best infrastructure is also one that’s easy to use. We want to meet developers and their agents where they’re already working: in the terminal, in the editor, in a prompt, and make the full Cloudflare platform accessible without context-switching.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/cf-cli-local-explorer/"><u>Building a CLI for all of Cloudflare</u></a></p></td><td><p>We’re introducing cf, a new unified CLI designed for consistency across the Cloudflare platform, alongside Local Explorer for debugging local data. These tools simplify how developers and AI agents interact with our nearly 3,000 API operations.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-agent-lee/"><u>Introducing Agent Lee - a new interface to the Cloudflare stack</u></a></p></td><td><p>Agent Lee is an in-dashboard agent that shifts Cloudflare’s interface from manual tab-switching to a single prompt. Using sandboxed TypeScript, it helps you troubleshoot and manage your stack as a grounded technical collaborator.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/flagship/"><u>Introducing Flagship: feature flags built for the age of AI</u></a></p></td><td><p>Introducing Flagship, a native feature flag service built on Cloudflare’s global network to eliminate the latency of third-party providers. 
By using KV and Durable Objects, Flagship allows for sub-millisecond flag evaluation.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/deploy-planetscale-postgres-with-workers/"><u>Deploy Postgres and MySQL databases with PlanetScale + Workers</u></a></p></td><td><p>Learn how to deploy PlanetScale Postgres and MySQL databases via Cloudflare and connect Cloudflare Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/registrar-api-beta/"><u>Register domains wherever you build: Cloudflare Registrar API now in beta</u></a></p></td><td><p>The Cloudflare Registrar API is now in beta. Developers and AI agents can search, check availability, and register domains at cost directly from their editor, their terminal, or their agent — without leaving their workflow.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BuOIU5Q3gfMIekZJEiBra/732c42b78aa397e7bb19b3816c7d1516/BLOG-3239_5.png" />
          </figure>
    <div>
      <h2>Agentic Web</h2>
      <a href="#agentic-web">
        
      </a>
    </div>
    <p>As more agents come online, they're still browsing an Internet that was built for people. Existing websites need new tools to control which bots can access their content, package and present it for agents, and measure how ready they are for this shift.</p><table><tr><th><p><b>Announcement</b></p></th><th><p><b>Summary</b></p></th></tr><tr><td><p><a href="https://blog.cloudflare.com/agent-readiness/"><u>Introducing the Agent Readiness score. Is your site agent-ready?</u></a></p></td><td><p>The Agent Readiness score can help site owners understand how well their websites support AI agents. Here we explore new standards, share Radar data, and detail how we made Cloudflare’s docs the most agent-friendly on the web.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-redirects/"><u>Redirects for AI Training enforces canonical content</u></a></p></td><td><p>Soft directives don’t stop crawlers from ingesting deprecated content. Redirects for AI Training allows anybody on Cloudflare to redirect verified crawlers to canonical pages with one toggle and no origin changes.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/network-performance-agents-week/"><u>Agents Week: Network performance update</u></a></p></td><td><p>By migrating our request handling layer to a Rust-based architecture called FL2, Cloudflare has increased its performance lead, and is now the fastest provider in 60% of the world’s top networks. We use real-user measurements and TCP connection trimeans to ensure our data reflects the actual experience of people on the Internet.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/shared-dictionaries/"><u>Shared dictionary compression that keeps up with the agentic web</u></a></p></td><td><p>We give you a sneak peek of our support for shared compression dictionaries, show you how it improves page load times, and reveal when you’ll be able to try the beta yourself.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jKSnMla0hclK999LtHfuu/886f66ffda1c699a5a15bf6748a6cf9b/BLOG-3239_6.png" />
          </figure>
    <div>
      <h2>That’s a wrap</h2>
      <a href="#thats-a-wrap">
        
      </a>
    </div>
    <p>Agents Week 2026 is ending, but the agentic cloud is just getting started. Everything we shipped this week — from compute and security to the agent toolbox and the agentic web — is the foundation. We're going to keep building on it to give you everything you need to build what's next.</p><p>We also have more blog posts coming out today and tomorrow to continue the story, so keep an eye out for the latest <a href="https://blog.cloudflare.com/"><u>on our blog</u></a>.</p><p>If you're building on any of what we announced this week, we want to hear about it. Come find us on <a href="https://x.com/cloudflaredev"><u>X</u></a> or <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a>, or head to the <a href="https://developers.cloudflare.com/products/?product-group=Developer+platform"><u>developer documentation</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YLNvB0KU0Eh3KpAj6YgD6/77c190843d1fa76b212571f1d607d8d2/BLOG-3239_7.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SDK]]></category>
            <category><![CDATA[Browser Run]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Browser Rendering]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Sandbox]]></category>
            <category><![CDATA[LLM]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[API]]></category>
            <guid isPermaLink="false">25CSwW9eXDM4FJOdVPLf8f</guid>
            <dc:creator>Ming Lu</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s AI Platform: an inference layer designed for agents]]></title>
            <link>https://blog.cloudflare.com/ai-platform/</link>
            <pubDate>Thu, 16 Apr 2026 14:05:00 GMT</pubDate>
            <description><![CDATA[ We're building AI Gateway into a unified inference layer for AI, letting developers call models from 14+ providers. New features include Workers AI binding integration and an expanded catalog with multimodal models.
 ]]></description>
            <content:encoded><![CDATA[ <p>AI models are changing quickly: the best model for agentic coding today might, three months from now, be a completely different model from a different provider. On top of this, real-world use cases often require calling more than one model. Your customer support agent might use a fast, cheap model to classify a user's message; a large, reasoning model to plan its actions; and a lightweight model to execute individual tasks.</p><p>This means you need access to all the models, without tying yourself financially and operationally to a single provider. You also need the right systems in place to monitor costs across providers, ensure reliability when one of them has an outage, and manage latency no matter where your users are.</p><p>These challenges are present whenever you’re building with AI, but they get even more pressing when you’re building <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>agents</u></a>. A simple chatbot might make one <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>inference</u></a> call per user prompt. An agent might chain ten calls together to complete a single task, and suddenly a single slow provider doesn't add 50ms; it adds 500ms. One failed request isn't just a retry; it's a cascade of downstream failures. </p><p>Since launching AI Gateway and Workers AI, we’ve seen incredible adoption from developers building AI-powered applications on Cloudflare, and we’ve been shipping fast to keep up! In just the past few months, we've refreshed the dashboard, added zero-setup default gateways, automatic retries on upstream failures, and more granular logging controls. Today, we’re making Cloudflare into a unified inference layer: one API to access any AI model from any provider, built to be fast and reliable. </p>
    <div>
      <h3>One catalog, one unified endpoint</h3>
      <a href="#one-catalog-one-unified-endpoint">
        
      </a>
    </div>
    <p>Starting today, you can call third-party models using the same AI.run() binding you already use for Workers AI. If you’re using Workers, switching from a Cloudflare-hosted model to one from OpenAI, Anthropic, or any other provider is a one-line change. </p>
            <pre><code>const response = await env.AI.run('anthropic/claude-opus-4-6', {
  input: 'What is Cloudflare?',
}, {
  gateway: { id: "default" },
});</code></pre>
            <p>For those who don’t use Workers, we’ll be releasing REST API support in the coming weeks, so you can access the full model catalog from any environment.</p><p>We’re also excited to share that you'll now have access to 70+ models across 12+ providers — all through one API, one line of code to switch between them, and one set of credits to pay for them. And we’re quickly expanding this as we go.</p><p>You can browse through our <a href="https://developers.cloudflare.com/ai/models"><u>model catalog</u></a> to find the best model for your use case, from open-source models hosted on Cloudflare Workers AI to proprietary models from the major model providers. We’re excited to be expanding access to models from <b>Alibaba Cloud, AssemblyAI, Bytedance, Google, InWorld, MiniMax, OpenAI, Pixverse, Recraft, Runway, and Vidu</b> — who will provide their models through AI Gateway. Notably, we’re expanding our model offerings to include image, video, and speech models so that you can build multimodal applications.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ez5tichGEEn5k6SzCgWLm/380a685b14ee9732fdf87c6f88c8f39e/BLOG-3209_2.png" />
          </figure><p>Accessing all your models through one API also means you can manage all your AI spend in one place. Most companies today are calling <a href="https://aidbintel.com/pulse-survey"><u>an average of 3.5 models</u></a> across multiple providers, which means no one provider is able to give you a holistic view of your AI usage. <b>With AI Gateway, you’ll get one centralized place to monitor and manage AI spend.</b></p><p>By including custom metadata with your requests, you can get a breakdown of your costs on the attributes that you care about most, like spend by free vs. paid users, by individual customers, or by specific workflows in your app.</p>
            <pre><code>const response = await env.AI.run('@cf/moonshotai/kimi-k2.5', {
  prompt: 'What is AI Gateway?'
}, {
  metadata: { "teamId": "AI", "userId": 12345 }
});</code></pre>
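<p>To make that concrete, here is the kind of roll-up such tagging enables. This is an illustrative sketch only: the records and field names (<code>metadata</code>, <code>cost_cents</code>) are hypothetical stand-ins, not an AI Gateway log format.</p>

```python
# Hypothetical per-request cost records, tagged with the same kind of
# metadata shown above (record shapes and field names are illustrative).
from collections import defaultdict

records = [
    {"metadata": {"plan": "free"}, "cost_cents": 2},
    {"metadata": {"plan": "paid"}, "cost_cents": 10},
    {"metadata": {"plan": "free"}, "cost_cents": 1},
]

def spend_by(records, key):
    """Total spend grouped by one metadata attribute."""
    totals = defaultdict(int)
    for r in records:
        totals[r["metadata"].get(key, "unknown")] += r["cost_cents"]
    return dict(totals)

print(spend_by(records, "plan"))  # {'free': 3, 'paid': 10}
```

<p>The same grouping works for any attribute you attach, such as <code>userId</code> or a workflow name.</p>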
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ez3O7rmbrCUdD5R5UcuP9/4c219ff5ce1e24a0485a931b6af47608/BLOG-3209_3.png" />
          </figure>
    <div>
      <h3>Bring your own model</h3>
      <a href="#bring-your-own-model">
        
      </a>
    </div>
    <p>AI Gateway gives you access to models from all the providers through one API. But sometimes you need to run a model you've fine-tuned on your own data or one optimized for your specific use case. For that, we are working on letting users bring their own model to Workers AI. </p><p>The overwhelming majority of our traffic comes from dedicated instances for Enterprise customers who are running custom models on our platform, and we want to bring this to more customers. To do this, we leverage Replicate’s <a href="https://cog.run/"><u>Cog</u></a> technology to help you containerize machine learning models.</p><p>Cog is designed to be quite simple: all you need to do is write down dependencies in a cog.yaml file, and your inference code in a Python file. Cog abstracts away all the hard things about packaging ML models, such as CUDA dependencies, Python versions, weight loading, etc. </p><p>Example of a <code>cog.yaml</code> file:</p>
            <pre><code>build:
  python_version: "3.13"
  python_requirements: requirements.txt
predict: "predict.py:Predictor"</code></pre>
            <p>Example of a <code>predict.py</code> file, which has a function to set up the model and a function that runs when you receive an inference request (a prediction):</p>
            <pre><code>from cog import BasePredictor, Path, Input
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.net = torch.load("weights.pth")

    def predict(self,
            image: Path = Input(description="Image to enlarge"),
            scale: float = Input(description="Factor to scale image by", default=1.5)
    ) -&gt; Path:
        """Run a single prediction on the model"""
        # ... pre-processing ...
        output = self.net(input)
        # ... post-processing ...
        return output</code></pre>
            <p>Then, you can run <code>cog build</code> to build your container image, and push your Cog container to Workers AI. We will deploy and serve the model for you, which you then access through your usual Workers AI APIs. </p><p>We’re working on some big projects to be able to bring this to more customers, like customer-facing APIs and wrangler commands so that you can push your own containers, as well as faster cold starts through GPU snapshotting. We’ve been testing this internally with Cloudflare teams and some external customers who are guiding our vision. If you’re interested in being a design partner with us, please reach out! Soon, anyone will be able to package their model and use it through Workers AI.</p>
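<p>Concretely, the local loop looks like this. The image name is illustrative, and we stop short of the push step since, as noted above, the commands for pushing to Workers AI are still in development:</p>

```shell
# Build a container image from the cog.yaml and predict.py in this directory
cog build -t my-upscaler

# Sanity-check the model locally before pushing it anywhere;
# the -i flags match the inputs declared in predict.py above
cog predict -i image=@input.png -i scale=2.0
```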
    <div>
      <h3>The fast path to first token</h3>
      <a href="#the-fast-path-to-first-token">
        
      </a>
    </div>
    <p>Using Workers AI models with AI Gateway is particularly powerful if you’re building live agents – where a user's perception of speed hinges on time to first token or how quickly the agent starts responding, rather than how long the full response takes. Even if total inference is 3 seconds, getting that first token 50ms faster makes the difference between an agent that feels zippy and one that feels sluggish.</p><p>Cloudflare's network of data centers in 330 cities around the world means AI Gateway is positioned close to both users and inference endpoints, minimizing the network time before streaming begins.</p><p>Workers AI also hosts open-source models on its public catalog, which now includes large models purpose-built for agents, including <a href="https://developers.cloudflare.com/workers-ai/models/kimi-k2.5"><u>Kimi K2.5</u></a> and real-time voice models. When you call these Cloudflare-hosted models through AI Gateway, there's no extra hop over the public Internet since your code and inference run on the same global network, giving your agents the lowest latency possible.</p>
    <div>
      <h3>Built for reliability with automatic failover</h3>
      <a href="#built-for-reliability-with-automatic-failover">
        
      </a>
    </div>
    <p>When building agents, speed is not the only factor that users care about – reliability matters too. Every step in an agent workflow depends on the steps before it. Reliable inference is crucial for agents because one call failing can affect the entire downstream chain. </p><p>Through AI Gateway, if you're calling a model that's available on multiple providers and one provider goes down, we'll automatically route to another available provider without you having to write any failover logic of your own. </p><p>If you’re building <a href="https://blog.cloudflare.com/project-think/"><u>long-running agents with Agents SDK</u></a>, your streaming inference calls are also resilient to disconnects. AI Gateway buffers streaming responses as they’re generated, independently of your agent's lifetime. If your agent is interrupted mid-inference, it can reconnect to AI Gateway and retrieve the response without having to make a new inference call or paying twice for the same output tokens. Combined with the Agents SDK's built-in checkpointing, the end user never notices.</p>
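<p>Conceptually, that buffering works like the sketch below. This is not AI Gateway’s actual implementation, only a minimal illustration of the idea: generated chunks are retained independently of the consumer, so a client that drops mid-stream resumes from its last offset instead of re-running inference.</p>

```python
# Minimal sketch of a resumable stream buffer (illustrative, not AI Gateway code).
class BufferedStream:
    def __init__(self):
        self.chunks = []   # everything generated so far stays buffered
        self.done = False

    def append(self, chunk):
        """Producer side: inference keeps writing even if no one is reading."""
        self.chunks.append(chunk)

    def finish(self):
        self.done = True

    def read_from(self, offset):
        """Consumer side: resume from the last chunk the client received."""
        return self.chunks[offset:]

stream = BufferedStream()
for token in ["Cloud", "flare", " is", " fast"]:
    stream.append(token)
stream.finish()

# A client that disconnected after two chunks re-reads the rest:
# no new inference call, no second bill for the same output tokens.
print("".join(stream.read_from(2)))  # " is fast"
```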
    <div>
      <h3>Replicate</h3>
      <a href="#replicate">
        
      </a>
    </div>
    <p>The Replicate team has officially <a href="https://blog.cloudflare.com/replicate-joins-cloudflare/"><u>joined</u></a> our AI Platform team, so much so that we don’t even consider ourselves separate teams anymore. We’ve been hard at work on integrations between Replicate and Cloudflare, which include bringing all the Replicate models onto AI Gateway and replatforming the hosted models onto Cloudflare infrastructure. Soon, you’ll be able to access the models you loved on Replicate through AI Gateway, and host the models you deployed on Replicate on Workers AI as well.</p>
    <div>
      <h3>Get started</h3>
      <a href="#get-started">
        
      </a>
    </div>
    <p>To get started, check out our documentation for <a href="https://developers.cloudflare.com/ai-gateway"><u>AI Gateway</u></a> or <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>. Learn more about building agents on Cloudflare through <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>. </p>
<p></p> ]]></content:encoded>
            <category><![CDATA[Agents Week]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[LLM]]></category>
            <guid isPermaLink="false">2vIUFXJLJcMgjgY6jFnQ7G</guid>
            <dc:creator>Ming Lu</dc:creator>
            <dc:creator>Michelle Chen</dc:creator>
        </item>
    </channel>
</rss>