
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 19:49:25 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Deploy your own AI vibe coding platform — in one click! ]]></title>
            <link>https://blog.cloudflare.com/deploy-your-own-ai-vibe-coding-platform/</link>
            <pubDate>Tue, 23 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Introducing VibeSDK, an open-source AI "vibe coding" platform that anyone can deploy to build their own custom platform. It comes ready with code generation, a sandbox environment, and project deployment. ]]></description>
            <content:encoded><![CDATA[ <p>It’s an exciting time to build applications. With the recent AI-powered <a href="https://www.cloudflare.com/learning/ai/ai-vibe-coding/"><u>"vibe coding"</u></a> boom, anyone can build a website or application by simply describing what they want in a few sentences. We’re already seeing organizations expose this functionality to both their users and internal employees, empowering anyone to build out what they need.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40Jzjser2hE91b1y3p80pm/7bc1a7f0ee4cfaeb7a39bb413969b189/1.png" />
          </figure><p>Today, we’re excited to open-source an AI vibe coding platform, VibeSDK, to enable anyone to run an entire vibe coding platform themselves, end-to-end, with just one click.</p><p>Want to see it for yourself? Check out our <a href="https://build.cloudflare.dev/"><u>demo platform</u></a> that you can use to create and deploy applications. Or better yet, click the button below to deploy your own AI-powered platform, and dive into the repo to learn about how it’s built.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/vibesdk"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Deploying VibeSDK sets up everything you need to run your own AI-powered development platform:</p><ul><li><p><b>Integration with LLMs</b> to generate code, build applications, debug errors, and iterate in real time, powered by <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>. </p></li><li><p><b>Isolated development environments </b>that allow users to safely build and preview their applications in secure sandboxes.</p></li><li><p><b>Infinite scale</b> that allows you to deploy thousands or even millions of end-user applications, all served on Cloudflare’s global network.</p></li><li><p><b>Observability and caching</b> across multiple AI providers, giving you <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">insight into costs and performance</a> with built-in caching for popular responses. </p></li><li><p><b>Project templates</b> that the LLM can use as a starting point to build common applications and speed up development.</p></li><li><p><b>One-click project export</b> to the user’s Cloudflare account or GitHub repo, so users can take their code and continue development on their own.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jsExecZKmgJHARsxDsMkM/60803b1ba2c68514a053f4d000bf8576/2.png" />
          </figure><p><b>Building an AI vibe coding platform from start to finish</b></p><p><b>Step 0: Get started immediately with VibeSDK</b></p><p>We’re seeing companies build their own AI vibe coding platforms to enable both internal and external users. With a vibe coding platform, internal teams like marketing, product, and support can build their own landing pages, prototypes, or internal tools without having to rely on the engineering team. Similarly, SaaS companies can embed this capability into their product to allow users to build their own customizations. </p><p>Every platform has unique requirements and specializations. By <a href="https://www.cloudflare.com/learning/ai/how-to-get-started-with-vibe-coding/">building your own</a>, you can write custom logic to prompt LLMs for your specific needs, giving your users more relevant results. This also grants you complete control over the development environment and <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">application hosting</a>, giving you a secure platform that keeps your data private and within your control. </p><p>We wanted to make it easy for anyone to build this themselves, which is why we built a complete platform that comes with project templates, previews, and project deployment. Developers can repurpose the whole platform, or simply take the components they need and customize them to fit their needs.</p><p><b>Step 1: Finding a safe, isolated environment for running untrusted, AI generated code</b></p><p>AI can now build entire applications, but there's a catch: you need somewhere safe to run this untrusted, AI-generated code. 
Imagine if an <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>LLM</u></a> writes an application that needs to install packages, run build commands, and start a development server — you can't just run this directly on your infrastructure where it might affect other users or systems.</p><p>With <a href="https://developers.cloudflare.com/changelog/2025-06-24-announcing-sandboxes/"><u>Cloudflare Sandboxes</u></a>, you don't have to worry about this. Every user gets their own isolated environment where the AI-generated code can do anything a normal development environment can do: install npm packages, run builds, and start servers, all fully contained in a secure, container-based environment that can't affect anything outside its sandbox. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AVtsIotiISgrjspHaRLsg/6cd2e56d5edb7021d63c01183362aafe/3.png" />
          </figure><p>The platform assigns each user to their own sandbox based on their session, so that if a user comes back, they can continue to access the same container with their files intact:</p>
            <pre><code>// Creating a sandbox client for a user session
const sandbox = getSandbox(env.Sandbox, sandboxId);

// Now AI can safely write and execute code in this isolated environment
await sandbox.writeFile('app.js', aiGeneratedCode);
await sandbox.exec('npm install express');
await sandbox.exec('node app.js');</code></pre>
            <p><b>Step 2: Generating the code</b></p><p>Once the sandbox is created, you have a development environment that can bring the code to life. VibeSDK orchestrates the whole workflow: writing the code, installing the necessary packages, and starting the development server. If you ask it to build a to-do app, it will generate the React application, write the component files, run <code>bun install</code> to get the dependencies, and start the server, so you can see the end result. </p><p>Once the user submits their request, the AI will generate all the necessary files, whether it's a React app, Node.js API, or full-stack application, and write them directly to the sandbox:</p>
            <pre><code>async function generateAndWriteCode(instanceId: string) {
    // AI generates the application structure
    const aiGeneratedFiles = await callAIModel("Create a React todo app");
    
    // Write all generated files to the sandbox
    for (const file of aiGeneratedFiles) {
        await sandbox.writeFile(
            `${instanceId}/${file.path}`,
            file.content
        );
        // User sees: "✓ Created src/App.tsx"
        notifyUser(`✓ Created ${file.path}`);
    }
}</code></pre>
            <p>To speed this up even more, we’ve provided a set of templates, stored in an <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2 bucket</u></a>, that the platform can use and quickly customize, instead of generating every file from scratch. This is just an initial set, but you can expand it and add more examples. </p><p><b>Step 3: Getting a preview of your deployment</b></p><p>Once everything is ready, the platform starts the development server and uses the Sandbox SDK to expose it to the internet with a public preview URL which allows users to instantly see their AI-generated application running live:</p>
            <pre><code>// Start the development server in the sandbox
const processId = await sandbox.startProcess(
    `bun run dev`, 
    { cwd: instanceId }
);

// Create a public preview URL 
const preview = await sandbox.exposePort(3000, { 
    hostname: 'preview.example.com' 
});

// User instantly gets: "https://my-app-xyz.preview.example.com"
notifyUser(`✓ Preview ready at: ${preview.url}`);</code></pre>
            <p><b>Step 4: Test, log, fix, repeat</b></p><p>But that’s not all! Throughout this process, the platform will capture console output, build logs, and error messages and feed them back to the LLM for automatic fixes. As the platform makes any updates or fixes, the user can see it all happening live — the file editing, installation progress, and error resolution. </p><p><b>Deploying applications: From Sandbox to Region Earth</b></p><p>Once the application is developed, it needs to be deployed. The platform packages everything in the sandbox and then uses a separate specialized "deployment sandbox" to deploy the application to <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a>. This deployment sandbox runs <code>wrangler deploy</code> inside the secure environment to publish the application to Cloudflare's global network. </p><p>Since the platform may deploy thousands or even millions of applications, Workers for Platforms is used to deploy the Workers at scale. Although all the Workers are deployed to the same namespace, they are all isolated from one another by default, ensuring there’s no cross-tenant access. Once deployed, each application receives its own isolated Worker instance with a unique public URL like <code>my-app.vibe-build.example.com</code>. </p>
            <pre><code>async function deployToWorkersForPlatforms(instanceId: string) {
    // 1. Package the app from development sandbox
    const devSandbox = getSandbox(env.Sandbox, instanceId);
    const packagedApp = await devSandbox.exec('zip -r app.zip .');
    
    // 2. Transfer to specialized deployment sandbox
    const deploymentSandbox = getSandbox(env.DeployerServiceObject, 'deployer');
    await deploymentSandbox.writeFile('app.zip', packagedApp);
    await deploymentSandbox.exec('unzip app.zip');
    
    // 3. Deploy using Workers for Platforms dispatch namespace
    const deployResult = await deploymentSandbox.exec(`
        bunx wrangler deploy \\
        --dispatch-namespace vibe-sdk-build-default-namespace
    `);
    
    // Each app gets its own isolated Worker and unique URL
    // e.g., https://my-app.example.com
    return `https://${instanceId}.example.com`;
}</code></pre>
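The capture-and-feed-back cycle from Step 4 is the one part of the workflow the snippets above don't show. A rough sketch follows; the helper callbacks are hypothetical stand-ins for the platform's own orchestration code, not part of the Sandbox SDK:

```typescript
// Hypothetical sketch of the Step 4 loop: run the build, collect errors,
// and feed them back to the model until the build is clean or we give up.
type BuildResult = { ok: boolean; errors: string[] };

async function fixUntilClean(
  build: () => Promise<BuildResult>,          // runs the app, captures logs
  askModelForFix: (errors: string[]) => Promise<void>, // LLM applies fixes
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await build();             // capture console/build output
    if (result.ok) return true;               // nothing left to fix
    await askModelForFix(result.errors);      // feed errors back to the LLM
  }
  return false;                               // still broken after maxAttempts
}
```

Bounding the number of attempts matters: an LLM that keeps introducing new errors would otherwise loop forever and burn inference credits.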
            <p><b>Exportable Applications</b></p><p>The platform also allows users to export their application to their own Cloudflare account and GitHub repo, so they can continue development on their own. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3itKLSmnTzk2NapoaDQG6e/a0aba26f83cb01db5bd7957a8dfe18f4/Screenshot_2025-09-23_at_9.22.28%C3%A2__AM.png" />
          </figure><p><b>Observability, caching, and multi-model support built in!</b></p><p>It's no secret that different LLMs have their specialties, which means that when building an AI-powered platform, you may end up using a few different models for different operations. By default, VibeSDK leverages Google’s Gemini models (gemini-2.5-pro, gemini-2.5-flash-lite, gemini-2.5-flash) for project planning, code generation, and debugging. </p><p>VibeSDK is automatically set up with <a href="https://www.cloudflare.com/developer-platform/products/ai-gateway/"><u>AI Gateway</u></a>, so that by default, the platform is able to:</p><ul><li><p>Use a unified access point to <a href="https://blog.cloudflare.com/ai-gateway-aug-2025-refresh/"><u>route requests across LLM providers</u></a>, allowing you to use models from a range of providers (OpenAI, Anthropic, Google, and others)</p></li><li><p>Cache popular responses, so when someone asks to "build a to-do list app", the gateway can serve a cached response instead of going to the provider (saving inference costs)</p></li><li><p>Get observability into the requests, tokens used, and response times across all providers in one place</p></li><li><p>Track costs across models and integrations</p></li></ul><p><b>Open-sourced, so you can build your own platform!</b></p><p>We're open-sourcing VibeSDK for the same reason Cloudflare open-sourced the Workers runtime — we believe the best development happens in the open. That's why we wanted to make it as easy as possible for anyone to build their own AI coding platform, whether it's for internal company use, for your website builder, or for the next big vibe coding platform. We tied all the pieces together for you, so you can get started with the click of a button instead of spending months figuring out how to connect everything yourself. 
To learn more, check out our <a href="https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-vibe-coding-platform/"><u>reference architecture</u></a> for vibe coding platforms. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/vibesdk"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sjy1OUciwTKnKmhXJfbLQ/d0e01bee3867d639077f134fc6374948/5.png" />
          </figure><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Containers]]></category>
            <category><![CDATA[Cloudflare for SaaS]]></category>
            <guid isPermaLink="false">6hS4bQv1FRDVwOoB1HrU3u</guid>
            <dc:creator>Ashish Kumar Singh</dc:creator>
            <dc:creator>Abhishek Kankani</dc:creator>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint]]></title>
            <link>https://blog.cloudflare.com/ai-gateway-aug-2025-refresh/</link>
            <pubDate>Wed, 27 Aug 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint. ]]></description>
            <content:encoded><![CDATA[ <p>Getting the observability you need is challenging enough when the code is deterministic, but AI presents a new challenge — a core part of your user’s experience now relies on a non-deterministic engine that provides unpredictable outputs. On top of that, there are many factors that can influence the results, from the model to the system prompt. And you still have to worry about performance, reliability, and costs. </p><p>Solving performance, reliability, and observability challenges is exactly what Cloudflare was built for, and two years ago, with the introduction of AI Gateway, we wanted to extend to our users the same levels of control in the age of AI. </p><p>Today, we’re excited to announce several features to make building AI applications easier and more manageable: unified billing, secure key storage, dynamic routing, and security controls with Data Loss Prevention (DLP). This means that AI Gateway becomes your go-to place to control costs and API keys, route between different models and providers, and manage your AI traffic. Check out our new <a href="https://ai.cloudflare.com/gateway"><u>AI Gateway landing page</u></a> for more information at a glance.</p>
    <div>
      <h2>Connect to all your favorite AI providers</h2>
      <a href="#connect-to-all-your-favorite-ai-providers">
        
      </a>
    </div>
    <p>When using an AI provider, you typically have to sign up for an account, get an API key, manage rate limits, top up credits — all within an individual provider’s dashboard. Multiply that for each of the different providers you might use, and you’ll soon be left with an administrative headache of bills and keys to manage.</p><p>With <a href="https://www.cloudflare.com/developer-platform/products/ai-gateway/"><u>AI Gateway</u></a>, you can now connect to major AI providers directly through Cloudflare and manage everything through a single control plane. We’re excited to partner with Anthropic, Google, Groq, OpenAI, and xAI to provide Cloudflare users with access to their models directly through Cloudflare. With this, you’ll have access to 350+ models across six providers.</p><p>You can now get billed for usage across different providers directly through your Cloudflare account. This feature is available for Workers Paid users, where you’ll be able to add credits to your Cloudflare account and use them for <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>AI inference</u></a> with all the supported providers. You’ll be able to see real-time usage statistics and manage your credits through the AI Gateway dashboard. Your AI Gateway inference usage will also be documented in your monthly Cloudflare invoice. No more signing up and paying for each individual model provider account. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4t2j5frheaYOLznprTL58p/f0fb4c6de2aad70c82a23bc35873ea50/image1.png" />
          </figure><p>Usage rates are based on then-current list prices from model providers — all you will need to cover is the transaction fee as you load credits into your account. Since this is one of the first times we’re launching a credits-based billing system at Cloudflare, we’re releasing this feature in Closed Beta — sign up for access <a href="https://forms.gle/3LGAzN2NDXqtbjKR9"><u>here</u></a>.</p>
    <div>
      <h3>BYO Provider Keys, now with Cloudflare Secrets Store</h3>
      <a href="#byo-provider-keys-now-with-cloudflare-secrets-store">
        
      </a>
    </div>
    <p>Although we’ve introduced unified billing, some users might still want to manage their own accounts and keys with providers. We’re happy to say that AI Gateway will continue supporting our <a href="https://developers.cloudflare.com/ai-gateway/configuration/bring-your-own-keys/"><u>BYO Key feature</u></a>, improving the experience of BYO Provider Keys by integrating with Cloudflare’s secrets management product, <a href="https://developers.cloudflare.com/secrets-store/"><u>Secrets Store</u></a>. Now, you can seamlessly and securely store your keys in one centralized location and distribute them without relying on plain text. Secrets Store uses a two-level key hierarchy with AES encryption to ensure that your secret stays safe, while maintaining low latency through our global configuration system, <a href="https://blog.cloudflare.com/quicksilver-v2-evolution-of-a-globally-distributed-key-value-store-part-1/"><u>Quicksilver</u></a>.</p><p>You can now save and manage keys directly through your AI Gateway dashboard or through the Secrets Store <a href="http://dash.cloudflare.com/?to=/:account/secrets-store"><u>dashboard</u></a>, <a href="https://developers.cloudflare.com/api/resources/secrets_store/subresources/stores/subresources/secrets/methods/create/"><u>API</u></a>, or <a href="https://developers.cloudflare.com/workers/wrangler/commands/#secrets-store-secret"><u>Wrangler</u></a> by using the new <b>AI Gateway</b> <b>scope</b>. Scoping your secrets to AI Gateway ensures that only this specific service will be able to access your keys, meaning the secret cannot be used in a Workers binding or anywhere else on Cloudflare’s platform.</p>
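A two-level key hierarchy of the kind described above is commonly implemented as envelope encryption: a fresh data key encrypts each secret, and a long-lived master key encrypts ("wraps") the data key. The sketch below is a toy illustration of that general pattern using Node's built-in crypto module; it is not Cloudflare's actual Secrets Store implementation:

```typescript
// Toy envelope-encryption sketch (illustrative only, not Secrets Store).
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

type Box = { iv: Buffer; body: Buffer; tag: Buffer };

function encrypt(key: Buffer, data: Buffer): Box {
  const iv = randomBytes(12); // fresh nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(data), cipher.final()]);
  return { iv, body, tag: cipher.getAuthTag() };
}

function decrypt(key: Buffer, box: Box): Buffer {
  const d = createDecipheriv("aes-256-gcm", key, box.iv);
  d.setAuthTag(box.tag);
  return Buffer.concat([d.update(box.body), d.final()]);
}

// Level 1: a per-secret data key encrypts the value.
// Level 2: the master key wraps the data key. Only the master key
// needs to be guarded long-term.
function seal(master: Buffer, plaintext: string) {
  const dataKey = randomBytes(32);
  return {
    secret: encrypt(dataKey, Buffer.from(plaintext)),
    wrappedKey: encrypt(master, dataKey),
  };
}

function open(master: Buffer, sealed: ReturnType<typeof seal>): string {
  const dataKey = decrypt(master, sealed.wrappedKey);
  return decrypt(dataKey, sealed.secret).toString();
}
```

The practical benefit is that rotating the master key only requires re-wrapping the small data keys, not re-encrypting every stored secret.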
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hiSSQi2lQGWQnGYe4e9p1/dadc4fde865010d9e263badb75847992/2.png" />
          </figure><p>You can pass your AI provider keys without including them directly in the request header. Instead of including the actual value, you can deploy the secret only using the Secrets Store reference: </p>
            <pre><code>curl -X POST https://gateway.ai.cloudflare.com/v1/&lt;ACCOUNT_ID&gt;/my-gateway/anthropic/v1/messages \
 --header 'cf-aig-authorization: CLOUDFLARE_AI_GATEWAY_TOKEN' \
 --header 'anthropic-version: 2023-06-01' \
 --header 'Content-Type: application/json' \
 --data  '{"model": "claude-3-opus-20240229", "messages": [{"role": "user", "content": "What is Cloudflare?"}]}'</code></pre>
            <p>Or, using JavaScript: </p>
            <pre><code>import Anthropic from '@anthropic-ai/sdk';


const anthropic = new Anthropic({
  apiKey: "CLOUDFLARE_AI_GATEWAY_TOKEN",
  baseURL: "https://gateway.ai.cloudflare.com/v1/&lt;ACCOUNT_ID&gt;/my-gateway/anthropic",
});


const message = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  messages: [{role: "user", content: "What is Cloudflare?"}],
  max_tokens: 1024
});</code></pre>
            <p>By using Secrets Store to deploy your secrets, you no longer need to give every developer access to every key — instead, you can rely on Secrets Store’s <a href="https://developers.cloudflare.com/secrets-store/access-control/"><u>role-based access control</u></a> to further lock down these sensitive values. For example, you might want your security administrators to have Secrets Store admin permissions so that they can create, update, and delete the keys when necessary. With Cloudflare <a href="https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/audit_logs/?cf_target_id=1C767B900C4419A313C249A5D99921FB"><u>audit logging</u></a>, all such actions will be logged so you know exactly who did what and when. Your developers, on the other hand, might only need Deploy permissions, so they can reference the values in code, whether that is a Worker or AI Gateway or both. This way, you reduce the risk of the secret getting leaked accidentally or intentionally by a malicious actor. This also allows you to update your provider keys in one place and automatically propagate that value to any AI Gateway using those values, simplifying the management. </p>
    <div>
      <h3>Unified Request/Response</h3>
      <a href="#unified-request-response">
        
      </a>
    </div>
    <p>We made it easy to try out different AI models – and the developer experience should be just as simple. Each provider has slight differences in how it expects requests to be formatted, so we’re excited to launch an automatic translation layer between providers. When you send a request through AI Gateway, it just works – no matter which provider or model you use.</p>
            <pre><code>import OpenAI from "openai";
const client = new OpenAI({
  apiKey: "YOUR_PROVIDER_API_KEY", // Provider API key
  // NOTE: the OpenAI client automatically adds /chat/completions to the end of the URL, you should not add it yourself.
  baseURL:
    "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat",
});

const response = await client.chat.completions.create({
  model: "google-ai-studio/gemini-2.0-flash",
  messages: [{ role: "user", content: "What is Cloudflare?" }],
});

console.log(response.choices[0].message.content);</code></pre>
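To give a sense of what a translation layer like this does, here is a simplified sketch of mapping an OpenAI-style chat request onto Anthropic's request shape. The field names follow the two providers' public chat APIs, but the mapping itself is illustrative, not AI Gateway's actual code:

```typescript
// Illustrative cross-provider request translation (not AI Gateway's code).
type OpenAIChatRequest = {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  max_tokens?: number;
};

function toAnthropicRequest(req: OpenAIChatRequest) {
  // Anthropic's Messages API takes the system prompt as a top-level
  // field rather than as a message in the list, and requires max_tokens.
  const system = req.messages.find((m) => m.role === "system")?.content;
  return {
    model: req.model,
    system,
    messages: req.messages.filter((m) => m.role !== "system"),
    max_tokens: req.max_tokens ?? 1024, // assumed default for the sketch
  };
}
```

Real providers differ in more ways than this (streaming formats, tool calls, stop sequences), which is exactly why having the gateway absorb those differences is valuable.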
            
    <div>
      <h2>Dynamic Routes</h2>
      <a href="#dynamic-routes">
        
      </a>
    </div>
    <p>When we first launched <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a>, it was an easy way for people to intercept HTTP requests and customize actions based on different attributes. We think the same customization is necessary for AI traffic, so we’re launching <a href="https://developers.cloudflare.com/ai-gateway/features/dynamic-routing/"><u>Dynamic Routes</u></a> in AI Gateway.</p><p>Dynamic Routes allows you to define actions based on request attributes. If you have free users, maybe you want to rate-limit them to a certain number of requests per second (RPS) or a certain dollar spend. Or maybe you want to conduct an A/B test and split 50% of traffic to Model A and 50% of traffic to Model B. You might also want to chain several models in a row, like adding custom guardrails or enhancing a prompt before it goes to another model. All of this is possible with Dynamic Routes!</p><p>We’ve built a slick UI in the AI Gateway dashboard where you can define simple if/else interactions based on request attributes or a percentage split. Once you define a route, you’ll use the route as the “model” name in your input JSON and we will manage the traffic as you defined. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qLp4KT8ASCLRv2pyM2kxR/3151e32afa4d8447ae07a5a8fb09a9b6/3.png" />
          </figure>
            <pre><code>import OpenAI from "openai";

const cloudflareToken = "CF_AIG_TOKEN";
const accountId = "{account_id}";
const gatewayId = "{gateway_id}";
const baseURL = `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}`;

const openai = new OpenAI({
  apiKey: cloudflareToken,
  baseURL,
});

try {
  const model = "dynamic/&lt;your-dynamic-route-name&gt;";
  const messages = [{ role: "user", content: "What is a neuron?" }];
  const chatCompletion = await openai.chat.completions.create({
    model,
    messages,
  });
  const response = chatCompletion.choices[0].message;
  console.log(response);
} catch (e) {
  console.error(e);
}</code></pre>
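Under the hood, a percentage split like the 50/50 A/B test described above amounts to deterministically bucketing each request. A hypothetical sketch of that logic (the hash and route names are made up, and this is not AI Gateway's implementation):

```typescript
// Illustrative percentage-split routing (not AI Gateway's implementation).
// Hashing a stable key (user ID, session) means each caller consistently
// lands in the same bucket across requests.
function pickModel(
  stableKey: string,
  splits: { model: string; percent: number }[], // percents should sum to 100
): string {
  // Tiny deterministic hash of the key into a bucket in [0, 100)
  let h = 0;
  for (const ch of stableKey) h = (h * 31 + ch.charCodeAt(0)) % 100_000;
  const bucket = h % 100;
  let cumulative = 0;
  for (const { model, percent } of splits) {
    cumulative += percent;
    if (bucket < cumulative) return model;
  }
  return splits[splits.length - 1].model; // fall through to the last split
}
```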
            
    <div>
      <h2>Built-in security with Firewall in AI Gateway</h2>
      <a href="#built-in-security-with-firewall-in-ai-gateway">
        
      </a>
    </div>
    <p>Earlier this year we announced <a href="https://developers.cloudflare.com/changelog/2025-02-26-guardrails/"><u>Guardrails</u></a> in AI Gateway, and now we’re expanding our security capabilities to include Data Loss Prevention (DLP) scanning in AI Gateway’s Firewall. With this, you can select the DLP profiles you are interested in blocking or flagging, and we will scan requests for the matching content. DLP profiles include general categories like “Financial Information” and “Social Security, Insurance, Tax and Identifier Numbers” that everyone has access to with a free Zero Trust account. If you would like to safeguard text that is unique to your business, the upgraded Zero Trust plan allows you to create custom DLP profiles to catch that sensitive data.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yti8oy4TF01EdZMtYN1If/d2f3bd804873644862fbd61b07d3574a/4.png" />
          </figure><p>False positives and grey-area situations happen, so we give admins control over whether to fully block or just alert on DLP matches. This allows administrators to monitor for potential issues without creating roadblocks for their users. Each log in AI Gateway now includes details about the DLP profiles matched on your request, and the action that was taken:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pDdqy8bVmsiyjm4sg2pkG/ff97d9069e200fb859c1dc2daed8e4fa/5.png" />
          </figure>
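Conceptually, DLP scanning boils down to matching request content against each profile's patterns and deciding whether to block the request or merely flag it. A simplified sketch with made-up profiles and patterns — this is not Cloudflare's detection logic:

```typescript
// Simplified DLP-style scan (illustrative profiles, not Cloudflare's).
type DlpProfile = {
  name: string;
  patterns: RegExp[];
  action: "block" | "flag"; // admins choose per profile
};

function scanRequest(body: string, profiles: DlpProfile[]) {
  const matched = profiles.filter((p) => p.patterns.some((re) => re.test(body)));
  return {
    matched: matched.map((p) => p.name), // for the per-request log entry
    blocked: matched.some((p) => p.action === "block"),
  };
}
```

Returning the matched profile names alongside the block decision is what makes the alert-only mode useful: admins can watch the logs for a while before turning enforcement on.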
    <div>
      <h2>More coming soon…</h2>
      <a href="#more-coming-soon">
        
      </a>
    </div>
    <p>If you think about the history of Cloudflare, you’ll notice similar patterns that we’re following for the new vision for AI Gateway. We want developers of AI applications to be able to have simple interconnectivity, observability, security, customizable actions, and more — something that Cloudflare has a proven track record of accomplishing for global Internet traffic. We see AI Gateway as a natural extension of Cloudflare’s mission, and we’re excited to make it come to life.</p><p>We’ve got more launches up our sleeves, but we couldn’t wait to get these first handful of features into your hands. Read up about it in our <a href="https://developers.cloudflare.com/ai-gateway/"><u>developer docs</u></a>, <a href="https://developers.cloudflare.com/ai-gateway/get-started/"><u>give it a try</u></a>, and let us know what you think. If you want to explore larger deployments, <a href="https://www.cloudflare.com/plans/enterprise/contact/?utm_medium=referral&amp;utm_source=blog&amp;utm_campaign=2025-q3-acq-gbl-connectivity-ge-ge-general-ai_week_blog"><u>reach out for a consultation </u></a>with Cloudflare experts.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/LTpdSaZMBbdOzASW8ggoS/6610f437d955174d7f7f1212617a4365/6.png" />
          </figure><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">6O1tkxTcxxG9hgxI8X9kFH</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Abhishek Kankani</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[Make your apps truly interactive with Cloudflare Realtime and RealtimeKit ]]></title>
            <link>https://blog.cloudflare.com/introducing-cloudflare-realtime-and-realtimekit/</link>
            <pubDate>Wed, 09 Apr 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ Announcing Cloudflare Realtime and RealtimeKit, a complete toolkit for shipping real-time audio and video apps in days with SDKs for Kotlin, React Native, Swift, JavaScript, and Flutter. ]]></description>
            <content:encoded><![CDATA[ <p>Over the past few years, we’ve seen developers push the boundaries of what’s possible with real-time communication — tools for collaborative work, massive online watch parties, and interactive live classrooms are all exploding in popularity.</p><p>We use AI more and more in our daily lives. Text-based interactions are evolving into something more natural: voice and video. When users interact with the applications and tools that AI developers create, they have high expectations for response time and connection quality. Complex AI applications are built on not just one tool, but a combination of tools, often from different providers, which requires a well-connected cloud to sit in the middle and coordinate them.</p><p>Developers already use <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>, <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, and our WebRTC <a href="https://developers.cloudflare.com/calls/"><u>SFU</u></a> and <a href="https://developers.cloudflare.com/calls/turn/"><u>TURN</u></a> services to build powerful apps without needing to think about coordinating compute or media services to be closest to their user. It’s only natural for there to be a singular <a href="https://blog.cloudflare.com/best-place-region-earth-inference/"><u>"Region: Earth"</u></a> for real-time applications.</p><p>We're excited to introduce <a href="https://realtime.cloudflare.com"><u>Cloudflare Realtime</u></a> — a suite of products to help you make your apps truly interactive with real-time audio and video experiences. Cloudflare Realtime now brings together our SFU, STUN, and TURN services, along with the new RealtimeKit.</p>
    <div>
      <h2>Say hello to RealtimeKit</h2>
      <a href="#say-hello-to-realtimekit">
        
      </a>
    </div>
    <p>RealtimeKit is a collection of mobile SDKs (iOS, Android, React Native, Flutter), SDKs for the Web (React, Angular, vanilla JS, WebComponents), and server-side services (recording, coordination, transcription) that make it easier than ever to build real-time voice, video, and AI applications. RealtimeKit also includes user interface components for building interfaces quickly.</p><p>The amazing team behind <a href="https://dyte.io/"><u>Dyte</u></a>, a leading company in the real-time ecosystem, joined Cloudflare to accelerate the development of RealtimeKit. The Dyte team spent years making real-time experiences accessible to developers of all skill levels, and has a deep understanding of the developer journey — they built abstractions that hid WebRTC's complexity without removing its power.</p><p>Already a user of Cloudflare’s products, Dyte was a perfect complement to Cloudflare’s existing real-time infrastructure spanning 300+ cities worldwide. They built a developer experience layer that made complex media capabilities accessible. We’re incredibly excited to have their team join Cloudflare as, together, we help developers define the future of user interaction for real-time applications.</p>
    <div>
      <h2>Interactive applications shouldn't require WebRTC expertise </h2>
      <a href="#interactive-applications-shouldnt-require-webrtc-expertise">
        
      </a>
    </div>
    <p>For many developers, what starts as "let's add video chat" can quickly escalate into weeks of technical deep dives into WebSockets and WebRTC. While we are big believers in the <a href="https://blog.cloudflare.com/tag/webrtc/"><u>potential of WebRTC</u></a>, we also know that it comes with real challenges when building for the first time. Debugging WebRTC sessions can require developers to learn about esoteric new concepts such as navigating <a href="https://webrtcforthecurious.com/docs/03-connecting/#ice"><u>ICE candidate failures</u></a>, <a href="https://webrtcforthecurious.com/docs/03-connecting/#turn"><u>TURN server configurations</u></a>, and <a href="https://webrtcforthecurious.com/docs/03-connecting/#turn"><u>SDP negotiation issues</u></a>.</p><p>The challenges of building a WebRTC app for the first time don’t stop there. Device management adds another layer of complexity. Inconsistent camera and microphone APIs across browsers and mobile platforms introduce unexpected behaviors in production. Chrome handles resolution switching one way, Safari another, and Android WebViews break in uniquely frustrating ways. We regularly see applications that function perfectly in testing environments fail mysteriously when deployed to certain devices or browsers.</p><p>Systems that work flawlessly with 5 test users collapse under the load of 50 real-world participants. Bandwidth adaptation falters, connection management becomes unwieldy, and maintaining consistent quality across diverse network conditions proves nearly impossible without specialized expertise. </p><p>What starts as a straightforward feature becomes a multi-month project requiring low-level engineering to solve problems that aren’t core to your business.</p><p>We realized that we needed to extend our products to client devices to help solve these problems.</p>
    <div>
      <h2>RealtimeKit SDKs for Kotlin, React Native, Swift, JavaScript, Flutter</h2>
      <a href="#realtimekit-sdks-for-kotlin-react-native-swift-javascript-flutter">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20EM65tMDpRznldLcfRYSo/90db0a5576bcecf0eaa3d28f7feaa65e/Final.png" />
          </figure><p>RealtimeKit is our toolkit for building real-time applications without common WebRTC headaches. The core of RealtimeKit is a set of cross-platform SDKs that handle all the low-level complexities, from session establishment and media permissions to NAT traversal and connection management. Instead of spending weeks implementing and debugging these foundations, you can focus entirely on creating unique experiences for your users.</p><p>Recording capabilities come built-in, eliminating one of the most commonly requested yet difficult-to-implement features in real-time applications. Whether you need to capture meetings for compliance, save virtual classroom sessions for students who couldn't attend live, or enable content creators to archive their streams, RealtimeKit handles the entire media pipeline. No more wrestling with MediaRecorder APIs or building custom recording infrastructure — it just works, scaling alongside your user base.</p><p>We've also integrated voice AI capabilities from providers like ElevenLabs directly into the platform. Adding AI participants to conversations becomes as simple as a function call, opening up entirely new interaction models. These AI voices operate with the same low latency as human participants — tens of milliseconds across our global network — creating truly synchronous experiences where AI and humans converse naturally. Combined with RealtimeKit's ability to scale to millions of concurrent participants, this enables entirely new categories of applications that weren't feasible before.</p>
    <div>
      <h2>The Developer Experience</h2>
      <a href="#the-developer-experience">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GAxgCMn36QUgxSlF7m0xL/34574d1d1ba3da305e46b41bc455e769/2.png" />
          </figure><p>RealtimeKit focuses on what developers want to accomplish, rather than how the underlying protocols work. Adding participants or turning on recording is just an API call away. SDKs handle device enumeration, permission requests, and UI rendering across platforms. Behind the scenes, we’re solving the thorny problems of media orchestration and state management that can be challenging to debug.</p><p>We’ve been quietly working toward launching Cloudflare RealtimeKit for years. From the very beginning, our global network has been optimized to minimize latency between our network and end users, which is where the majority of network disruptions are introduced.</p><p>We developed a <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/"><u>Selective Forwarding Unit (SFU)</u></a> that intelligently routes media streams between participants, dynamically adjusting quality based on network conditions. Our <a href="https://blog.cloudflare.com/lt-lt/webrtc-turn-using-anycast/"><u>TURN infrastructure</u></a> solves the <a href="https://webrtchacks.com/an-intro-to-webrtcs-natfirewall-problem/"><u>complex problem of NAT traversal</u></a>, allowing connections to be established reliably behind firewalls. With Workers AI, we brought inference capabilities to the edge, minimizing latency for AI-powered interactions. Workers and Durable Objects provided the WebSockets coordination layer necessary for maintaining consistent state across participants.</p>
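To make the coordination layer's job concrete, here is a minimal, illustrative sketch — not RealtimeKit's actual API — of the kind of participant-state reduction a WebSocket coordination layer performs: each client event is folded into a shared roster so that replaying the same ordered event log on any node yields the same consistent view.

```javascript
// Illustrative only: event names and roster shape are assumptions for
// this sketch, not RealtimeKit's real protocol.
function applyEvent(roster, event) {
  const next = new Map(roster);
  switch (event.type) {
    case "join":
      next.set(event.id, { id: event.id, muted: false });
      break;
    case "leave":
      next.delete(event.id);
      break;
    case "mute":
      if (next.has(event.id)) {
        next.set(event.id, { ...next.get(event.id), muted: event.muted });
      }
      break;
  }
  return next;
}

// Every participant applies the same ordered log and converges on the
// same roster: here, only bob remains, and he is muted.
const events = [
  { type: "join", id: "alice" },
  { type: "join", id: "bob" },
  { type: "mute", id: "bob", muted: true },
  { type: "leave", id: "alice" },
];
const roster = events.reduce(applyEvent, new Map());
```

Keeping state transitions pure and order-dependent like this is what lets a single coordinator (such as a Durable Object) serialize events and fan the result out over WebSockets.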
    <div>
      <h2>SFU and TURN services are now Generally Available</h2>
      <a href="#sfu-and-turn-services-are-now-generally-available">
        
      </a>
    </div>
    <p>We’re also announcing the General Availability of our SFU and TURN services for WebRTC developers who need more control and a low-level integration with the Cloudflare network.</p><p>The SFU now supports simulcast, a very common feature request. Simulcast lets developers select among multiple versions of a media stream, similar to choosing the quality level of an online video, but for WebRTC. Users on different network conditions can now receive different quality levels, either chosen automatically by the SFU or selected manually.</p><p>Our TURN service now offers advanced analytics with insight into regional, country, and city-level usage metrics. Together with <a href="https://developers.cloudflare.com/calls/turn/replacing-existing/#tag-users-with-custom-identifiers"><u>Custom Identifiers</u></a> and revocable tokens, Cloudflare’s TURN service offers an in-depth view into usage and helps prevent abuse.</p><p>Our SFU and TURN products remain among the most affordable ways to build WebRTC apps at scale, at 5 cents per GB after 1,000 GB of free usage each month.</p>
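The selection that simulcast enables can be sketched in a few lines: given several encodings of the same stream, pick the highest-quality layer whose bitrate fits the viewer's estimated bandwidth. The layer list and bitrates below are made-up examples for illustration, not Cloudflare's actual defaults.

```javascript
// Hypothetical simulcast layers, sorted highest to lowest quality.
const layers = [
  { rid: "f", width: 1280, height: 720, bitrateKbps: 1500 },
  { rid: "h", width: 640, height: 360, bitrateKbps: 500 },
  { rid: "q", width: 320, height: 180, bitrateKbps: 150 },
];

function pickLayer(estimatedKbps) {
  // Take the first layer that fits the estimate; fall back to the lowest
  // layer so the viewer always receives something rather than nothing.
  return layers.find((l) => l.bitrateKbps <= estimatedKbps)
    ?? layers[layers.length - 1];
}
```

An SFU re-runs a decision like this continuously per subscriber as bandwidth estimates change, which is how one publisher can serve viewers on very different networks, e.g. `pickLayer(2000)` selects the 720p layer while `pickLayer(200)` selects the 180p one.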
    <div>
      <h2>Partnering with Hugging Face to make realtime AI communication seamless</h2>
      <a href="#partnering-with-hugging-face-to-make-realtime-ai-communication-seamless">
        
      </a>
    </div>
    <p><a href="https://fastrtc.org/"><u>FastRTC</u></a> is a lightweight Python library from Hugging Face that makes it easy to stream real-time audio and video into and out of AI models using WebRTC. TURN servers are a critical part of WebRTC infrastructure and ensure that media streams can reliably connect across firewalls and NATs. For users of FastRTC, setting up a globally distributed TURN server can be complex and expensive.  </p><p>Through our new partnership with Hugging Face, FastRTC users now have free access to Cloudflare’s TURN Server product, giving them reliable connectivity out of the box. Developers get 10 GB of TURN bandwidth each month using just a Hugging Face access token — no setup, no credit card, no servers to manage. As projects grow, they can easily switch to a Cloudflare account for more capacity and a larger free tier.</p><p>This integration allows AI developers to focus on building voice interfaces, video pipelines, and multimodal apps without worrying about NAT traversal or network reliability. FastRTC simplifies the code, and Cloudflare ensures it works everywhere. See these <a href="https://huggingface.co/fastrtc"><u>demos</u></a> to get started.</p>
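For readers unfamiliar with how TURN credentials are consumed, here is a hedged sketch of the general shape: short-lived credentials fetched from a credential service are placed into a WebRTC configuration's ICE server list. Every field and URL below is an illustrative placeholder — this is not the actual payload FastRTC or Cloudflare returns.

```javascript
// Map a fetched TURN credential payload (shape assumed for this sketch)
// into the iceServers structure that an RTCPeerConnection expects.
function toIceServers(creds) {
  return {
    iceServers: [
      {
        urls: creds.urls,          // e.g. ["turn:…:3478?transport=udp"]
        username: creds.username,  // short-lived, issued per session
        credential: creds.password,
      },
    ],
  };
}

// Placeholder values only; real credentials come from the service.
const config = toIceServers({
  urls: ["turn:turn.example.invalid:3478?transport=udp"],
  username: "demo-user",
  password: "demo-secret",
});
```

Because the credentials are short-lived and issued per session, they can be revoked or rotated without redeploying the app, which is what makes a hosted TURN offering practical to hand out via an access token.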
    <div>
      <h2>Ship AI-powered realtime apps in days, not weeks</h2>
      <a href="#ship-ai-powered-realtime-apps-in-days-not-weeks">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1b4lK5Qvq1ImlBa3lFEH7l/bf212f51a1f178285747e759c1365ec9/3.png" />
          </figure><p>With RealtimeKit, developers can now implement complex real-time experiences in hours. The SDKs abstract away the most time-consuming aspects of WebRTC development while providing APIs tailored to common implementation patterns. Here are a few of the possibilities: </p><ul><li><p><b>Video conferencing</b>: Add multi-participant video calls to your application with just a few lines of code. RealtimeKit handles the connection management, bandwidth adaptation, and device permissions that typically consume weeks of development time.</p></li><li><p><b>Live streaming</b>: Build interactive broadcasts where hosts can stream to thousands of viewers while selectively bringing participants on-screen. The SFU automatically optimizes media routing based on participant roles and network conditions.</p></li><li><p><b>Real-time synchronization</b>: Implement watch parties or collaborative viewing experiences where content playback stays synchronized across all participants. The timing API handles the complex delay calculations and adjustments traditionally required.</p></li><li><p><b>Voice AI integrations</b>: Add transcription and AI voice participants without building custom media pipelines. RealtimeKit's media processing APIs integrate with your existing authentication and storage systems rather than requiring separate infrastructure.</p></li></ul><p>Watching our early testers use RealtimeKit, we've seen that it doesn't just accelerate their existing projects; it fundamentally changes which projects become viable. </p>
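As a flavor of the delay calculations involved in the synchronization case above, here is a minimal sketch — not the RealtimeKit timing API — of the core decision a watch-party client makes: the host periodically broadcasts its playback position plus a timestamp, and each viewer extrapolates where the host's playhead is now, seeking only when drift exceeds a threshold to avoid constant stuttering.

```javascript
// Assumed tolerance for this sketch, in seconds.
const DRIFT_THRESHOLD_S = 0.5;

function syncDecision(hostPositionS, hostSentAtMs, localPositionS, nowMs) {
  // Extrapolate where the host's playhead should be at the local "now".
  const target = hostPositionS + (nowMs - hostSentAtMs) / 1000;
  const drift = localPositionS - target;
  return Math.abs(drift) > DRIFT_THRESHOLD_S
    ? { action: "seek", to: target }
    : { action: "none" };
}
```

This deliberately ignores clock skew between host and viewer; a production system would first estimate the clock offset (NTP-style round-trip measurement) and feed the corrected timestamps into the same calculation.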
    <div>
      <h2>Get started with RealtimeKit</h2>
      <a href="#get-started-with-realtimekit">
        
      </a>
    </div>
    <p>Starting today, you'll notice a new <a href="https://dash.cloudflare.com/?to=/:account/realtime"><u>Realtime section in your Cloudflare Dashboard</u></a>. This section includes our TURN and SFU products alongside our latest product, RealtimeKit. </p><p>RealtimeKit is currently in a closed beta ready for select customers to start kicking the tires. There is currently no cost to test it out during the beta. Request early access <a href="https://www.cloudflare.com/cloudflare-realtimekit-signup/"><u>here</u></a> or via the link in your <a href="https://dash.cloudflare.com/?to=/:account/realtime"><u>Cloudflare dashboard</u></a>. We can’t wait to see what you build. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53RI7RZhs5Y0zHMHKg6fLh/e155081853355a7714e052ff23db6269/4.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[Real-time]]></category>
            <category><![CDATA[TURN Server]]></category>
            <guid isPermaLink="false">opC8hYtVRkyCEv7Yze4R0</guid>
            <dc:creator>Zaid Farooqui</dc:creator>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Abhishek Kankani</dc:creator>
        </item>
    </channel>
</rss>