
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 01:09:27 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How we use Abstract Syntax Trees (ASTs) to turn Workflows code into visual diagrams ]]></title>
            <link>https://blog.cloudflare.com/workflow-diagrams/</link>
            <pubDate>Fri, 27 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workflows are now visualized via step diagrams in the dashboard. Here’s how we translate your TypeScript code into a visual representation of the workflow.  ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Cloudflare Workflows</u></a> is a durable execution engine that lets you chain steps, retry on failure, and persist state across long-running processes. Developers use Workflows to power background agents, manage data pipelines, build human-in-the-loop approval systems, and more.</p><p>Last month, we <a href="https://developers.cloudflare.com/changelog/post/2026-02-03-workflows-visualizer/"><u>announced</u></a> that every workflow deployed to Cloudflare now has a complete visual diagram in the dashboard.</p><p>We built this because being able to visualize your applications is more important now than ever before. Coding agents are writing code that you may or may not be reading. However, the shape of what gets built still matters: how the steps connect, where they branch, and what's actually happening.</p><p>If you've seen diagrams from visual workflow builders before, they usually work from something declarative: JSON configs, YAML, drag-and-drop. However, Cloudflare Workflows are just code. They can include <a href="https://developers.cloudflare.com/workflows/build/workers-api/"><u>Promises, Promise.all, loops, and conditionals</u></a>, and steps can be nested in functions or classes. This dynamic execution model makes rendering a diagram a bit more complicated.</p><p>We use Abstract Syntax Trees (ASTs) to statically derive the graph, tracking <code>Promise</code> and <code>await</code> relationships to understand what runs in parallel, what blocks, and how the pieces connect.</p><p>Keep reading to learn how we built these diagrams, or deploy your first workflow and see the diagram for yourself.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/workflows-starter-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Here’s an example of a diagram generated from Cloudflare Workflows code:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44NnbqiNda2vgzIEneHQ3W/044856325693fbeb75ed1ab38b4db1c2/image1.png" />
          </figure>
    <div>
      <h3>Dynamic workflow execution</h3>
      <a href="#dynamic-workflow-execution">
        
      </a>
    </div>
    <p>Generally, workflow engines can execute according to either dynamic or sequential (static) execution order. Sequential execution might seem like the more intuitive solution: trigger workflow → step A → step B → step C, where step B starts executing immediately after the engine completes step A, and so forth.</p><p><a href="https://developers.cloudflare.com/workflows/"><u>Cloudflare Workflows</u></a> follow the dynamic execution model. Since workflows are just code, the steps execute as the runtime encounters them. When the runtime discovers a step, that step gets passed over to the workflow engine, which manages its execution. The steps are not inherently sequential unless awaited — the engine executes all unawaited steps in parallel. This way, you can write your workflow code as flow control without additional wrappers or directives. Here’s how the handoff works:</p><ol><li><p>An <i>engine</i>, which is a “supervisor” Durable Object for that instance, spins up. The engine is responsible for the logic of the actual workflow execution.</p></li><li><p>The engine triggers a <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/how-workers-for-platforms-works/#user-workers"><u>user worker</u></a> via <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/dynamic-dispatch/"><u>dynamic dispatch</u></a>, passing control over to the Workers runtime.</p></li><li><p>When the runtime encounters a <code>step.do</code>, it passes the execution back to the engine.</p></li><li><p>The engine executes the step, persists the result (or throws an error, if applicable), and triggers the user Worker again.</p></li></ol><p>With this architecture, the engine does not inherently “know” the order of the steps that it is executing — but for a diagram, the order of steps becomes crucial information. The challenge lies in translating the vast majority of workflows accurately into a diagnostically helpful graph; with the diagrams in beta, we will continue to iterate and improve on these representations.</p>
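<p>The effect of awaiting (or not awaiting) a step can be sketched with plain <code>asyncio</code>, entirely outside the Workflows SDK (the step names and durations here are made up for illustration):</p>

```python
import asyncio

order: list[str] = []

async def step(name: str, seconds: float) -> str:
    # Stand-in for step.do: simulate some work, then record completion order.
    await asyncio.sleep(seconds)
    order.append(name)
    return name

async def run() -> None:
    # Unawaited steps: both are discovered immediately and run in parallel,
    # so the shorter "b" finishes before the longer "a".
    a = step("a", 0.2)
    b = step("b", 0.1)
    await asyncio.gather(a, b)
    # Awaited step: blocks the workflow body until it completes.
    await step("c", 0.05)

asyncio.run(run())
assert order == ["b", "a", "c"]  # b overtakes a; c always last
```

<p>Had <code>a</code> and <code>b</code> been awaited where they were declared, they would have completed strictly in declaration order.</p>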
    <div>
      <h3>Parsing the code</h3>
      <a href="#parsing-the-code">
        
      </a>
    </div>
    <p>Fetching the script at <a href="https://developers.cloudflare.com/workers/get-started/guide/#4-deploy-your-project"><u>deploy time</u></a>, instead of run time, allows us to parse the workflow in its entirety to statically generate the diagram. </p><p>Taking a step back, here is the life of a workflow deployment:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zoOCYji26ahxzh594VavQ/63ad96ae033653ffc7fd98df01ea6e27/image5.png" />
          </figure><p>To create the diagram, we fetch the script after it has been bundled by the internal configuration service which deploys Workers (step 2 under Workflow deployment). Then, we use a parser to create an abstract syntax tree (AST) representing the workflow, and our internal service generates and traverses an intermediate graph with all WorkflowEntrypoints and calls to workflow steps. We render the diagram based on the final result from our API. </p><p>When a Worker is deployed, the configuration service bundles (using <a href="https://esbuild.github.io/"><u>esbuild</u></a> by default) and minifies the code <a href="https://developers.cloudflare.com/workers/wrangler/configuration/#inheritable-keys"><u>unless specified otherwise</u></a>. This presents another challenge — while Workflows in TypeScript follow an intuitive pattern, their minified JavaScript (JS) can be dense and indigestible. There are also different ways that code can be minified, depending on the bundler. </p><p>Here’s an example of Workflow code that shows <b>agents executing in parallel:</b></p>
            <pre><code>const summaryPromise = step.do(
  `summary agent (loop ${loop})`,
  async () =&gt; {
    return runAgentPrompt(
      this.env,
      SUMMARY_SYSTEM,
      buildReviewPrompt(
        'Summarize this text in 5 bullet points.',
        draft,
        input.context
      )
    );
  }
);

const correctnessPromise = step.do(
  `correctness agent (loop ${loop})`,
  async () =&gt; {
    return runAgentPrompt(
      this.env,
      CORRECTNESS_SYSTEM,
      buildReviewPrompt(
        'List correctness issues and suggested fixes.',
        draft,
        input.context
      )
    );
  }
);

const clarityPromise = step.do(
  `clarity agent (loop ${loop})`,
  async () =&gt; {
    return runAgentPrompt(
      this.env,
      CLARITY_SYSTEM,
      buildReviewPrompt(
        'List clarity issues and suggested fixes.',
        draft,
        input.context
      )
    );
  }
);</code></pre>
            <p>Here’s a snippet of the code bundled and minified with <a href="https://rspack.rs/"><u>rspack</u></a>:</p>
            <pre><code>class pe extends e{async run(e,t){de("workflow.run.start",{instanceId:e.instanceId});const r=await t.do("validate payload",async()=&gt;{if(!e.payload.r2Key)throw new Error("r2Key is required");if(!e.payload.telegramChatId)throw new Error("telegramChatId is required");return{r2Key:e.payload.r2Key,telegramChatId:e.payload.telegramChatId,context:e.payload.context?.trim()}}),s=await t.do("load source document from r2",async()=&gt;{const e=await this.env.REVIEW_DOCUMENTS.get(r.r2Key);if(!e)throw new Error(`R2 object not found: ${r.r2Key}`);const t=(await e.text()).trim();if(!t)throw new Error("R2 object is empty");return t}),n=Number(this.env.MAX_REVIEW_LOOPS??"5"),o=this.env.RESPONSE_TIMEOUT??"7 days",a=async(s,i,c)=&gt;{if(s&gt;n)return le("workflow.loop.max_reached",{instanceId:e.instanceId,maxLoops:n}),await t.do("notify max loop reached",async()=&gt;{await se(this.env,r.telegramChatId,`Review stopped after ${n} loops for ${e.instanceId}. Start again if you still need revisions.`)}),{approved:!1,loops:n,finalText:i};const h=t.do(`summary agent (loop ${s})`,async()=&gt;te(this.env,"You summarize documents. Keep the output short, concrete, and factual.",ue("Summarize this text in 5 bullet points.",i,r.context)))...</code></pre>
            <p>Or, bundled with <a href="https://vite.dev/"><u>vite</u></a>, here is a minified snippet:</p>
            <pre><code>class ht extends pe {
  async run(e, r) {
    b("workflow.run.start", { instanceId: e.instanceId });
    const s = await r.do("validate payload", async () =&gt; {
      if (!e.payload.r2Key)
        throw new Error("r2Key is required");
      if (!e.payload.telegramChatId)
        throw new Error("telegramChatId is required");
      return {
        r2Key: e.payload.r2Key,
        telegramChatId: e.payload.telegramChatId,
        context: e.payload.context?.trim()
      };
    }), n = await r.do(
      "load source document from r2",
      async () =&gt; {
        const i = await this.env.REVIEW_DOCUMENTS.get(s.r2Key);
        if (!i)
          throw new Error(`R2 object not found: ${s.r2Key}`);
        const c = (await i.text()).trim();
        if (!c)
          throw new Error("R2 object is empty");
        return c;
      }
    ), o = Number(this.env.MAX_REVIEW_LOOPS ?? "5"), l = this.env.RESPONSE_TIMEOUT ?? "7 days", a = async (i, c, u) =&gt; {
      if (i &gt; o)
        return H("workflow.loop.max_reached", {
          instanceId: e.instanceId,
          maxLoops: o
        }), await r.do("notify max loop reached", async () =&gt; {
          await J(
            this.env,
            s.telegramChatId,
            `Review stopped after ${o} loops for ${e.instanceId}. Start again if you still need revisions.`
          );
        }), {
          approved: !1,
          loops: o,
          finalText: c
        };
      const h = r.do(
        `summary agent (loop ${i})`,
        async () =&gt; _(
          this.env,
          et,
          K(
            "Summarize this text in 5 bullet points.",
            c,
            s.context
          )
        )
      )...</code></pre>
            <p>Minified code can get pretty gnarly — and depending on the bundler, it can get gnarly in a bunch of different directions.</p><p>We needed a way to parse the various forms of minified code quickly and precisely. We decided <code>oxc-parser</code> from the <a href="https://oxc.rs/"><u>JavaScript Oxidation Compiler</u></a> (OXC) was perfect for the job. We first tested this idea with a container running Rust. Every script ID was sent to a <a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queue</u></a>, after which messages were popped and sent to the container for processing. Once we confirmed this approach worked, we moved to a Worker written in Rust. Workers supports running <a href="https://developers.cloudflare.com/workers/languages/rust/"><u>Rust via WebAssembly</u></a>, and the package was small enough to make this straightforward.</p><p>The Rust Worker is responsible for first converting the minified JS into AST node types, then converting those AST node types into the graphical version of the workflow that is rendered on the dashboard. To do this, we generate a graph of pre-defined <a href="https://developers.cloudflare.com/workflows/build/visualizer/"><u>node types</u></a> for each workflow and translate it into our graph representation through a series of node mappings. </p>
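<p>Our parser operates on JavaScript via <code>oxc-parser</code>, but the core idea — walk the AST and collect every call whose callee is <code>step.do</code> — is easy to illustrate with Python's built-in <code>ast</code> module on a Python-flavored snippet (an analogy only, not our implementation):</p>

```python
import ast

source = """
async def run(event, step):
    await step.do("validate payload", validate)
    for item in items:
        await step.do(f"process {item}", handler)
"""

class StepCollector(ast.NodeVisitor):
    """Record the name argument of every step.do(...) call in the tree."""
    def __init__(self) -> None:
        self.steps: list[str] = []

    def visit_Call(self, node: ast.Call) -> None:
        func = node.func
        # Match attribute calls of the exact form step.do(...)
        if (isinstance(func, ast.Attribute) and func.attr == "do"
                and isinstance(func.value, ast.Name) and func.value.id == "step"):
            first = node.args[0] if node.args else None
            if isinstance(first, ast.Constant) and isinstance(first.value, str):
                self.steps.append(first.value)
            else:
                # Template strings and variables can't be resolved statically.
                self.steps.append("<dynamic>")
        self.generic_visit(node)

collector = StepCollector()
collector.visit(ast.parse(source))
assert collector.steps == ["validate payload", "<dynamic>"]
```

<p>Minification makes the JavaScript version of this harder, since identifiers are renamed and expressions collapsed, but the call shape <code>x.do(name, fn)</code> is the kind of invariant such a traversal can key on.</p>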
    <div>
      <h3>Rendering the diagram</h3>
      <a href="#rendering-the-diagram">
        
      </a>
    </div>
    <p>There were two challenges to rendering a diagram version of the workflow: how to track step and function relationships correctly, and how to define the workflow node types as simply as possible while covering all the surface area.</p><p>To guarantee that step and function relationships are tracked correctly, we needed to collect both the function and step names. As we discussed earlier, the engine only has information about the steps, but a step may be dependent on a function, or vice versa. For example, developers might wrap steps in functions or define functions as steps. They could also call steps within a function that come from different <a href="https://blog.cloudflare.com/workers-javascript-modules/"><u>modules</u></a> or rename steps. </p><p>Although the library passes the initial hurdle by giving us the AST, we still have to decide how to parse it. Some code patterns require additional creativity. For example, functions — within a <code>WorkflowEntrypoint</code>, there can be functions that call steps directly, indirectly, or not at all. Consider <code>functionA</code>, which contains <code>console.log(await functionB(), await functionC())</code>, where <code>functionB</code> calls a <code>step.do()</code>. In that case, both <code>functionA</code> and <code>functionB</code> should be included on the workflow diagram; however, <code>functionC</code> should not. To catch all functions that include direct or indirect step calls, we create a subgraph for each function and check whether it contains a step call itself or whether it calls another function which might. Those subgraphs are represented by a function node, which contains all of its relevant nodes. If a function node is a leaf of the graph, meaning it has no direct or indirect workflow steps within it, it is trimmed from the final output.</p><p>We check for other patterns as well, including a list of static steps from which we can infer the workflow diagram, or variables defined in up to ten different ways. If your script contains multiple workflows, we follow a similar pattern to the subgraphs created for functions, abstracted one level higher. </p><p>For every AST node type, we had to consider every way it could be used inside of a workflow: loops, branches, promises, parallels, awaits, arrow functions… the list goes on. Even within these paths, there are dozens of possibilities. Consider just a few of the possible ways to loop:</p>
            <pre><code>// for...of
for (const item of items) {
	await step.do(`process ${item}`, async () =&gt; item);
}
// while
while (shouldContinue) {
	await step.do('poll', async () =&gt; getStatus());
}
// map
await Promise.all(
	items.map((item) =&gt; step.do(`map ${item}`, async () =&gt; item)),
);
// forEach
await items.forEach(async (item) =&gt; {
	await step.do(`each ${item}`, async () =&gt; item);
});</code></pre>
            <p>And beyond looping, how to handle branching:</p>
            <pre><code>// switch / case
switch (action.type) {
	case 'create':
		await step.do('handle create', async () =&gt; {});
		break;
	default:
		await step.do('handle unknown', async () =&gt; {});
		break;
}

// if / else if / else
if (status === 'pending') {
	await step.do('pending path', async () =&gt; {});
} else if (status === 'active') {
	await step.do('active path', async () =&gt; {});
} else {
	await step.do('fallback path', async () =&gt; {});
}

// ternary operator
await (cond
	? step.do('ternary true branch', async () =&gt; {})
	: step.do('ternary false branch', async () =&gt; {}));

// nullish coalescing with step on RHS
const myStepResult =
	variableThatCanBeNullUndefined ??
	(await step.do('nullish fallback step', async () =&gt; 'default'));

// try/catch with finally
try {
	await step.do('try step', async () =&gt; {});
} catch (_e) {
	await step.do('catch step', async () =&gt; {});
} finally {
	await step.do('finally step', async () =&gt; {});
}</code></pre>
            <p>Our goal was to create a concise API that communicated what developers need to know without overcomplicating it. But converting a workflow into a diagram meant accounting for every pattern (whether it follows best practices or not) and edge case possible. As we discussed earlier, no step is explicitly sequential to any other step by default. If a workflow does not use <code>await</code> or <code>Promise.all()</code>, we assume that the steps will execute in the order in which they are encountered. But if a workflow includes <code>await</code>, <code>Promise</code>, or <code>Promise.all()</code>, we need a way to track those relationships.</p><p>We decided on tracking execution order, where each node has a <code>starts:</code> and <code>resolves:</code> field. The <code>starts</code> and <code>resolves</code> indices tell us when a promise started executing and when it ends, relative to the first promise that started without being immediately awaited. This correlates to vertical positioning in the diagram UI (i.e., all steps with <code>starts:1</code> will be inline). If steps are awaited when they are declared, then <code>starts</code> and <code>resolves</code> will be undefined, and the workflow will execute in the order in which the steps appear to the runtime.</p><p>While parsing, when we encounter an unawaited <code>Promise</code> or <code>Promise.all()</code>, each corresponding node is marked with an entry number, surfaced in the <code>starts</code> field. If we encounter an <code>await</code> on that promise, the entry number is incremented by one and saved as the exit number (which is the value in <code>resolves</code>). This lets us know which promises run at the same time and when they’ll complete in relation to each other.</p>
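<p>A simplified sketch of that bookkeeping: walk the statements in order, stamp unawaited promises with the current counter as <code>starts</code>, and on an <code>await</code> bump the counter and record it as <code>resolves</code>. This compresses the real logic, which also has to handle nesting and mixed awaited/unawaited steps:</p>

```python
def annotate(statements):
    """statements: ("start", names) creates unawaited promises;
    ("await", names) awaits previously started ones."""
    counter = 1
    marks: dict[str, dict[str, int]] = {}
    for kind, names in statements:
        if kind == "start":
            for n in names:
                # Unawaited promise: stamp the current entry number.
                marks[n] = {"starts": counter}
        else:  # "await"
            # Awaiting increments the counter; that becomes the exit number.
            counter += 1
            for n in names:
                marks[n]["resolves"] = counter
    return marks

# Two steps started without awaiting, then resolved together by an awaited
# Promise.all, end up inline: same starts index, same resolves index.
marks = annotate([("start", ["e", "f"]), ("await", ["e", "f"])])
assert marks == {
    "e": {"starts": 1, "resolves": 2},
    "f": {"starts": 1, "resolves": 2},
}
```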
            <pre><code>export class ImplicitParallelWorkflow extends WorkflowEntrypoint&lt;Env, Params&gt; {
 async run(event: WorkflowEvent&lt;Params&gt;, step: WorkflowStep) {
   const branchA = async () =&gt; {
     const a = step.do("task a", async () =&gt; "a"); //starts 1
     const b = step.do("task b", async () =&gt; "b"); //starts 1
     const c = await step.waitForEvent("task c", { type: "my-event", timeout: "1 hour" }); //starts 1 resolves 2
     await step.do("task d", async () =&gt; JSON.stringify(c)); //starts 2 resolves 3
     return Promise.all([a, b]); //resolves 3
   };

   const branchB = async () =&gt; {
     const e = step.do("task e", async () =&gt; "e"); //starts 1
     const f = step.do("task f", async () =&gt; "f"); //starts 1
     return Promise.all([e, f]); //resolves 2
   };

   await Promise.all([branchA(), branchB()]);

   await step.sleep("final sleep", 1000);
 }
}</code></pre>
            <p>You can see the steps’ alignment in the diagram:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EZJ38J3H55yH0OnT11vgg/6dde06725cd842725ee3af134b1505c0/image3.png" />
          </figure><p>After accounting for all of those patterns, we settled on the following list of node types:</p>
            <pre><code>| StepSleep
| StepDo
| StepWaitForEvent
| StepSleepUntil
| LoopNode
| ParallelNode
| TryNode
| BlockNode
| IfNode
| SwitchNode
| StartNode
| FunctionCall
| FunctionDef
| BreakNode;</code></pre>
            <p>Here are a few samples of API output for different behaviors: </p><p><code>function</code> call:</p>
            <pre><code>{
  "functions": {
    "runLoop": {
      "name": "runLoop",
      "nodes": []
    }
  }
}</code></pre>
            <p><code>if</code> condition branching to <code>step.do</code>:</p>
            <pre><code>{
  "type": "if",
  "branches": [
    {
      "condition": "loop &gt; maxLoops",
      "nodes": [
        {
          "type": "step_do",
          "name": "notify max loop reached",
          "config": {
            "retries": {
              "limit": 5,
              "delay": 1000,
              "backoff": "exponential"
            },
            "timeout": 10000
          },
          "nodes": []
        }
      ]
    }
  ]
}</code></pre>
            <p><code>parallel</code> with <code>step.do</code> and <code>waitForEvent</code>:</p>
            <pre><code>{
  "type": "parallel",
  "kind": "all",
  "nodes": [
    {
      "type": "step_do",
      "name": "correctness agent (loop ${...})",
      "config": {
        "retries": {
          "limit": 5,
          "delay": 1000,
          "backoff": "exponential"
        },
        "timeout": 10000
      },
      "nodes": [],
      "starts": 1
    },
...
    {
      "type": "step_wait_for_event",
      "name": "wait for user response (loop ${...})",
      "options": {
        "event_type": "user-response",
        "timeout": "unknown"
      },
      "starts": 3,
      "resolves": 4
    }
  ]
}</code></pre>
            
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Ultimately, the goal of these Workflow diagrams is to serve as a full-service debugging tool. That means you’ll be able to:</p><ul><li><p>Trace an execution through the graph in real time</p></li><li><p>Discover errors, wait for human-in-the-loop approvals, and skip steps for testing</p></li><li><p>Access visualizations in local development</p></li></ul><p>Check out the diagrams on your <a href="https://dash.cloudflare.com/?to=/:account/workers/workflows"><u>Workflow overview pages</u></a>. If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers community on Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Workflows]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4HOWpzOgT3eVU2wFa4adFU</guid>
            <dc:creator>André Venceslau</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[A closer look at Python Workflows, now in beta]]></title>
            <link>https://blog.cloudflare.com/python-workflows/</link>
            <pubDate>Mon, 10 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workflows, our durable execution engine for running multi-step applications, now supports Python. That means less friction, more possibilities, and another reason to build on Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p>Developers can <a href="https://blog.cloudflare.com/building-workflows-durable-execution-on-workers/"><u>already</u></a> use Cloudflare Workflows to build long-running, multi-step applications on Workers. Now, Python Workflows are here, meaning you can use your language of choice to orchestrate multi-step applications.</p><p>With <a href="https://developers.cloudflare.com/workflows/"><u>Workflows</u></a>, you can automate a sequence of idempotent steps in your application with built-in error handling and retry behavior. But Workflows were originally supported only in TypeScript. Since Python is the de facto language of choice for data pipelines, artificial intelligence/machine learning, and task automation – all of which heavily rely on orchestration – this created friction for many developers.</p><p>Over the years, we’ve been giving developers the tools to build these applications in Python, on Cloudflare. In 2020, we brought <a href="https://blog.cloudflare.com/cloudflare-workers-announces-broad-language-support/"><u>Python to Workers via Transcrypt</u></a> before directly integrating Python into <a href="https://github.com/cloudflare/workerd?cf_target_id=33101FA5C99A5BD54E7D452C9B282CD8"><u>workerd</u></a> in 2024. Earlier this year, we built support for <a href="https://developers.cloudflare.com/workers/languages/python/stdlib/"><u>CPython</u></a> along with <a href="https://pyodide.org/en/stable/usage/packages-in-pyodide.html"><u>any packages built in Pyodide</u></a>, like matplotlib and pandas, in Workers. Now, Python Workflows are supported as well, so developers can create robust applications using the language they know best.</p>
    <div>
      <h2>Why Python for Workflows?</h2>
      <a href="#why-python-for-workflows">
        
      </a>
    </div>
    <p>Imagine you’re training an <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>LLM</u></a>. You need to label the dataset, feed data, wait for the model to run, evaluate the loss, adjust the model, and repeat. Without automation, you’d need to start each step, monitor manually until completion, and then start the next one. Instead, you could use a workflow to orchestrate the training of the model, triggering each step pending the completion of its predecessor. For any manual adjustments needed, like evaluating the loss and adjusting the model accordingly, you can implement a step that notifies you and waits for the necessary input.</p><p>Consider data pipelines, which are a top Python use case for ingesting and processing data. By automating the data pipeline through a defined set of idempotent steps, developers can deploy a workflow that handles the entire data pipeline for them.</p><p>Take another example: building <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agents</u></a>, such as an agent to manage your groceries. Each week, you input your list of recipes, and the agent (1) compiles the list of necessary ingredients, (2) checks what ingredients you have left over from previous weeks, and (3) orders the differential for pickup from your local grocery store. 
Using a Workflow, this could look like:</p><ol><li><p><code>await step.wait_for_event()</code>: the user inputs the grocery list</p></li><li><p><code>step.do()</code>: compile the list of necessary ingredients</p></li><li><p><code>step.do()</code>: check the list of necessary ingredients against leftover ingredients</p></li><li><p><code>step.do()</code>: make an API call to place the order</p></li><li><p><code>step.do()</code>: proceed with payment</p></li></ol><p>Using workflows as a tool to <a href="https://agents.cloudflare.com/"><u>build agents on Cloudflare</u></a> can simplify agents’ architecture and improve their odds of reaching completion through individual step retries and state persistence. Support for Python Workflows means building agents with Python is easier than ever.</p>
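<p>The five steps above can be mocked end to end in plain Python to show the orchestration shape (all names and data here are hypothetical, and the real SDK calls are stubbed out):</p>

```python
def run_grocery_workflow(recipes: dict[str, list[str]], pantry: set[str]) -> list[str]:
    # 1. wait_for_event: the user's recipe list arrives as `recipes`.
    # 2. step.do: compile the full list of needed ingredients.
    needed = {item for ingredients in recipes.values() for item in ingredients}
    # 3. step.do: subtract what's left over from previous weeks.
    to_order = sorted(needed - pantry)
    # 4. step.do: place the order (stubbed out here).
    # 5. step.do: proceed with payment (stubbed out here).
    return to_order

order = run_grocery_workflow(
    {"pasta": ["noodles", "tomatoes", "basil"], "salad": ["lettuce", "tomatoes"]},
    pantry={"basil"},
)
assert order == ["lettuce", "noodles", "tomatoes"]
```

<p>In a real Workflow, each numbered comment becomes its own retried, persisted step, so a failed order placement would not recompute the ingredient list.</p>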
    <div>
      <h3>How Python Workflows work</h3>
      <a href="#how-python-workflows-work">
        
      </a>
    </div>
    <p>Cloudflare Workflows uses the underlying infrastructure that we created for durable execution, while providing an idiomatic way for Python users to write their workflows. In addition, we aimed for complete feature parity between the JavaScript and Python SDKs. This is possible because Cloudflare Workers supports Python directly in the runtime itself.</p>
    <div>
      <h4>Creating a Python Workflow</h4>
      <a href="#creating-a-python-workflow">
        
      </a>
    </div>
    <p>Cloudflare Workflows are fully built on top of <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>. Each element plays a part in storing Workflow metadata and instance-level information. For more detail on how the Workflows platform works, <a href="https://blog.cloudflare.com/building-workflows-durable-execution-on-workers/"><u>check out this blog post</u></a>.</p><p>At the very bottom of the Workflows control plane sits the user Worker, which is the <code>WorkflowEntrypoint</code>. When the Workflow instance is ready to run, the Workflow engine will call into the <code>run</code> method of the user Worker (in this case, a Python Worker) via RPC.</p><p>This is an example skeleton for a Workflow declaration, provided by the official documentation:</p>
            <pre><code>export class MyWorkflow extends WorkflowEntrypoint&lt;Env, Params&gt; {
  async run(event: WorkflowEvent&lt;Params&gt;, step: WorkflowStep) {
    // Steps here
  }
}</code></pre>
            <p>The <code>run</code> method, as illustrated above, provides a <a href="https://developers.cloudflare.com/workflows/build/workers-api/#workflowstep"><u>WorkflowStep</u></a> parameter that implements the durable execution APIs. This is what users rely on for at-most-once execution. These APIs are implemented in JavaScript and need to be accessed in the context of the Python Worker.</p><p>A <code>WorkflowStep</code> must cross the RPC barrier, meaning the engine (caller) exposes it as an <code>RpcTarget</code>. This setup allows the user's Workflow (callee) to substitute the parameter with a stub. This stub then enables the use of durable execution APIs for Workflows by RPCing back to the engine. To read more about RPC serialization and how functions can be passed between caller and callee, read the <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>Remote-Procedure call documentation</u></a>.</p><p>All of this is true for both Python and JavaScript Workflows, since we don’t really change how the user Worker is called from the Workflows side. However, in the Python case, there is another barrier: language bridging between Python and the JavaScript module. When an RPC request targets a Python Worker, a JavaScript entrypoint module is responsible for proxying the request to the Python script and returning the result to the caller. This process typically involves type translation before and after handling the request.</p>
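<p>The stub arrangement can be caricatured in a few lines of plain Python (the class names here are hypothetical, and the real hop is workerd's RPC system across isolates, not an in-process method call):</p>

```python
class Engine:
    """Stands in for the Workflows engine (the RPC caller)."""
    def do(self, name: str) -> str:
        return f"engine executed {name}"

class WorkflowStepStub:
    """What the callee receives: every call forwards back to the engine."""
    def __init__(self, engine: Engine) -> None:
        self._engine = engine

    def do(self, name: str) -> str:
        # In reality this forwarding is an RPC back to the engine's RpcTarget.
        return self._engine.do(name)

step = WorkflowStepStub(Engine())
assert step.do("my first step") == "engine executed my first step"
```

<p>The user's code only ever sees the stub, so the durable-execution bookkeeping stays on the engine's side of the boundary.</p>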
    <div>
      <h4>Overcoming the language barrier</h4>
      <a href="#overcoming-the-language-barrier">
        
      </a>
    </div>
    <p>Python Workers rely on <a href="https://pyodide.org/en/stable/"><u>Pyodide</u></a>, which is a port of CPython to WebAssembly. Pyodide provides a foreign function interface (FFI) to JavaScript, which allows Python code to call into JavaScript methods. This is the mechanism that allows other bindings and Python packages to work within the Workers platform. Therefore, we use this FFI layer not only to allow using the Workflow binding directly, but also to provide <code>WorkflowStep</code> methods in Python. In other words, because the runtime treats <code>WorkflowEntrypoint</code> as a special class, the <code>run</code> method is manually wrapped so that <code>WorkflowStep</code> is exposed as a <a href="https://pyodide.org/en/stable/usage/api/python-api/ffi.html?cf_target_id=B32B42023AAEDEF833BCC2D9FD6096A3#pyodide.ffi.JsProxy"><u>JsProxy</u></a> instead of being type translated like other JavaScript objects. Moreover, by wrapping the APIs from the perspective of the user Worker, we allow ourselves to make some adjustments to the overall development experience, instead of simply exposing a JavaScript SDK to a different language with different semantics. </p>
    <div>
      <h4>Making the Python Workflows SDK Pythonic</h4>
      <a href="#making-the-python-workflows-sdk-pythonic">
        
      </a>
    </div>
    <p>A big part of porting Workflows to Python is exposing an interface that Python users are familiar with and can use without friction, just as we do with our JavaScript APIs. Let's take a step back and look at a snippet of a Workflow definition written in TypeScript.</p>
            <pre><code>import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

export class MyWorkflow extends WorkflowEntrypoint {
    async run(event: WorkflowEvent&lt;YourEventType&gt;, step: WorkflowStep) {
        let state = await step.do("my first step", async () =&gt; {
            // Access your properties via event.payload
            let userEmail = event.payload.userEmail
            let createdTimestamp = event.payload.createdTimestamp
            return {"userEmail": userEmail, "createdTimestamp": createdTimestamp}
        })

        await step.sleep("my first sleep", "30 minutes");

        await step.waitForEvent&lt;EventType&gt;("receive example event", { type: "simple-event", timeout: "1 hour" })

        const developerWeek = Date.parse("22 Sept 2025 13:00:00 UTC");
        await step.sleepUntil("sleep until X times out", developerWeek)
    }
}</code></pre>
            <p>The Python implementation of the Workflows API requires a modification to the <code>do</code> method. Unlike other languages, Python does not easily support anonymous callbacks. The same behavior is typically achieved through <a href="https://www.w3schools.com/python/python_decorators.asp"><u>decorators</u></a>, which in this case let us intercept the method and expose it idiomatically. In other words, all parameters keep their original order, with the decorated method serving as the callback.</p><p>The methods <code>waitForEvent</code>, <code>sleep</code>, and <code>sleepUntil</code> retain their original signatures, with their names converted to snake case.</p><p>Here’s the corresponding Python version of the same workflow, achieving similar behavior:</p>
            <pre><code>from datetime import datetime, timezone

from workers import WorkflowEntrypoint

class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("my first step")
        async def my_first_step():
            user_email = event["payload"]["userEmail"]
            created_timestamp = event["payload"]["createdTimestamp"]
            return {
                "userEmail": user_email,
                "createdTimestamp": created_timestamp,
            }

        await my_first_step()

        await step.sleep("my first sleep", "30 minutes")

        await step.wait_for_event(
            "receive example event",
            "simple-event",
            timeout="1 hour",
        )

        developer_week = datetime(2025, 9, 22, 13, 0, 0, tzinfo=timezone.utc)
        await step.sleep_until("sleep until X times out", developer_week)</code></pre>
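<p>To make the decorator mechanism concrete, here is a minimal, self-contained sketch of how a <code>do</code>-style decorator factory can turn a named coroutine into a step callback. This is purely illustrative: the <code>FakeStep</code> class and its bookkeeping are invented for this example and are not Cloudflare's actual implementation, so the snippet runs in any Python interpreter.</p>

```python
import asyncio

class FakeStep:
    """Illustrative stand-in for WorkflowStep: records which steps ran."""
    def __init__(self):
        self.executed = []

    def do(self, name):
        # Decorator factory: the step name comes first, and the decorated
        # coroutine becomes the callback, mirroring do(name, callback) in JS.
        def decorator(fn):
            async def wrapper(*args, **kwargs):
                self.executed.append(name)  # a real engine would persist/retry here
                return await fn(*args, **kwargs)
            return wrapper
        return decorator

async def run(step):
    @step.do("my first step")
    async def my_first_step():
        return {"ok": True}
    return await my_first_step()

step = FakeStep()
result = asyncio.run(run(step))
print(step.executed, result)
```

<p>All parameters keep their original order; only the callback moves from being an argument to being the decorated function.</p>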
            
    <div>
      <h4>DAG Workflows</h4>
      <a href="#dag-workflows">
        
      </a>
    </div>
    <p>When designing Workflows, we’re often managing dependencies between steps, even when some of those tasks could run concurrently. Even if we’re not thinking about it in those terms, many Workflows have a directed acyclic graph (DAG) execution flow. Concurrency is achievable in the first iteration of Python Workflows (i.e., a minimal port to Python Workers) because Pyodide captures JavaScript thenables and proxies them into Python awaitables.</p><p>Consequently, <code>asyncio.gather</code> works as a counterpart to <code>Promise.all</code>. Although this is perfectly fine and ready to be used in the SDK, we also support a declarative approach.</p><p>One of the advantages of decorating the <code>do</code> method is that we can provide further abstractions on top of the original API and have them work on the entrypoint wrapper. Here’s an example of a Python API making use of the DAG capabilities introduced:</p>
            <pre><code>from workers import WorkflowEntrypoint

class PythonWorkflowDAG(WorkflowEntrypoint):
    async def run(self, event, step):

        @step.do('dependency 1')
        async def dep_1():
            # does stuff
            print('executing dep1')

        @step.do('dependency 2')
        async def dep_2():
            # does stuff
            print('executing dep2')

        @step.do('demo do', depends=[dep_1, dep_2], concurrent=True)
        async def final_step(res1=None, res2=None):
            # does stuff
            print('something')

        await final_step()</code></pre>
            <p>This kind of approach makes the Workflow declaration much cleaner, leaving state management to the Workflows engine data plane and to the Python Workers Workflow wrapper. Note that even though multiple steps can run with the same name, the engine will slightly modify each step’s name to ensure uniqueness. In Python Workflows, a dependency is considered resolved once the first step involving it has completed successfully.</p>
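<p>For comparison, here is what the imperative route with <code>asyncio.gather</code> looks like. Plain coroutines stand in for <code>@step.do</code>-decorated steps (the names and return values below are invented for illustration), so this sketch runs in any Python interpreter:</p>

```python
import asyncio

# Ordinary coroutines standing in for @step.do-decorated workflow steps.
async def dep_1():
    await asyncio.sleep(0)  # simulate asynchronous work
    return "result 1"

async def dep_2():
    await asyncio.sleep(0)
    return "result 2"

async def run():
    # asyncio.gather is the counterpart to Promise.all: both dependencies
    # run concurrently, and results come back in argument order.
    res1, res2 = await asyncio.gather(dep_1(), dep_2())
    return [res1, res2]

results = asyncio.run(run())
print(results)
```

<p>The declarative <code>depends=[...]</code> form above expresses the same fan-out, but lets the wrapper schedule the dependencies for you.</p>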
    <div>
      <h3>Try it out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Check out <a href="https://developers.cloudflare.com/workers/languages/python/"><u>writing Workers in Python</u></a> and <a href="https://developers.cloudflare.com/workflows/python/"><u>create your first Python Workflow</u></a> today! If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers community on Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Workflows]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Python]]></category>
            <guid isPermaLink="false">JmiSM0vpXaKtbQJ49ehaB</guid>
            <dc:creator>Caio Nogueira</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a better testing experience for Workflows, our durable execution engine for multi-step applications]]></title>
            <link>https://blog.cloudflare.com/better-testing-for-workflows/</link>
            <pubDate>Tue, 04 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ End-to-end testing for Cloudflare Workflows was challenging. We're introducing first-class support for Workflows in cloudflare:test, enabling full introspection, mocking, and isolated, reliable tests for your most complex applications. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/developer-platform/products/workflows/"><u>Cloudflare Workflows</u></a> is our take on "Durable Execution." They provide a serverless engine, powered by the <a href="https://www.cloudflare.com/developer-platform/"><u>Cloudflare Developer Platform</u></a>, for building long-running, multi-step applications that persist through failures. When Workflows became <a href="https://blog.cloudflare.com/workflows-ga-production-ready-durable-execution/"><u>generally available</u></a> earlier this year, they allowed developers to orchestrate complex processes that would be difficult or impossible to manage with traditional stateless functions. Workflows handle state, retries, and long waits, allowing you to focus on your business logic.</p><p>However, complex orchestrations require robust testing to be reliable. Until now, testing Workflows has been a black-box process. Although you could test whether a Workflow instance reached completion through an <code>await</code> on its status, there was no visibility into the intermediate steps. This made debugging really difficult. Did the payment processing step succeed? Did the confirmation email step receive the correct data? You couldn't be sure without inspecting external systems or logs. </p>
    <div>
      <h3>Why was this necessary?</h3>
      <a href="#why-was-this-necessary">
        
      </a>
    </div>
    <p>As developers ourselves, we understand the need to ensure reliable code, and we heard your feedback loud and clear: the developer experience for testing Workflows needed to be better.</p><p>The black box nature of testing was one part of the problem. Beyond that, though, the limited testing offered came at a high cost. If you added a workflow to your project, even if you weren't testing the workflow directly, you were required to disable isolated storage because we couldn't guarantee isolation between tests. Isolated storage is a vitest-pool-workers feature to guarantee that each test runs in a clean, predictable environment, free from the side effects of other tests. Being forced to have it disabled meant that state could leak between tests, leading to flaky, unpredictable, and hard-to-debug failures.</p><p>This created a difficult choice for developers building complex applications. If your project used <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, and <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a> alongside Workflows, you had to either abandon isolated testing for your <i>entire project</i> or skip testing. This friction resulted in a poor testing experience, which in turn discouraged the adoption of Workflows. Solving this wasn't just an improvement, it was a critical <i>step</i> in making Workflows part of any well-tested Cloudflare application.</p>
    <div>
      <h3>Introducing isolated testing for Workflows</h3>
      <a href="#introducing-isolated-testing-for-workflows">
        
      </a>
    </div>
    <p>We're introducing a new set of APIs that enable comprehensive, granular, and isolated testing for your Workflows, all running locally and offline with <code>vitest-pool-workers</code>, our testing framework that supports running tests in the Workers runtime <code>workerd</code>. This enables fast, reliable, and cheap test runs that don't depend on a network connection.</p><p>They are available through the <code>cloudflare:test</code> module, with <code>@cloudflare/vitest-pool-workers</code> version <b>0.9.0</b> and above. The new test module provides two primary functions to introspect your Workflows:</p><ul><li><p><code>introspectWorkflowInstance</code>: useful for unit tests with known instance IDs</p></li><li><p><code>introspectWorkflow</code>: useful for integration tests where IDs are typically generated dynamically.</p></li></ul><p>Let's walk through a practical example.</p>
    <div>
      <h3>A practical example: testing a blog moderation workflow</h3>
      <a href="#a-practical-example-testing-a-blog-moderation-workflow">
        
      </a>
    </div>
    <p>Imagine a simple Workflow for moderating a blog. When a user submits a comment, the Workflow requests a review from Workers AI. Based on the violation score returned, it then waits for a moderator to approve or deny the comment. If approved, it calls a <code>step.do</code> to publish the comment via an external API.</p><p>Testing this without our new APIs would be impossible. You’d have no direct way to simulate the steps’ outcomes or the moderator’s approval. Now, you can mock everything.</p><p>Here’s the test code using <code>introspectWorkflowInstance</code> with a known instance ID:</p>
            <pre><code>import { env, introspectWorkflowInstance } from "cloudflare:test";

it("should mock an ambiguous score, approve comment and complete", async () =&gt; {
   // CONFIG
   await using instance = await introspectWorkflowInstance(
       env.MODERATOR,
       "my-workflow-instance-id-123"
   );
   await instance.modify(async (m) =&gt; {
       await m.mockStepResult({ name: "AI content scan" }, { violationScore: 50 });
       await m.mockEvent({ 
           type: "moderation-approval", 
           payload: { action: "approved" },
       });
       await m.mockStepResult({ name: "publish comment" }, { status: "published" });
   });

   await env.MODERATOR.create({ id: "my-workflow-instance-id-123" });
   
   // ASSERTIONS
   expect(await instance.waitForStepResult({ name: "AI content scan" })).toEqual(
       { violationScore: 50 }
   );
   expect(
       await instance.waitForStepResult({ name: "publish comment" })
   ).toEqual({ status: "published" });

   await expect(instance.waitForStatus("complete")).resolves.not.toThrow();
});</code></pre>
            <p>This test mocks the outcomes of steps that require external API calls, such as the 'AI content scan', which calls <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a>, and the 'publish comment' step, which calls an external blog API.</p><p>If the instance ID is not known in advance, for example because a Worker request starts one or more Workflow instances with randomly generated IDs, you can call <code>introspectWorkflow(env.MY_WORKFLOW)</code>. Here’s the test code for that scenario, where only one Workflow instance is created:</p>
            <pre><code>it("workflow mock a non-violation score and be successful", async () =&gt; {
   // CONFIG
   await using introspector = await introspectWorkflow(env.MODERATOR);
   await introspector.modifyAll(async (m) =&gt; {
       await m.disableSleeps();
       await m.mockStepResult({ name: "AI content scan" }, { violationScore: 0 });
   });

   await SELF.fetch(`https://mock-worker.local/moderate`);

   const instances = introspector.get();
   expect(instances.length).toBe(1);

   // ASSERTIONS
   const instance = instances[0];
   expect(await instance.waitForStepResult({ name: "AI content scan"  })).toEqual({ violationScore: 0 });
   await expect(instance.waitForStatus("complete")).resolves.not.toThrow();
});</code></pre>
            <p>Notice how in both examples we’re calling the introspectors with <code>await using</code>: this is the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Resource_management#the_using_and_await_using_declarations"><u>Explicit Resource Management</u></a> syntax from modern JavaScript. It is crucial here because when the introspector objects go out of scope at the end of the test, their disposal methods are automatically called. This is how we ensure each test works with its own isolated storage.</p><p>The <code>modify</code> and <code>modifyAll</code> functions are the gateway to controlling instances. Inside their callbacks, you get access to a modifier object with methods to inject behavior, such as mocking step outcomes, mocking events, and disabling sleeps.</p><p>You can find detailed documentation on the <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/#workflows"><u>Workers Cloudflare Docs</u></a>.</p><p><b>How we connected Vitest to the Workflows Engine</b></p><p>To understand the solution, you first need to understand the local architecture. When you run <code>wrangler dev</code>, your Workflows are powered by Miniflare, a simulator for testing Cloudflare Workers, and <code>workerd</code>. Each running Workflow instance is backed by its own SQLite Durable Object, which we call the "Engine DO". This Engine DO is responsible for executing steps, persisting state, and managing the instance's lifecycle. It lives inside the local isolated Workers runtime.</p><p>Meanwhile, the Vitest test runner is a separate Node.js process living outside of <code>workerd</code>. This is why we have a custom Vitest pool, <code>vitest-pool-workers</code>, that allows tests to run inside <code>workerd</code>. It includes a Runner Worker, a Worker that runs the tests with bindings to everything specified in the user's <code>wrangler.json</code> file. This Worker has access to the APIs under the <code>cloudflare:test</code> module, and it communicates with Node.js through a special DO called the Runner Object via WebSocket/RPC.</p><p>The first approach we considered was to use the test Runner Worker. In its current state, the Runner Worker has access to the Workflow bindings for the Workflows defined in the wrangler file. We considered also binding each Workflow's Engine DO namespace to this Runner Worker. This would give <code>vitest-pool-workers</code> direct access to the Engine DOs, making it possible to call Engine methods directly. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ptKRqwpfvK1dxY6T5Kuin/fbf92915b2d2a95542bf6bec8addd5ad/image1.png" />
          </figure><p>While promising, this approach would have required undesirable changes to the core of Miniflare and vitest-pool-workers, making it too invasive for this single feature. </p><p>Firstly, we would have needed to add a new <i>unsafe</i> field to Miniflare's Durable Objects. Its sole purpose would be to specify the service name of our Engines, preventing Miniflare from applying its default user prefix which would otherwise prevent the Durable Objects from being found.</p><p>Secondly, vitest-pool-workers would have been forced to bind every Engine DO from the Workflows in the project to its runner, even those not being tested. This would introduce unwanted bindings into the test environment, requiring an additional cleanup to ensure they were not exposed to the user's tests env.</p><p><b>The breakthrough</b></p><p>The solution is a combination of privileged local-only APIs and Remote Procedure Calls (RPC).</p><p>First, we added a set of <code>unsafe</code> functions to the <i>local</i> implementation of the Workflows binding, functions that are not available in the production environment. They act as a controlled access point, accessible from the test environment, allowing the test runner to get a stub to a specific Engine DO by providing its instance ID.</p><p>Once the test runner has this stub, it uses RPC to call specific, trusted methods on the Engine DO via a special <code>RpcTarget</code> called <code>WorkflowInstanceModifier</code>. Any class that extends <code>RpcTarget</code> has its objects replaced by a stub. Calling a method on this stub, in turn, makes an RPC back to the original object.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AObAsJuBplii3aeqMw2bn/74b21880b09a293fef6f84de1ae1318e/image2.png" />
          </figure><p>This simpler approach is far less invasive because it's confined to the Workflows environment, which also ensures any future feature changes are safely isolated.</p><p><b>Introspecting Workflows with unknown IDs</b></p><p>When creating Workflows instances (either by <code>create()</code> or <code>createBatch()</code>) developers can provide a specific ID or have it automatically generated for them. This ID identifies the Workflow instance and is then used to create the associated Engine DO ID.</p><p>The logical starting point for implementation was <code>introspectWorkflowInstance(binding, instanceID)</code>, as the instance ID is known in advance. This allows us to generate the Engine DO ID required to identify the engine associated with that Workflow instance.</p><p>But often, one part of your application (like an HTTP endpoint) will create a Workflow instance with a randomly generated ID. How can we introspect an instance when we don't know its ID until after it's created?</p><p>The answer was to use a powerful feature of JavaScript: <code>Proxy</code> objects.</p><p>When you use <code>introspectWorkflow(binding)</code>, we wrap the Workflow binding in a Proxy. This proxy non-destructively intercepts all calls to the binding, specifically looking for <code>.create()</code> and <code>.createBatch()</code>. When your test triggers a workflow creation, the proxy inspects the call. It captures the instance ID — either one you provided or the random one generated — and immediately sets up the introspection on that ID, applying all the modifications you defined in the <code>modifyAll</code> call. The original creation call then proceeds as normal.</p>
            <pre><code>env[workflow] = new Proxy(env[workflow], {
  get(target, prop) {
    if (prop === "create") {
      return new Proxy(target.create, {
        async apply(_fn, _this, [opts = {}]) {
          // 1. Ensure an ID exists
          const optsWithId = "id" in opts ? opts : { id: crypto.randomUUID(), ...opts };

          // 2. Apply test modifications before creation
          await introspectAndModifyInstance(optsWithId.id);

          // 3. Call the original 'create' method
          return target.create(optsWithId);
        },
      });
    }

    // Same logic for createBatch(); everything else passes through
    return Reflect.get(target, prop);
  },
});</code></pre>
            <p>When the <code>await using</code> block from <code>introspectWorkflow()</code> finishes, or the <code>dispose()</code> method is called at the end of the test, the introspector is disposed of, and the proxy is removed, leaving the binding in its original state. It’s a low-impact approach that prioritizes developer experience and long-term maintainability.</p>
    <div>
      <h3>Get started with testing Workflows</h3>
      <a href="#get-started-with-testing-workflows">
        
      </a>
    </div>
    <p>Ready to add tests to your Workflows? Here’s how to get started:</p><ol><li><p><b>Update your dependencies:</b> Make sure you are using <code>@cloudflare/vitest-pool-workers</code> version <b>0.9.0 </b>or newer. Run the following command in your project: <code>npm install @cloudflare/vitest-pool-workers@latest</code></p></li><li><p><b>Configure your test environment:</b> If you're new to testing on Workers, follow our <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/"><u>guide to write your first test</u></a>.</p></li></ol><p><b>Start writing tests</b>: Import <code>introspectWorkflowInstance</code> or <code>introspectWorkflow</code> from <code>cloudflare:test</code> in your test files and use the patterns shown in this post to mock, control, and assert on your Workflow's behavior. Also check out the official <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/test-apis/#workflows"><u>API reference</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workflows]]></category>
            <guid isPermaLink="false">5Kq3w0WQ8bFIvLmxsDpIjO</guid>
            <dc:creator>Olga Silva</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint]]></title>
            <link>https://blog.cloudflare.com/ai-gateway-aug-2025-refresh/</link>
            <pubDate>Wed, 27 Aug 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint. ]]></description>
            <content:encoded><![CDATA[ <p>Getting the observability you need is challenging enough when the code is deterministic, but AI presents a new challenge — a core part of your user’s experience now relies on a non-deterministic engine that produces unpredictable outputs. On top of that, many factors can influence the results, from the model to the system prompt. And you still have to worry about performance, reliability, and costs. </p><p>Solving performance, reliability, and observability challenges is exactly what Cloudflare was built for, and two years ago, with the introduction of AI Gateway, we set out to extend the same levels of control to our users in the age of AI. </p><p>Today, we’re excited to announce several features that make building AI applications easier and more manageable: unified billing, secure key storage, dynamic routing, and security controls with Data Loss Prevention (DLP). This means that AI Gateway becomes your go-to place to control costs and API keys, route between different models and providers, and manage your AI traffic. Check out our new <a href="https://ai.cloudflare.com/gateway"><u>AI Gateway landing page</u></a> for more information at a glance.</p>
    <div>
      <h2>Connect to all your favorite AI providers</h2>
      <a href="#connect-to-all-your-favorite-ai-providers">
        
      </a>
    </div>
    <p>When using an AI provider, you typically have to sign up for an account, get an API key, manage rate limits, top up credits — all within an individual provider’s dashboard. Multiply that for each of the different providers you might use, and you’ll soon be left with an administrative headache of bills and keys to manage.</p><p>With <a href="https://www.cloudflare.com/developer-platform/products/ai-gateway/"><u>AI Gateway</u></a>, you can now connect to major AI providers directly through Cloudflare and manage everything through a single control plane. We’re excited to partner with Anthropic, Google, Groq, OpenAI, and xAI to provide Cloudflare users with access to their models directly through Cloudflare. With this, you’ll have access to over 350 models across six different providers.</p><p>You can now get billed for usage across different providers directly through your Cloudflare account. This feature is available for Workers Paid users, where you’ll be able to add credits to your Cloudflare account and use them for <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>AI inference</u></a> with all the supported providers. You’ll be able to see real-time usage statistics and manage your credits through the AI Gateway dashboard. Your AI Gateway inference usage will also be documented in your monthly Cloudflare invoice. No more signing up and paying for each individual model provider account. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4t2j5frheaYOLznprTL58p/f0fb4c6de2aad70c82a23bc35873ea50/image1.png" />
          </figure><p>Usage rates are based on then-current list prices from model providers — all you will need to cover is the transaction fee as you load credits into your account. Since this is one of the first times we’re launching a credit-based billing system at Cloudflare, we’re releasing this feature in Closed Beta — sign up for access <a href="https://forms.gle/3LGAzN2NDXqtbjKR9"><u>here</u></a>.</p>
    <div>
      <h3>BYO Provider Keys, now with Cloudflare Secrets Store</h3>
      <a href="#byo-provider-keys-now-with-cloudflare-secrets-store">
        
      </a>
    </div>
    <p>Although we’ve introduced unified billing, some users might still want to manage their own accounts and keys with providers. We’re happy to say that AI Gateway will continue supporting our <a href="https://developers.cloudflare.com/ai-gateway/configuration/bring-your-own-keys/"><u>BYO Key feature</u></a>, improving the experience of BYO Provider Keys by integrating with Cloudflare’s secrets management product, <a href="https://developers.cloudflare.com/secrets-store/"><u>Secrets Store</u></a>. Now, you can seamlessly and securely store your keys in one centralized location and distribute them without relying on plain text. Secrets Store uses a two-level key hierarchy with AES encryption to ensure that your secret stays safe, while maintaining low latency through our global configuration system, <a href="https://blog.cloudflare.com/quicksilver-v2-evolution-of-a-globally-distributed-key-value-store-part-1/"><u>Quicksilver</u></a>.</p><p>You can now save and manage keys directly through your AI Gateway dashboard or through the Secrets Store <a href="http://dash.cloudflare.com/?to=/:account/secrets-store"><u>dashboard</u></a>, <a href="https://developers.cloudflare.com/api/resources/secrets_store/subresources/stores/subresources/secrets/methods/create/"><u>API</u></a>, or <a href="https://developers.cloudflare.com/workers/wrangler/commands/#secrets-store-secret"><u>Wrangler</u></a> by using the new <b>AI Gateway scope</b>. Scoping your secrets to AI Gateway ensures that only this specific service can access your keys, meaning that the secret cannot be used in a Workers binding or anywhere else on Cloudflare’s platform.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hiSSQi2lQGWQnGYe4e9p1/dadc4fde865010d9e263badb75847992/2.png" />
          </figure><p>You can pass your AI provider keys without including them directly in the request header. Instead of including the actual value, you can use the Secrets Store reference: </p>
            <pre><code>curl -X POST https://gateway.ai.cloudflare.com/v1/&lt;ACCOUNT_ID&gt;/my-gateway/anthropic/v1/messages \
 --header 'cf-aig-authorization: CLOUDFLARE_AI_GATEWAY_TOKEN' \
 --header 'anthropic-version: 2023-06-01' \
 --header 'Content-Type: application/json' \
 --data '{"model": "claude-3-opus-20240229", "messages": [{"role": "user", "content": "What is Cloudflare?"}]}'</code></pre>
            <p>Or, using JavaScript:</p>
            <pre><code>import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: "CLOUDFLARE_AI_GATEWAY_TOKEN",
  baseURL: "https://gateway.ai.cloudflare.com/v1/&lt;ACCOUNT_ID&gt;/my-gateway/anthropic",
});

const message = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  messages: [{role: "user", content: "What is Cloudflare?"}],
  max_tokens: 1024
});</code></pre>
            <p>By using Secrets Store to deploy your secrets, you no longer need to give every developer access to every key — instead, you can rely on Secrets Store’s <a href="https://developers.cloudflare.com/secrets-store/access-control/"><u>role-based access control</u></a> to further lock down these sensitive values. For example, you might want your security administrators to have Secrets Store admin permissions so that they can create, update, and delete the keys when necessary. With Cloudflare <a href="https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/audit_logs/?cf_target_id=1C767B900C4419A313C249A5D99921FB"><u>audit logging</u></a>, all such actions will be logged so you know exactly who did what and when. Your developers, on the other hand, might only need Deploy permissions, so they can reference the values in code, whether that is a Worker or AI Gateway or both. This way, you reduce the risk of the secret getting leaked accidentally or intentionally by a malicious actor. This also allows you to update your provider keys in one place and automatically propagate that value to any AI Gateway using those values, simplifying the management. </p>
    <div>
      <h3>Unified Request/Response</h3>
      <a href="#unified-request-response">
        
      </a>
    </div>
    <p>We made it super easy for people to try out different AI models, and the developer experience should match. Each provider can have slight differences in how it expects requests to be sent, so we’re excited to launch an automatic translation layer between providers. When you send a request through AI Gateway, it just works, no matter what provider or model you use.</p>
            <pre><code>import OpenAI from "openai";
const client = new OpenAI({
  apiKey: "YOUR_PROVIDER_API_KEY", // Provider API key
  // NOTE: the OpenAI client automatically adds /chat/completions to the end of the URL, you should not add it yourself.
  baseURL:
    "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat",
});

const response = await client.chat.completions.create({
  model: "google-ai-studio/gemini-2.0-flash",
  messages: [{ role: "user", content: "What is Cloudflare?" }],
});

console.log(response.choices[0].message.content);</code></pre>
            
    <div>
      <h2>Dynamic Routes</h2>
      <a href="#dynamic-routes">
        
      </a>
    </div>
    <p>When we first launched <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a>, it was an easy way for people to intercept HTTP requests and customize actions based on different attributes. We think the same customization is necessary for AI traffic, so we’re launching <a href="https://developers.cloudflare.com/ai-gateway/features/dynamic-routing/"><u>Dynamic Routes</u></a> in AI Gateway.</p><p>Dynamic Routes lets you define actions based on different request attributes. If you have free users, maybe you want to rate-limit them to a certain number of requests per second (RPS) or a certain dollar spend. Or maybe you want to conduct an A/B test and split 50% of traffic to Model A and 50% of traffic to Model B. You might also want to chain several models in a row, like adding custom guardrails or enhancing a prompt before it goes to another model. All of this is possible with Dynamic Routes!</p><p>We’ve built a slick UI in the AI Gateway dashboard where you can define simple if/else interactions based on request attributes or a percentage split. Once you define a route, you’ll use the route as the “model” name in your input JSON and we will manage the traffic as you defined. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qLp4KT8ASCLRv2pyM2kxR/3151e32afa4d8447ae07a5a8fb09a9b6/3.png" />
          </figure>
            <pre><code>import OpenAI from "openai";

const cloudflareToken = "CF_AIG_TOKEN";
const accountId = "{account_id}";
const gatewayId = "{gateway_id}";
const baseURL = `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}`;

const openai = new OpenAI({
  apiKey: cloudflareToken,
  baseURL,
});

try {
  const model = "dynamic/&lt;your-dynamic-route-name&gt;";
  const messages = [{ role: "user", content: "What is a neuron?" }];
  const chatCompletion = await openai.chat.completions.create({
    model,
    messages,
  });
  const response = chatCompletion.choices[0].message;
  console.log(response);
} catch (e) {
  console.error(e);
}</code></pre>
            
    <div>
      <h2>Built-in security with Firewall in AI Gateway</h2>
      <a href="#built-in-security-with-firewall-in-ai-gateway">
        
      </a>
    </div>
    <p>Earlier this year we announced <a href="https://developers.cloudflare.com/changelog/2025-02-26-guardrails/"><u>Guardrails</u></a> in AI Gateway, and now we’re expanding our security capabilities to include Data Loss Prevention (DLP) scanning in AI Gateway’s Firewall. With this, you can select the DLP profiles you are interested in blocking or flagging, and we will scan requests for matching content. DLP profiles include general categories like “Financial Information” and “Social Security, Insurance, Tax and Identifier Numbers”, which everyone has access to with a free Zero Trust account. If you would like to safeguard text that is unique to your business, the upgraded Zero Trust plan allows you to create custom DLP profiles to catch that sensitive data.</p>
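<p>As an illustration of the kind of matching a DLP profile performs, here is a toy sketch in JavaScript. The profile names, patterns, and block/flag logic are simplified assumptions for illustration, not Cloudflare’s actual detection engine:</p>

```javascript
// Toy DLP-style scan: match request text against profile patterns and
// decide whether to block, flag, or allow. Profiles/patterns are
// simplified examples, not Cloudflare's detection logic.
const profiles = [
  { name: "Financial Information", pattern: /\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b/, action: "block" },
  { name: "US Social Security Number", pattern: /\b\d{3}-\d{2}-\d{4}\b/, action: "flag" },
];

function scan(body) {
  const matched = profiles.filter((p) => p.pattern.test(body));
  return {
    matches: matched.map((p) => p.name),
    // A single "block" profile outweighs any number of "flag" profiles.
    action: matched.some((p) => p.action === "block")
      ? "block"
      : matched.length > 0
        ? "flag"
        : "allow",
  };
}

console.log(scan("my card is 4111 1111 1111 1111").action); // "block"
console.log(scan("ssn: 123-45-6789").action);               // "flag"
console.log(scan("hello world").action);                    // "allow"
```

A real engine also handles validation (e.g. checksums), context, and structured formats, but the block-wins-over-flag decision shown here mirrors the admin controls described above.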
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yti8oy4TF01EdZMtYN1If/d2f3bd804873644862fbd61b07d3574a/4.png" />
          </figure><p>Because false positives and grey-area situations happen, we give admins control over whether to fully block or just alert on DLP matches. This allows administrators to monitor for potential issues without creating roadblocks for their users. Each log in AI Gateway now includes details about the DLP profiles matched on your request and the action that was taken:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pDdqy8bVmsiyjm4sg2pkG/ff97d9069e200fb859c1dc2daed8e4fa/5.png" />
          </figure>
    <div>
      <h2>More coming soon…</h2>
      <a href="#more-coming-soon">
        
      </a>
    </div>
    <p>If you think about the history of Cloudflare, you’ll notice similar patterns that we’re following for the new vision for AI Gateway. We want developers of AI applications to be able to have simple interconnectivity, observability, security, customizable actions, and more — something that Cloudflare has a proven track record of accomplishing for global Internet traffic. We see AI Gateway as a natural extension of Cloudflare’s mission, and we’re excited to make it come to life.</p><p>We’ve got more launches up our sleeves, but we couldn’t wait to get these first handful of features into your hands. Read up about it in our <a href="https://developers.cloudflare.com/ai-gateway/"><u>developer docs</u></a>, <a href="https://developers.cloudflare.com/ai-gateway/get-started/"><u>give it a try</u></a>, and let us know what you think. If you want to explore larger deployments, <a href="https://www.cloudflare.com/plans/enterprise/contact/?utm_medium=referral&amp;utm_source=blog&amp;utm_campaign=2025-q3-acq-gbl-connectivity-ge-ge-general-ai_week_blog"><u>reach out for a consultation </u></a>with Cloudflare experts.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/LTpdSaZMBbdOzASW8ggoS/6610f437d955174d7f7f1212617a4365/6.png" />
          </figure><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">6O1tkxTcxxG9hgxI8X9kFH</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Abhishek Kankani</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[Vulnerability disclosure on SSL for SaaS v1 (Managed CNAME)]]></title>
            <link>https://blog.cloudflare.com/vulnerability-disclosure-on-ssl-for-saas-v1-managed-cname/</link>
            <pubDate>Fri, 01 Aug 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ An upcoming vulnerability disclosure in Cloudflare’s SSL for SaaS v1 is detailed, explaining the steps we’ve taken towards deprecation. ]]></description>
            <content:encoded><![CDATA[ <p>Earlier this year, a group of external researchers identified and reported a vulnerability in Cloudflare’s SSL for SaaS v1 (Managed CNAME) product offering through Cloudflare’s <a href="https://hackerone.com/cloudflare?type=team"><u>bug bounty</u></a> program. We officially deprecated SSL for SaaS v1 in 2021; however, some customers received extensions for extenuating circumstances that prevented them from migrating to SSL for SaaS v2 (Cloudflare for SaaS). We have continually worked with the remaining customers to migrate them onto Cloudflare for SaaS over the past four years and have successfully migrated the vast majority of these customers. For most of our customers, there is no action required; for the very small number of SaaS v1 customers, we are actively working to help you migrate to SSL for SaaS v2 (Cloudflare for SaaS).</p>
    <div>
      <h2>Background on SSL for SaaS v1 at Cloudflare</h2>
      <a href="#background-on-ssl-for-saas-v1-at-cloudflare">
        
      </a>
    </div>
    <p>Back in 2017, Cloudflare <a href="https://blog.cloudflare.com/introducing-ssl-for-saas/"><u>announced SSL for SaaS</u></a>, a product that allows SaaS providers to extend the benefits of Cloudflare security and performance to their end customers. Using a “Managed CNAME” configuration, providers could bring their customer’s domain onto Cloudflare. In the first version of SSL for SaaS (v1), the traffic for Custom Hostnames is proxied to the origin based on the IP addresses assigned to the zone. In this Managed CNAME configuration, the end customers simply pointed their domains to the SaaS provider origin using a CNAME record. The customer’s origin would then be configured to accept traffic from these hostnames. </p>
    <div>
      <h2>What are the security concerns with v1 (Managed CNAME)?</h2>
      <a href="#what-are-the-security-concerns-with-v1-managed-cname">
        
      </a>
    </div>
    <p>While SSL for SaaS v1 enabled broad adoption of Cloudflare for end customer domains, its architecture introduced a subtle but important security risk – one that motivated us to build Cloudflare for SaaS. </p><p>As adoption scaled, so did our understanding of the security and operational limitations of SSL for SaaS v1. The architecture depended on IP-based routing and didn’t verify domain ownership before proxying traffic. That meant that any custom hostname pointed to the correct IP could be served through Cloudflare — even if ownership hadn’t been proven. While this produced the desired functionality, this design introduced risks and created friction when customers needed to make changes without downtime. </p><p>A malicious Cloudflare user aware of another customer's Managed CNAME configuration (via social engineering or publicly available information) could abuse the way SSL for SaaS v1 handles host header redirects, combining DNS manipulation with a man-in-the-middle attack, because Cloudflare serves a valid TLS certificate for the Managed CNAME.</p><p>For regular connections to Cloudflare, the certificate served by Cloudflare is determined by the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>SNI provided by the client in the TLS handshake</u></a>, while the zone configuration applied to a request is determined based on the host-header of the HTTP request.</p><p>In contrast, SSL for SaaS v1/Managed CNAME setups work differently. The certificate served by Cloudflare is still based on the TLS SNI, but the zone configuration is determined solely based on the specific Cloudflare anycast IP address the client connected to.</p><p>For example, let’s assume that <code>192.0.2.1</code> is the anycast IP address assigned to a SaaS provider. All connections to this IP address will be routed to the SaaS provider's origin server, irrespective of the host-header in the HTTP request. This means that for the following request:</p>
            <pre><code>$ curl --connect-to ::192.0.2.1: https://www.cloudflare.com</code></pre>
            <p>The certificate served by Cloudflare will be valid for <a href="http://www.cloudflare.com"><u>www.cloudflare.com</u></a>, but the request will not be sent to the origin server of <a href="http://www.cloudflare.com"><u>www.cloudflare.com</u></a>. It will instead be sent to the origin server of the SaaS provider assigned to the <code>192.0.2.1</code> IP address.</p><p>While the likelihood of exploiting this vulnerability is low and requires multiple complex conditions to be met, the vulnerability can be paired with other issues and potentially exploit other Cloudflare customers if:</p><ol><li><p>The adversary is able to perform <a href="https://www.cloudflare.com/learning/dns/dns-cache-poisoning/"><u>DNS poisoning</u></a> on the target domain to change the IP address that the end-user connects to when visiting the target domain</p></li><li><p>The adversary is able to place a malicious payload on the Managed CNAME customer’s website, or discovers an existing cross-site scripting vulnerability on the website</p></li></ol>
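<p>The difference between the two routing models can be sketched as a toy lookup in JavaScript. The IPs, hostnames, and origin names below are hypothetical, and real routing is far more involved, but it shows why v1 serves the SaaS origin regardless of the host header:</p>

```javascript
// Toy model of the routing difference between SSL for SaaS v1 (IP-based)
// and v2 (verified-hostname-based). All names/IPs are hypothetical.
const v1Routes = new Map([
  ["192.0.2.1", "saas-provider-origin.example"], // anycast IP -> SaaS origin
]);
const verifiedHostnames = new Map([
  ["app.customer.example", "saas-provider-origin.example"], // passed hostname validation
]);

function routeV1(destIp, hostHeader) {
  // v1: the HTTP host header is ignored; only the anycast IP decides the origin.
  return v1Routes.get(destIp) ?? null;
}

function routeV2(destIp, hostHeader) {
  // v2: traffic is proxied only for hostnames that passed verification.
  return verifiedHostnames.get(hostHeader) ?? null;
}

// A request for www.cloudflare.com aimed (via DNS poisoning) at the SaaS IP:
console.log(routeV1("192.0.2.1", "www.cloudflare.com")); // "saas-provider-origin.example"
console.log(routeV2("192.0.2.1", "www.cloudflare.com")); // null, hostname not verified
```

Under v1, the spoofed request still reaches the SaaS provider’s origin; under v2, an unverified hostname is simply not routed, which is the class of vulnerability the migration eliminates.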
    <div>
      <h2>Mitigation: A Phased Transition</h2>
      <a href="#mitigation-a-phased-transition">
        
      </a>
    </div>
    <p>To address these challenges, we launched SSL for SaaS v2 (Cloudflare for SaaS) and <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/reference/versioning/"><u>deprecated SSL for SaaS v1</u></a> in 2021. Cloudflare for SaaS transitioned away from IP-based routing towards a verified custom hostname model. Now, custom hostnames must pass a <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-validation/"><u>hostname verification step</u></a> alongside SSL certificate validation to proxy to the customer origin. This improves security by limiting origin access to authorized hostnames and reduces downtime through <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-validation/pre-validation/"><u>hostname pre-validation</u></a>, which allows customers to verify ownership before traffic is proxied through Cloudflare.</p><p>When Cloudflare for SaaS became generally available, we began a careful and deliberate deprecation of the original architecture. Starting in March 2021, we notified all v1 users of the then-upcoming September 2021 sunset in favor of v2, with instructions to migrate. Although we officially deprecated Managed CNAME, some customers were granted exceptions and various zones remained on SSL for SaaS v1. In the midst of our continued efforts to migrate all customers, Cloudflare was notified this year through our Bug Bounty program that an external researcher had identified the SSL for SaaS v1 vulnerabilities.</p><p>The majority of customers have successfully migrated to the modern v2 setup. For those few that require more time to migrate, we've implemented compensating controls to limit the potential scope and reach of this issue for the remaining v1 users. 
Specifically:</p><ul><li><p>This feature is unavailable for new customer accounts, and new zones within existing customer accounts, to configure via the UI or API</p></li><li><p>Cloudflare actively maintains an allowlist of zones &amp; customers that currently use the v1 service</p></li></ul><p>We have also implemented WAF custom rules configurations for the remaining customers such that any requests targeting an unauthorized destination will be caught and blocked in their L7 firewall.</p><p>The architectural improvement of Cloudflare for SaaS not only closes the gap between certificate and routing validation but also ensures that only verified and authorized domains are routed to their respective origins—effectively eliminating this class of vulnerability.</p>
    <div>
      <h2>Next steps</h2>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>There is no action necessary for Cloudflare customers, with the exception of the remaining SSL for SaaS v1 customers, whom we are actively helping to migrate. While we move into the final phases of sunsetting v1, Cloudflare for SaaS is now the standard across our platform, and all current and future deployments will use this secure, validated model by default.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As always, thank you to the external researchers for responsibly disclosing this vulnerability. We encourage everyone in the Cloudflare community to submit any identified vulnerabilities to help us continually improve the security posture of our products and platform.</p><p>We also recognize that the trust you place in us is paramount to the success of your infrastructure on Cloudflare. We treat these vulnerabilities with the utmost concern and will continue to do everything in our power to mitigate impact. Although we are confident in our steps to mitigate impact, we recognize the concern that such incidents may cause. We deeply appreciate your continued trust in our platform and remain committed not only to prioritizing security in all we do, but also to acting swiftly and transparently whenever an issue does arise.</p>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Cloudflare for SaaS]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4W7e9grs33H6l2VfLX03C2</guid>
            <dc:creator>Mia Malden</dc:creator>
            <dc:creator>Albert Pedersen</dc:creator>
            <dc:creator>Trishna</dc:creator>
            <dc:creator>Ross Jacobs</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cloudflare Secrets Store (Beta): secure your secrets, simplify your workflow]]></title>
            <link>https://blog.cloudflare.com/secrets-store-beta/</link>
            <pubDate>Wed, 09 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Securely store, manage and deploy account level secrets to Cloudflare Workers through Cloudflare Secrets Store, available in beta – with role-based access control, audit logging and Wrangler support. ]]></description>
            <content:encoded><![CDATA[ <p>Every cloud platform needs a secure way to store API tokens, keys, and credentials — welcome, Cloudflare Secrets Store! Today, we are very excited to announce and launch Secrets Store in beta. We built <a href="https://developers.cloudflare.com/secrets-store/"><u>Cloudflare Secrets Store</u></a> to help our customers centralize management, improve security, and restrict access to sensitive values on the Cloudflare platform. </p><p>Wherever secrets exist at Cloudflare – from our <a href="https://developers.cloudflare.com/learning-paths/workers/devplat/intro-to-devplat/"><u>developer platform</u></a>, to <a href="https://developers.cloudflare.com/products/?product-group=AI"><u>AI products</u></a>, to <a href="https://blog.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a> –  we’ve built a centralized platform that allows you to manage them in one place. </p><p>We are excited to integrate Cloudflare Secrets Store with the whole portfolio of Cloudflare products, starting today with Cloudflare Workers. </p>
    <div>
      <h2>Securing your secrets across Workers</h2>
      <a href="#securing-your-secrets-across-workers">
        
      </a>
    </div>
    <p>If you have a secret you want to use across multiple Workers, you can now use the Cloudflare Secrets Store to do so. You can spin up your store from the dashboard or by using Wrangler CLI:</p>
            <pre><code>wrangler secrets-store store create &lt;name&gt;
</code></pre>
            <p>Then, create a secret:</p>
            <pre><code>wrangler secrets-store secret create &lt;store-id&gt;
</code></pre>
            <p>Once the secret is created, you can specify the binding to deploy in a Worker immediately. </p>
            <pre><code>secrets_store_secrets = [
{ binding = "open_AI_key", store_id = "abc123", secret_name = "open_AI_key" },
]
</code></pre>
            <p>Last step – you can now reference the secret in code!</p>
            <pre><code>const openAIkey = await env.open_AI_key.get();
</code></pre>
            <p><a href="https://blog.cloudflare.com/workers-secrets-environment/"><u>Environment variables and secrets</u></a> were first launched in Cloudflare Workers back in 2020. Now, there are millions of local secrets deployed on Workers scripts. However, these are not all <i>unique</i>. Many of these secrets have duplicate values within a customer’s account. For example, a customer may reuse the same API token in ten different scripts, but since each secret is accessible only on the per-Worker level, that value would be stored in ten different local secrets. Plus, if you need to roll that secret, there is no seamless way to do so that preserves a single source of truth.</p><p>With thousands of secrets duplicated across scripts — each requiring manual creation and updates  — scoping secrets to individual Workers has created significant friction for developers. Additionally, because Workers secrets are created and deployed locally, any secret is accessible – in terms of creation, editing, and deletion – to anyone who has access to that script. </p><p>Now, you can create account-level secrets and variables that can be shared across all Workers scripts, centrally managed and protected within the Secrets Store. </p>
    <div>
      <h2>Building a secure secrets manager</h2>
      <a href="#building-a-secure-secrets-manager">
        
      </a>
    </div>
    <p>The most important feature of a secrets manager, of course, is to make sure that your secrets are stored securely. </p><p>Once the secret is created, its value will not be readable by anyone, be it developers, admins, or Cloudflare employees. Only the permitted service will be able to use the value at runtime. </p><p>This is why the first thing that happens when you deploy a new secret to Cloudflare is encryption: the secret is encrypted before it is stored in our database. We make sure your tokens are safe and protected using a two-level key hierarchy, where the root key never leaves a secure system. This is done by making use of DEKs (Data Encryption Keys) to encrypt your secrets and a separate KEK (Key Encryption Key) to encrypt the DEKs themselves. The data encryption keys are refreshed frequently, keeping both the likelihood and the impact of a single DEK exposure very small. In the future, we will introduce periodic key rotations for our KEK and also provide a way for customers to have their own account-specific DEKs.</p><p>After the secrets are encrypted, there are two permissions checks when deploying a secret from the Secrets Store to a Worker. First, the user must have sufficient permissions to create the binding. Second, when the Worker makes a <code>fetch</code> call to retrieve the secret value, we verify that the Worker has an appropriate binding to access that secret. </p><p>The secrets are automatically propagated across our network using <a href="https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale/"><u>Quicksilver</u></a> – so that every secret is on every server – to ensure they’re immediately accessible and ready for the Worker to use. Wherever your Worker is deployed, your secrets will be, too. </p><p>If you’d like to use a secret to secure your AI model keys before passing them on to AI Gateway: </p>
            <pre><code>export default {
 async fetch(request, env, ctx) {
   const prompt = "Write me a pun about Cloudflare";
   const openAIkey = await env.open_AI_key.get();

   const response = await fetch("https://gateway.ai.cloudflare.com/v1/YOUR_ACCOUNT_TAG/openai/chat/completions", {
     method: "POST",
     headers: {
       "Content-Type": "application/json",
       "Authorization": `Bearer ${openAIkey}`,
     },
     body: JSON.stringify({
       model: "gpt-3.5-turbo",
       messages: [
         { role: "user", content: prompt }
       ],
       temperature: 0.8,
       max_tokens: 100,
     }),
   });

   const data = await response.json();
   const answer = data.choices?.[0]?.message?.content || "No pun found 😢";

   return new Response(answer, {
     headers: { "Content-Type": "text/plain" },
   });
 }
};
</code></pre>
            
    <div>
      <h2>Cloudflare Secrets Store, with built-in RBAC</h2>
      <a href="#cloudflare-secrets-store-with-built-in-rbac">
        
      </a>
    </div>
    <p>Now, a secret’s value can be updated once and applied everywhere — but not by everyone. Cloudflare Secrets Store uses <a href="https://www.cloudflare.com/learning/access-management/role-based-access-control-rbac/"><u>role-based access control (RBAC)</u></a> to ensure that only those with permission can view, create, edit, or delete secrets. Additionally, any changes to the Secrets Store are recorded in the <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/"><u>audit logs</u></a>, allowing you to track changes. </p><p>Whereas per-Worker secrets are tied to the Workers account role, meaning that anyone who can modify the Worker can modify the secret, access to account-level secrets is restricted with more granular controls. This allows for differentiation between security admins who manage secrets and developers who use them in the code.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span>Secrets Store Admin</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Secrets Store Reporter</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Secrets Store Deployer</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Create secrets</span></span></p>
                    </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                    <td> </td>
                    <td> </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Update secrets</span></span></p>
                    </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                    <td> </td>
                    <td> </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Delete secrets</span></span></p>
                    </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                    <td> </td>
                    <td> </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>View secrets metadata</span></span></p>
                    </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Deploy secrets (i.e. bind to a Worker)</span></span></p>
                    </td>
                    <td>
                        <p> </p>
                    </td>
                    <td> </td>
                    <td>
                        <p><span><span>✓</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>Each secret can also be scoped to a particular Cloudflare product to ensure the value is only used where it is meant to be. Today, the secrets are restricted to Workers by default, but once the Secrets Store supports multiple products, you’ll be able to specify where the secret can be used (e.g. “I only want this secret to be accessible through Firewall Rules”). </p>
    <div>
      <h2>What’s next for Secrets Store</h2>
      <a href="#whats-next-for-secrets-store">
        
      </a>
    </div>
    <p>Secrets Store will support all secrets across Cloudflare, including:</p><ul><li><p>Cloudflare Access has <a href="https://developers.cloudflare.com/cloudflare-one/identity/service-tokens/"><u>service tokens</u></a> to authenticate against your Zero Trust policies.</p></li><li><p><a href="https://developers.cloudflare.com/rules/transform/"><u>Transform Rules</u></a> require sensitive values in the request headers to grant access or pass on to something else.</p></li><li><p><a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> relies upon secret keys from each provider to position Cloudflare between the end user and the AI model. </p></li></ul><p>…and more! </p><p>Right now, to use a secret within a Worker, you have to create a binding for that specific secret. In the future, we’ll allow you to create a binding to the store itself so that the Worker can access any secret within that store. We’ll also allow customers to create multiple secret stores within their account so that they can manage secrets by group when creating access policies. </p><p>Every Cloudflare account can create up to twenty secrets for free. We’re currently finalizing our pricing and will publish more details for each tier soon.</p><p>We’re thrilled to get Secrets Store into our customers’ hands and are excited to continue building it out to support more products and features as we work towards making Secrets Store GA.</p>
    <div>
      <h2>Try it out today! </h2>
      <a href="#try-it-out-today">
        
      </a>
    </div>
    <p>Cloudflare Secrets Store with the Workers integration is <a href="http://dash.cloudflare.com/?to=/:account/secrets-store"><u>available for all customers via UI</u></a> and API today. For instructions to get started in the Cloudflare dashboard, take a look at our <a href="https://developers.cloudflare.com/secrets-store/"><u>developer documentation</u></a>. </p><p>If you have any feedback or feature requests, we’d love for you to share those with us on this <a href="https://docs.google.com/forms/d/e/1FAIpQLSejhdh-0x2C0OHdVz9xabGYww3PWtOOZ1MwNLARZIt3s5ioYg/viewform?usp=header"><u>Google form</u></a>. </p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Secrets Store]]></category>
            <guid isPermaLink="false">3ctRz9zcwJFS3GuxmXchlS</guid>
            <dc:creator>Mia Malden</dc:creator>
            <dc:creator>Mitali Rawat</dc:creator>
            <dc:creator>James Vaughan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Simplify allowlist management and lock down origin access with Cloudflare Aegis]]></title>
            <link>https://blog.cloudflare.com/aegis-deep-dive/</link>
            <pubDate>Thu, 20 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Aegis provides dedicated egress IPs for Zero Trust origin access strategies, now supporting BYOIP and customer-facing configurability, with observability of Aegis IP utilization soon. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re taking a deep dive into <a href="https://developers.cloudflare.com/aegis/"><u>Aegis</u></a>, Cloudflare’s origin protection product, to help you understand what the product is, how it works, and how to take full advantage of it for locking down access to your origin. We’re excited to announce the availability of <a href="https://developers.cloudflare.com/byoip/"><u>Bring Your Own IPs (BYOIP)</u></a> for Aegis, a customer-accessible Aegis API, and a gradual rollout for observability of Aegis IP utilization.</p><p>If you are new to Cloudflare Aegis, let’s take a step back and understand the product’s purpose and security benefits, process, and how it came to be. </p>
    <div>
      <h3>Origin protection then…</h3>
      <a href="#origin-protection-then">
        
      </a>
    </div>
    <p>Allowlisting a specific set of IP addresses has long existed as one of the simplest ways of restricting access to a server. This firewall mechanism is a baseline that just about every server supports. As we built Cloudflare’s network, one of the first features that customers requested was the ability to restrict access to their origin, so only Cloudflare could make requests to it. Back then, the most natural way to support this was to tell our customers which IP addresses belong to us, so they could allowlist those in their origin firewall. To that end, we have published our <a href="https://www.cloudflare.com/ips/"><u>IP address ranges</u></a>, providing an easy configuration to ensure that all traffic accessing your origin comes from Cloudflare’s network.</p>
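<p>At its core, an IP allowlist is just a CIDR membership check on the client address. A minimal JavaScript sketch, using a hypothetical subset of ranges rather than the full published list:</p>

```javascript
// Origin-side allowlist check: accept a connection only if the client IP
// falls inside an allowed IPv4 CIDR range. The ranges below are a small
// illustrative subset, not Cloudflare's complete published list.
const allowedCidrs = ["173.245.48.0/20", "103.21.244.0/22"];

function ipToInt(ip) {
  // Pack four dotted-quad octets into an unsigned 32-bit integer.
  return ip.split(".").reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}

function inCidr(ip, cidr) {
  const [base, bitsStr] = cidr.split("/");
  const bits = Number(bitsStr);
  // Build the network mask for the prefix length (handle /0 separately).
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

function isAllowed(clientIp) {
  return allowedCidrs.some((cidr) => inCidr(clientIp, cidr));
}

console.log(isAllowed("173.245.48.10")); // true, inside 173.245.48.0/20
console.log(isAllowed("203.0.113.9"));   // false, not in any allowed range
```

Real firewalls perform this check at layer 3/4, but the logic is the same, which is also why keeping the allowlist short and specific (as Aegis enables) matters: every extra range widens who can reach the origin.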
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4I2fP03AeszxuHL78Ap9lA/bf75dcb9259b8b97b73f55831b3c019f/BLOG-2609_2.png" />
          </figure><p>However, Cloudflare’s IP ranges are used across multiple Cloudflare services and customers, so restricting access to the full list doesn’t necessarily give customers the security benefit they need. With the <a href="https://www.cloudflare.com/2024-api-security-management-report/#:~:text=Cloudflare%20serves%20over%2050%20million,billion%20cyber%20threats%20each%20day."><u>frequency</u></a> and <a href="https://blog.cloudflare.com/how-cloudflare-auto-mitigated-world-record-3-8-tbps-ddos-attack/"><u>scale</u></a> of IP-based and DDoS attacks every day, origin protection is absolutely paramount. Some customers need more stringent security precautions to ensure that traffic is only coming from Cloudflare’s network and, more specifically, only coming from their zones within Cloudflare.</p>
    <div>
      <h3>Origin protection now…</h3>
      <a href="#origin-protection-now">
        
      </a>
    </div>
    <p>Cloudflare has built out additional services to lock down access to your origin, like <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>authenticated origin pulls</u></a> (mTLS) and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnels</u></a>, which no longer rely on IP addresses as an indicator of identity. These are part of a global effort towards <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/#:~:text=Zero%20Trust%20security%20means%20that,shown%20to%20prevent%20data%20breaches."><u>Zero Trust security</u></a>: whereas the Internet used to operate under a trust-but-verify model, we aim to operate as if nothing is trusted and everything is verified. </p><p>Having non-ephemeral IP addresses — upon which the firewall allowlist mechanism relies — does not quite fit the Zero Trust system. Although mTLS and similar solutions present a more modern approach to origin security, they aren’t always feasible for customers, depending on their hardware or system architecture. </p><p>We launched <a href="https://blog.cloudflare.com/cloudflare-aegis/"><u>Cloudflare Aegis</u></a> in March 2023 for customers seeking an intermediary security solution. Aegis provides a dedicated IP address, or set of addresses, from which Cloudflare sends requests, allowing you to further lock down your origin’s layer 3 firewall. Aegis also simplifies management by only requiring you to allowlist a small number of IP addresses. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3KmeBCAqzygLJWR7oMf7Mw/5303403c266e80d271160b9c32cbd764/BLOG-2609_3.png" />
          </figure><p>Normally, Cloudflare’s <a href="https://www.cloudflare.com/ips/"><u>publicly listed IP ranges</u></a> are used to egress from Cloudflare’s network to the customer origin. With these IP addresses distributed across Cloudflare’s network, the customer traffic can egress from many servers to the customer origin.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3RbmfIwMajVSjnHdEBN1Y/f99b2f41cb289df93f90c1960bf6a497/BLOG-2609_4.png" />
          </figure><p>With Aegis, a customer does not necessarily have an Aegis IP address on every server if they are using IPv4. That means requests must be routed through Cloudflare’s network to a server where Aegis IP addresses are present before the traffic can egress to the customer origin. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TdNsZJmckJZuVCmAyCGbA/d7ed52f94c8a6fc375363bf011e40229/BLOG-2609_5.png" />
          </figure>
    <div>
      <h3>How requests are routed with Aegis</h3>
      <a href="#how-requests-are-routed-with-aegis">
        
      </a>
    </div>
    <p>A few terms, before we begin:</p><ul><li><p>Anycast: a technology where each of our data centers “announces” and can handle the same IP address ranges</p></li><li><p>Unicast: a technology where each server is given its own, unique <i>unicast</i> IP address</p></li></ul><p>Dedicated egress Aegis IPs are located in a specific set of data centers. This list is handpicked by the customer, in conversation with Cloudflare, to be geographically close to their origin servers for optimal security and performance. </p><p>Aegis relies on a technology called <a href="https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-anymore/#soft-unicast-is-indistinguishable-from-magic"><u>soft-unicasting</u></a>, which allows us to share a /32 egress IPv4 address amongst many servers, thereby enabling us to spread a single subnet across many data centers. Then, the traffic going back from the origin servers (the return path) is routed to the nearest Cloudflare data center. Once in Cloudflare's network, our in-house <a href="https://blog.cloudflare.com/unimog-cloudflares-edge-load-balancer/"><u>L4 XDP-based load balancer, Unimog,</u></a> ensures that the return packets make it back to the machine that connected to the origin servers at the start.</p><p>This supports fast, local, and reliable egress from Cloudflare’s network. With this configuration, we essentially use <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>Anycast</u></a> at the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>BGP layer</u></a> before using an IP and port range to reach a specific machine in the correct data center. Across Cloudflare’s network, we use a significant range of egress IPs to cover all data centers and machines. Since Aegis customers only have a few IPv4 addresses, the range is limited to a few data centers rather than Cloudflare’s entire egress IP range.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Y2sZmlh9uy6tRAiJhCZOp/ae186c36f30ba80500b615451952dfd7/BLOG-2609_6.png" />
          </figure>
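<p>The return-path routing described above can be sketched in Python. This is a minimal, hypothetical illustration of the idea that a shared egress address’s port range acts as a routing key back to the machine that opened the connection; the slice boundaries, data center codes, and server names are invented for the example, not Cloudflare’s real allocation.</p>

```python
# Hypothetical sketch of soft-unicast return-path routing: one /32
# egress IPv4 address is shared by slicing its port range across
# servers, so the destination port of a packet returning from the
# origin identifies the machine that opened the origin connection.

PORT_SLICES = [
    # (first_port, last_port, data_center, server) -- illustrative values
    (1024, 17407, "ams01", "server-7"),
    (17408, 33791, "fra03", "server-2"),
    (33792, 50175, "lhr11", "server-9"),
    (50176, 65535, "cdg02", "server-4"),
]

def route_return_packet(dst_port: int) -> tuple:
    """Map the destination port of a return packet to the (data center,
    server) that owns that slice of the shared egress IP's port range."""
    for first, last, dc, server in PORT_SLICES:
        if first <= dst_port <= last:
            return (dc, server)
    raise ValueError(f"port {dst_port} is outside every egress slice")
```

<p>A lookup like <code>route_return_packet(20000)</code> would land on the second slice in this toy table; in production, Unimog performs the equivalent mapping at the load-balancing layer.</p>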
    <div>
      <h3>The capacity issue</h3>
      <a href="#the-capacity-issue">
        
      </a>
    </div>
    <p>Every IP address has <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-computer-port/#:~:text=There%20are%2065%2C535%20possible%20port,File%20Transfer%20Protocol%20(FTP)."><u>65,535 ports</u></a>. A request egresses from exactly one port on the Aegis IP address to exactly one port on the origin IP address. </p><p>Each TCP connection is identified by a 4-tuple that contains:</p><ol><li><p>Source IP address</p></li><li><p>Source port</p></li><li><p>Destination IP address</p></li><li><p>Destination port</p></li></ol><p>A <a href="https://blog.cloudflare.com/everything-you-ever-wanted-to-know-about-udp-sockets-but-were-afraid-to-ask-part-1/"><u>UDP request</u></a> can likewise be identified by a 4-tuple (if the socket is connected) or a 2-tuple (if it is unconnected), consisting simply of a bind IP address and port. Aegis supports both TCP and UDP traffic — in either case, the requests rely upon IP:port pairings between the source and destination. </p><p>When a request reaches the origin, it opens a <i>connection</i>, through which data can pass between the source and destination. One source port can sustain multiple connections at a time, <i>only</i> if the destination IP:ports are different. </p><p>Normally at Cloudflare, an IP address establishes connections to a variety of different destination IP addresses and ports to support high traffic volumes. With Aegis, that is no longer the case. The challenge with Aegis IP capacity is exactly that: all the traffic is egressing to the same (or a small set of) origin IP address(es) from the same (or a small set of) source IP address(es). That means Aegis IP addresses have capacity constraints associated with them.</p><p>The number of <i>concurrent connections</i> is the number of simultaneous connections between a given source and destination. 
Between one client and one server, the volume of concurrent connections is inherently limited by the number of ports on an IP address to 65,535 — each source IP:port can only support a single outbound connection per destination IP:port. In practice, that maximum number of concurrent connections is often lower due to assignments of port ranges across many servers and imperfect load distribution. </p><p>For planning purposes, we use an estimate of ~80% of the IP capacity (the volume of concurrent connections a source IP address can support to a destination IP address) to protect against overload, in case of traffic spikes. If every port on an IP address is maintaining a concurrent connection, that address has reached capacity — its ports are exhausted. Requests may then be dropped since no new connections can be established. To build in resiliency, we only plan to support 40,000 concurrent connections per Aegis IP address per origin.</p>
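<p>As a back-of-the-envelope check, the planning math above can be sketched in Python. This uses only the numbers quoted in this post (65,535 ports per IP toward one destination, a planning target of 40,000 concurrent connections per Aegis IP per origin); the helper is illustrative, not a Cloudflare tool.</p>

```python
import math

# Illustrative Aegis IPv4 capacity planning, using the figures from the
# post: each source IP has at most 65,535 ports toward one destination
# IP:port, and planning assumes 40,000 concurrent connections per Aegis
# IP per origin to leave headroom for traffic spikes.

MAX_PORTS = 65_535
PLANNED_CONNS_PER_IP = 40_000  # conservative planning target

def aegis_ipv4_needed(peak_concurrent_connections: int) -> int:
    """Rough count of Aegis IPv4 addresses needed for one origin."""
    return max(1, math.ceil(peak_concurrent_connections / PLANNED_CONNS_PER_IP))
```

<p>For example, an origin expecting 100,000 concurrent connections would be planned at three Aegis IPv4 addresses under this target.</p>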
    <div>
      <h3>Aegis with IPv6</h3>
      <a href="#aegis-with-ipv6">
        
      </a>
    </div>
    <p>Each customer who onboards with Cloudflare Aegis receives two <a href="https://www.ripe.net/about-us/press-centre/understanding-ip-addressing/#:~:text=of%20IPv6%20addresses.-,IPv6%20Relative%20Network%20Sizes,-/128"><u>/64 prefixes</u></a> to be globally allocated and announced. That means, outside of Cloudflare’s China Network, every Cloudflare data center has hundreds or even thousands of addresses reserved for egressing your traffic directly to your origin. Without Aegis, any data center in Cloudflare’s Anycast network can serve as a point of egress – so we built Aegis with IPv6 to preserve that level of resiliency and performance. The sheer scale of IPv6, with its available address space, allows us to cushion Aegis’ capacity to a point far beyond any reasonable concern. Globally allocating and announcing your Aegis IPv6 addresses maintains all of Cloudflare’s functionality as a reverse proxy without inducing additional friction.</p>
    <div>
      <h3>Aegis with IPv4</h3>
      <a href="#aegis-with-ipv4">
        
      </a>
    </div>
    <p>Although using IPv6 with Aegis facilitates the best possible speed and resiliency for your traffic, we recognize the transition from IPv4 to IPv6 can be challenging for some customers. Moreover, some customers prefer Aegis IPv4 for granular control over their traffic’s physical egress locations. Still, IPv4 space is more limited and more expensive — while all Cloudflare Aegis customers simply receive two dedicated /64s for IPv6, enabling Aegis with IPv4 requires a touch more tailoring. When you onboard to Aegis, we work with you to determine the ideal number of IPv4 addresses for your Aegis configuration to maintain optimal performance and resiliency, while also ensuring cost efficiency. </p><p>Naturally, this introduces a bottleneck — whereas every Cloudflare data center can serve as a point of egress with Aegis IPv6, only a small fraction will have that capability with Aegis IPv4. We aim to mitigate this impact by careful provisioning of the IPv4 addresses. </p><p>Now that BYOIP for Aegis is supported, you can also onboard an entire IPv4 <a href="https://www.ripe.net/about-us/press-centre/understanding-ip-addressing/#:~:text=in%20that%20%E2%80%9Cblock%E2%80%9D.-,IPv4,-The%20size%20of"><u>/24</u></a> prefix or IPv6 /64 for Aegis, allowing for a cost-effective configuration with a much higher volume of capacity.</p><p>When we launched Aegis, each IP address was allocated to one data center, requiring at least two IPv4 addresses for appropriate resiliency. To reduce the number of IP addresses necessary in your layer 3 firewall allowlist, and to manage the cost to the customer of leasing IPs, we expanded our Aegis functionality so that one address can be announced from up to four data centers. To do this, we essentially slice the available IP port range into four subsets and provision each at a unique data center. </p><p>A quick refresher: when a request travels through Cloudflare, it first hits our network via an <i>ingress data center</i>. 
The ingress data center is generally near the eyeball (the end user) sending the request. Then, the request is routed following BGP – or <a href="https://developers.cloudflare.com/argo-smart-routing/"><u>Argo Smart Routing</u></a>, when enabled – to an <i>exit, or egress, data center</i>. The exit data center will generally fall in close geographic proximity to the request’s destination, which is the customer origin. This mitigates latency induced by the final hop from Cloudflare’s network to your origin.</p><p>With Aegis, the possible exit data centers are limited to the data centers in which an Aegis IP address has been allocated. For IPv6, this is a non-issue, since every data center outside our China Network is covered. With IPv4, however, the exit data centers are limited to a much smaller number (4 × the number of Aegis IPs). Aegis IP addresses are allocated, then, to data centers in close geographic proximity to your origin(s). This maximizes the likelihood that whichever data center would ordinarily have been selected for egress is already announcing Aegis IP addresses. Theoretically, no extra hop is necessary from the optimal exit data center to an Aegis-enabled data center – they are one and the same. In practice, this cannot be guaranteed 100% of the time because optimal routes are ever-changing. We recommend IPv6 to ensure optimal performance because of this possibility of an extra hop with IPv4.</p><p>A brief comparison, to summarize:</p><table><tr><th><p>
</p></th><th><p><b>Aegis IPv4</b></p></th><th><p><b>Aegis IPv6</b></p></th></tr><tr><td><p>Physical points of egress</p></td><td><p>4 physical data center sites (1-2 cities near origin) per IP address</p></td><td><p>All 300+ Cloudflare <a href="https://www.cloudflare.com/network/"><u>locations</u></a> (excluding China network)</p></td></tr><tr><td><p>Capacity</p></td><td><p>One IPv4 address per 40,000 concurrent connections per origin</p></td><td><p>Two /64 prefixes for all Aegis customers (&gt;36 quintillion IP addresses)</p><p>~50,000x capacity of IPv4 config</p></td></tr><tr><td><p>Pricing model</p></td><td><p>Monthly fee based on IPv4 leases or BYOIP for Aegis prefix fees</p></td><td><p>Included with product purchase or BYOIP for Aegis prefix fees</p></td></tr></table><p>Now, with Aegis analytics coming soon, customers can monitor and manage their IP address usage by Cloudflare data centers in aggregate. Every Cloudflare data center will now run a service with the sole purpose of calculating and reporting Aegis usage for each origin IP:port at regular intervals. Written to an internal database, these reports will be aggregated and exposed to customers via Cloudflare’s <a href="https://developers.cloudflare.com/analytics/graphql-api/"><u>GraphQL Analytics API</u></a>. Several aggregation functions will be available, such as average usage over a period of time, or total summed usage.</p><p>This will allow customers to track their own IP address usage to further optimize the distribution of traffic and addresses across different points of presence for IPv4. Additionally, the improved observability will support customer-created notifications via RSS feeds such that you can design your own notification thresholds for port usage.</p>
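<p>The port-range slicing mentioned earlier in this section — one IPv4 address announced from up to four data centers — can be sketched as follows. The data center names and the even split are assumptions for illustration; the real provisioning logic is not public.</p>

```python
# Sketch of slicing one Aegis IPv4 address's usable port range into
# contiguous subsets, one per data center, so a single /32 can be
# announced from up to four sites. Boundaries here are illustrative.

def slice_port_range(data_centers, first_port=1024, last_port=65535):
    """Split [first_port, last_port] into len(data_centers) contiguous
    slices and assign one slice per data center."""
    total = last_port - first_port + 1
    n = len(data_centers)
    size = total // n
    slices = {}
    for i, dc in enumerate(data_centers):
        start = first_port + i * size
        # the last slice absorbs any remainder from integer division
        end = last_port if i == n - 1 else start + size - 1
        slices[dc] = (start, end)
    return slices
```

<p>With four hypothetical sites, each data center ends up owning roughly a quarter of the address’s ports, which is what caps the per-site connection capacity discussed above.</p>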
    <div>
      <h3>How Aegis benefits from connection reuse &amp; coalescence</h3>
      <a href="#how-aegis-benefits-from-connection-reuse-coalescence">
        
      </a>
    </div>
    <p>As we mentioned earlier, requests egress from the source IP address to the destination IP address only when a connection has been established between the two. In early Internet protocols, requests and connections were 1:1. Now, once that connection is open, it can remain open and support hundreds or thousands of requests between that source and destination via <i>connection reuse</i> and <i>connection coalescing</i>. </p><p>Connection reuse, implemented by <a href="https://datatracker.ietf.org/doc/html/rfc2616"><u>HTTP/1.1</u></a>, allows for requests with the same source IP:port and destination IP:port to pass through the same connection to the origin. A “simple” website by modern standards can send hundreds of requests just to load initially; by streamlining these into a single origin connection, connection reuse reduces the latency derived from constantly opening and closing new connections between two endpoints. Still, any request from a different domain would need to create a new, unique connection to communicate with the origin. </p><p>As of <a href="https://datatracker.ietf.org/doc/html/rfc7540"><u>HTTP/2</u></a>, connection coalescing can group requests from different domains into one connection if the requests have the same destination IP address and the server certificate is authoritative for both domains. Depending on the traffic patterns routing from the eyeball to an Aegis IP address, the volume of connection reuse &amp; coalescence can vary. One connection most likely facilitates the traffic of many requests, but each connection requires at least one request to open it in the first place. Therefore, the worst possible ratio between concurrent connections and concurrent requests is 1:1. </p><p>In practice, a 1:1 ratio between connections and requests almost <i>never</i> happens. 
Connection reuse and connection coalescence are very common but highly variable, due to sporadic traffic patterns. We size our Aegis IP address allocations accordingly, erring on the conservative side to minimize risk of capacity overload. With the proper number of dedicated egress IP addresses and optimal allocation to Cloudflare points of presence, we are able to lock down your origin with IPv4 addresses to block malicious layer 7 traffic and reduce overall load to your origin. </p><p>Connection reuse and coalescence pair well with Aegis to reduce load on the origin’s side as well. Because a connection can be reused by requests sharing the same source IP:port and destination IP:port, routing traffic from a reduced number of source IP addresses (Aegis addresses, in this case) to your origin results in a smaller number of total connections. Not only does this improve security by limiting open connection access, but it also reduces latency since fewer connections need to be opened. Maintaining fewer connections is also less resource intensive — more connections mean more CPU and memory spent handling the inbound requests. By reducing the number of connections to the origin through reuse and coalescence, HTTP/2 lowers the overall cost of operation by optimizing resource usage. </p>
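<p>The coalescing rule described above — same destination IP, and a certificate authoritative for the new domain — can be captured in a small sketch. This is a simplified model for illustration, not how any particular HTTP/2 client implements it.</p>

```python
from dataclasses import dataclass

# Simplified model of HTTP/2 connection coalescing: a request for a new
# domain may reuse an existing origin connection only when the domain
# resolves to the connection's destination IP and the server certificate
# already covers that domain.

@dataclass
class OriginConnection:
    dest_ip: str
    cert_domains: frozenset  # domains the certificate is authoritative for

def can_coalesce(conn: OriginConnection, domain: str, resolved_ip: str) -> bool:
    """True if a request for `domain` can ride this existing connection."""
    return resolved_ip == conn.dest_ip and domain in conn.cert_domains
```

<p>When either check fails, the client must open a fresh connection, which is exactly the case Aegis capacity planning has to account for.</p>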
    <div>
      <h3>Recap and recommendations</h3>
      <a href="#recap-and-recommendations">
        
      </a>
    </div>
    <p>Cloudflare Aegis locks down your origin by restricting access via your origin’s layer 3 firewall. By routing traffic from Cloudflare’s network to your origin through dedicated egress IP addresses, you can ensure that requests coming from Cloudflare are legitimate customer traffic. With a simple flip-of-a-switch configuration — allowlisting your Aegis IP addresses in your origin’s firewall — you can block excessive noise and bad actors from gaining access. So, to help you take full advantage of Aegis, let’s recap:</p><ul><li><p>Concurrent connections can be, at worst, a 1:1 ratio to concurrent requests.</p></li><li><p>Cloudflare bases our IP address usage recommendations on 40,000 concurrent connections to minimize risk of capacity overload.</p></li><li><p>Each Aegis IP address supports an estimated 40,000 concurrent connections per origin IP address.</p></li></ul><p>Additionally, we’re excited to now support:</p><ul><li><p><a href="https://developers.cloudflare.com/api/resources/zones/subresources/settings/methods/edit/"><u>Public Aegis API</u></a> </p></li><li><p><a href="https://developers.cloudflare.com/aegis/setup/"><u>BYOIP for Aegis</u></a> </p></li><li><p>Customer-facing Aegis observability (coming soon via gradual rollout)</p></li></ul><p>For customers leasing Cloudflare-owned Aegis IP addresses, the Aegis API will allow you to enable and disable Aegis on zones within your parent account (parent being the account which owns the IP lease). 
If you deploy your Aegis IP addresses across multiple accounts, you’ll still rely on Cloudflare’s account team to enable and disable Aegis on zones within those additional accounts.</p><p>For customers who leverage BYOIP for Aegis, the Aegis API will allow you to enable and disable Aegis on zones within your parent account <i>and</i> within any accounts to which you <a href="https://developers.cloudflare.com/byoip/concepts/prefix-delegations/#:~:text=BYOIP%20supports%20prefix%20delegations%2C%20which,service%20used%20with%20the%20prefix."><u>delegate prefix permissions</u></a>. We recommend BYOIP for Aegis for improved configurability and cost efficiency. </p><table><tr><th><p>
</p></th><th><p><b>BYOIP</b></p></th><th><p><b>Cloudflare-owned IPs</b></p></th></tr><tr><td><p>Enable Aegis on zones on parent account</p></td><td><p>✓</p></td><td><p>✓</p></td></tr><tr><td><p>Enable Aegis on zones beyond parent account</p></td><td><p>✓</p></td><td><p>✗</p></td></tr><tr><td><p>Disable Aegis on zones on parent account</p></td><td><p>✓</p></td><td><p>✓</p></td></tr><tr><td><p>Disable Aegis on zones beyond parent account</p></td><td><p>✓</p></td><td><p>✗</p></td></tr><tr><td><p>Access Aegis analytics via the API</p></td><td><p>✓</p></td><td><p>✓</p></td></tr></table><p>With the improved Aegis observability, all Aegis customers will be able to monitor their port usage by IP address, account, zone, and data centers in aggregate via the API. You will also be able to ingest these metrics to configure your own, customizable alerts based on certain port usage thresholds. Alongside the new configurability of Aegis, this visibility will better equip customers to manage their Aegis deployments themselves and alert <i>us</i> to any changes, rather than the other way around.</p><p>We also have a few adjacent recommendations to optimize your Aegis configuration. 
We generally encourage the following best practices for security hygiene for your origin and traffic as well.</p><ol><li><p><b>IPv6 compatibility</b>: if your origin(s) support IPv6, you will experience even better resiliency, performance, and availability with your dedicated egress IP addresses at a lower overall cost.</p></li><li><p><a href="https://www.cloudflare.com/learning/performance/http2-vs-http1.1/"><b><u>HTTP/2</u></b></a><b> or </b><a href="https://www.cloudflare.com/learning/performance/what-is-http3/"><b><u>HTTP/3</u></b></a><b> adoption</b>: by supporting connection reuse and coalescence, you will reduce overall load to your origin and latency in the path of your request.</p></li><li><p><b>Multi-level origin protection</b>: while Aegis protects your origin at the network level, it pairs well with <a href="https://blog.cloudflare.com/access-aegis-cni/"><u>Access and CNI</u></a>, <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>Authenticated Origin Pulls</u></a>, and/or other Cloudflare products to holistically protect, verify, and facilitate your traffic from edge to origin.</p></li></ol><p>If you or your organization want to enhance security and lock down your origin with dedicated egress IP addresses, reach out to your account team to onboard today. </p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Aegis]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <guid isPermaLink="false">LPhv5n2cp5pkZBwAC8hN0</guid>
            <dc:creator>Mia Malden</dc:creator>
            <dc:creator>Adrien Vasseur</dc:creator>
        </item>
        <item>
            <title><![CDATA[Enhanced security and simplified controls with automated botnet protection, cipher suite selection, and URL Scanner updates]]></title>
            <link>https://blog.cloudflare.com/enhanced-security-and-simplified-controls-with-automated-botnet-protection/</link>
            <pubDate>Mon, 17 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Enhanced security, simplified control! This Security Week, Cloudflare unveils automated botnet protection, flexible cipher suites, and an upgraded URL Scanner. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, we are constantly innovating and launching new features and capabilities across our product portfolio. Today, we're releasing a number of new features aimed at improving the security tools available to our customers.</p><p><b>Automated security level: </b>Cloudflare’s Security Level setting has been improved and no longer requires manual configuration. By integrating botnet data along with other request rate signals, all customers are protected from confirmed known malicious botnet traffic without any action required.</p><p><b>Cipher suite selection:</b> You now have greater control over encryption settings via the Cloudflare dashboard, including specific cipher suite selection based on your client or compliance requirements.</p><p><b>Improved URL scanner:</b> New features include bulk scanning, similarity search, location picker, and more.</p><p>These updates are designed to give you more power and flexibility when managing online security, from proactive threat detection to granular control over encryption settings.</p>
    <div>
      <h3>Automating Security Level to provide stronger protection for all</h3>
      <a href="#automating-security-level-to-provide-stronger-protection-for-all">
        
      </a>
    </div>
    <p>Cloudflare’s <a href="https://developers.cloudflare.com/waf/tools/security-level/"><u>Security Level feature</u></a> was designed to protect customer websites from malicious activity.</p><p>Available to all Cloudflare customers, including the free tier, it has always had very simple logic: if a connecting client IP address has shown malicious behavior across our network, issue a <a href="https://developers.cloudflare.com/waf/reference/cloudflare-challenges/"><u>managed challenge</u></a>. The system tracks malicious behavior by assigning a threat score to each IP address. The more bad behavior is observed, the higher the score. Cloudflare customers could configure <a href="https://developers.cloudflare.com/waf/tools/security-level/"><u>the threshold that would trigger the challenge</u></a>.</p><p>We are now announcing an update to how Security Level works, by combining the IP address threat signal with threshold and botnet data. The resulting detection improvements have allowed us to automate the configuration, no longer requiring customers to set a threshold.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RFWQl2Da9xu9MdfbJCRhy/8750770351d124ecf8d2f2b274f2e3cc/image1.png" />
          </figure><p>The Security Level setting is now <b>Always protected</b> in the dashboard, and <code>ip_threat_score</code> fields in WAF Custom Rules will no longer be populated. No change is required by Cloudflare customers. The <a href="https://developers.cloudflare.com/fundamentals/reference/under-attack-mode/"><u>“I am under attack”</u></a> option remains unchanged.</p>
    <div>
      <h3>Stronger protection, by default, for all customers</h3>
      <a href="#stronger-protection-by-default-for-all-customers">
        
      </a>
    </div>
    <p>Although we always favor simplicity, the landscape has shifted: privacy-related services, including our own WARP, have seen growing use. Meanwhile, <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT"><u>carrier-grade network address translation (CGNATs)</u></a> and outbound forward proxies have been widely used for many years.</p><p>These services often result in multiple users sharing the same IP address, which can lead to legitimate users being challenged unfairly since individual addresses don’t strictly correlate with unique client behavior. Moreover, threat actors have become increasingly adept at anonymizing and dynamically changing their IP addresses using tools like VPNs, proxies, and botnets, further diminishing the reliability of IP addresses as a standalone indicator of malicious activity. Recognizing these limitations, it was time for us to revisit Security Level’s logic to reduce the number of false positives being observed.</p><p>In February 2024, we introduced a new security system that automatically combines the real-time DDoS score with a traffic threshold and a botnet tracking system. The real-time DDoS score is part of our autonomous DDoS detection system, which analyzes traffic patterns to identify potential threats. This system superseded the existing Security Level logic, and is deployed on all customer traffic, including free plans. After thorough monitoring and analysis over the past year, we have confirmed that these behavior-based mitigation systems provide more accurate results. Notably, we've observed a significant reduction in false positives, demonstrating the limitations of the previous IP address-only logic.</p>
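<p>To make the combination of signals concrete, here is a hypothetical sketch of such a decision. The thresholds, parameter names, and structure are invented for illustration; Cloudflare’s actual detection logic is not public.</p>

```python
# Hypothetical decision combining the three signals this post describes:
# membership in a botnet tracking database, a real-time behavioral
# (DDoS) score, and a traffic-volume threshold. All values illustrative.

def should_challenge(src_ip: str,
                     botnet_ips: set,
                     ddos_score: float,
                     requests_per_second: float,
                     score_threshold: float = 0.8,
                     rate_threshold: float = 1000.0) -> bool:
    if src_ip in botnet_ips:
        return True  # confirmed botnet traffic is always mitigated
    # Otherwise require both a high behavioral score and elevated volume;
    # this avoids challenging shared IPs (CGNAT, proxies) on IP alone.
    return ddos_score >= score_threshold and requests_per_second >= rate_threshold
```

<p>The key design point is that no single signal, least of all the source IP, is sufficient on its own to trigger a challenge.</p>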
    <div>
      <h4>Better botnet tracking</h4>
      <a href="#better-botnet-tracking">
        
      </a>
    </div>
    <p>Our new logic combines IP address signals with behavioral and threshold indicators to improve the accuracy of botnet detection. While IP addresses alone can be unreliable due to potential false positives, we enhance their utility by integrating them with additional signals. We monitor surges in traffic from known "bad" IP addresses and further refine this data by examining specific properties such as path, accept, and host headers.</p><p>We also introduced a new botnet tracking system that continuously detects and tracks botnet activity across the Cloudflare network. From our unique vantage point as a <a href="https://w3techs.com/technologies/overview/proxy"><u>reverse proxy for nearly 20% of all websites</u></a>, we maintain a dynamic database of IP addresses associated with botnet activity. This database is continuously updated, enabling us to automatically respond to emerging threats without manual intervention. This effect is visible in the <a href="https://radar.cloudflare.com/security-and-attacks?dateStart=2024-02-01&amp;dateEnd=2024-03-31#mitigated-traffic-sources"><u>Cloudflare Radar chart</u></a> below, as we saw sharp growth in DDoS mitigations in February 2024 as the botnet tracking system was implemented.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yOP8zoC5ZLVi4WHnXI0jH/ef3fd6ad10e8357b6b4f1bfb90e6d6b6/image4.png" />
          </figure>
    <div>
      <h4>What it means for our customers and their users</h4>
      <a href="#what-it-means-for-our-customers-and-their-users">
        
      </a>
    </div>
    <p>Customers now get better protection while having to manage fewer configurations, and they can rest assured that their online presence remains fully protected. These security measures are integrated and enabled by default across all of our plans, ensuring protection without the need for manual configuration or rule management.
This improvement is particularly beneficial for users accessing sites through proxy services or CGNATs, as these setups can sometimes trigger unnecessary security checks, potentially disrupting access to websites.</p>
    <div>
      <h4>What’s next</h4>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our team is looking at defining the next generation of threat scoring mechanisms. This initiative aims to provide our customers with more relevant and effective controls and tools to combat today's and tomorrow's potential security threats.</p><p>Effective March 17, 2025, we are removing the option to configure manual rules using the threat score parameter in the Cloudflare dashboard. The "I'm Under Attack" mode remains available, allowing users to issue managed challenges to all traffic when needed.</p><p>By the end of Q1 2026, we anticipate disabling all rules that rely on IP threat score. This means that using the threat score parameter in the Rulesets API and via Terraform won’t be available after the end of the transition period. However, we encourage customers to be proactive and edit or remove the rules containing the threat score parameter starting today.</p>
    <div>
      <h3>Cipher suite selection now available in the UI</h3>
      <a href="#cipher-suite-selection-now-available-in-the-ui">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2e5Q0ghzpkuTQrR335fzIa/156b9531735fd9164768970fd08f5f85/image5.png" />
          </figure><p>Building upon our core security features, we're also giving you more control over your encryption: cipher suite selection is now available in the Cloudflare dashboard!</p><p>When a client initiates a visit to a Cloudflare-protected website, a <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshake</u></a> occurs, where clients present a list of supported <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/cipher-suites/"><u>cipher suites</u></a> — cryptographic algorithms crucial for secure connections. While newer algorithms enhance security, balancing this with broad compatibility is key, as some customers prioritize reach by supporting older devices, even with less secure ciphers. To accommodate varied client needs, Cloudflare's default settings emphasize wide compatibility, while allowing customers to tailor cipher suite selection based on their priorities: strong security, compliance (PCI DSS, FIPS 140-2), or legacy device support.</p><p>Previously, customizing cipher suites required multiple API calls, which proved cumbersome for many users. Now, Cloudflare brings cipher suite selection to the dashboard, with user-friendly selection flows: security recommendations, compliance presets, and custom selections.</p>
    <div>
      <h4>Understanding cipher suites</h4>
      <a href="#understanding-cipher-suites">
        
      </a>
    </div>
    <p>Cipher suites are collections of cryptographic algorithms used for key exchange, authentication, encryption, and message integrity, essential for a TLS handshake. During the handshake’s initiation, the client sends a "client hello" message containing a list of supported cipher suites. The server responds with a "server hello" message, choosing a cipher suite from the client's list based on security and compatibility. This chosen cipher suite forms the basis of TLS termination and plays a crucial role in establishing a secure HTTPS connection. Here’s a quick overview of each component:</p><ul><li><p><b>Key exchange algorithm:</b> Secures the exchange of encryption keys between parties.</p></li><li><p><b>Authentication algorithm:</b> Verifies the identities of the communicating parties.</p></li><li><p><b>Encryption algorithm:</b> Ensures the confidentiality of the data.</p></li><li><p><b>Message integrity algorithm:</b> Confirms that the data remains unaltered during transmission.</p></li></ul><p><a href="https://www.geeksforgeeks.org/perfect-forward-secrecy/"><b><u>Perfect forward secrecy</u></b></a><b> </b>is an important feature of modern cipher suites. It ensures that each session's encryption keys are generated independently, which means that even if a server’s private key is compromised in the future, past communications remain secure.</p>
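<p>As an illustration (not Cloudflare's implementation), the four components can often be read directly off an OpenSSL-style TLS 1.2 cipher suite name. Note this is a rough decomposition: modern AEAD suites such as AES-GCM fold encryption and message integrity into a single algorithm.</p>

```typescript
// Hypothetical helper: label the parts of an OpenSSL-style cipher suite name,
// e.g. ECDHE-RSA-AES128-GCM-SHA256.
interface CipherSuiteParts {
  keyExchange: string;    // secures the exchange of encryption keys
  authentication: string; // verifies the server's identity
  encryption: string;     // keeps the data confidential
  integrity: string;      // confirms the data was not altered in transit
}

function describeCipherSuite(name: string): CipherSuiteParts {
  // "ECDHE-RSA-AES128-GCM-SHA256" → ["ECDHE", "RSA", "AES128", "GCM", "SHA256"]
  const parts = name.split("-");
  if (parts.length < 4) {
    throw new Error(`unrecognized cipher suite name: ${name}`);
  }
  const [keyExchange, authentication] = parts;
  return {
    keyExchange,
    authentication,
    encryption: parts.slice(2, -1).join("-"), // e.g. "AES128-GCM"
    integrity: parts[parts.length - 1],       // e.g. "SHA256"
  };
}
```

<p>For example, <code>describeCipherSuite("ECDHE-RSA-AES128-GCM-SHA256")</code> labels ECDHE as the key exchange (the part that provides forward secrecy), RSA as authentication, AES128-GCM as encryption, and SHA256 as the hash.</p>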
    <div>
      <h4>What we are offering </h4>
      <a href="#what-we-are-offering">
        
      </a>
    </div>
    <p>You can find cipher suite configuration under Edge Certificates in your zone’s SSL/TLS dashboard. There, you will be able to view your allow-listed set of cipher suites. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fT7BvPow3zvKTl1JYw7yX/8dcd8b797f671b05211defaaf4c4bb83/image5.png" />
          </figure><p>Additionally, you will be able to choose from three different user flows, depending on your specific use case, to seamlessly select your appropriate list. Those three user flows are: security recommendation selection, compliance selection, and custom selection. The goal of the user flows is to outfit customers with cipher suites that match their goals and priorities, whether those are maximum compatibility or best possible security.</p><p>1. Security recommendations </p><p>To streamline the process, we have turned our <a href="https://developers.cloudflare.com/ssl/reference/cipher-suites/recommendations/"><u>cipher suites recommendations</u></a> into selectable options. This makes cipher suite choices tangible and lets customers weigh security against compatibility. Here is what the options mean:</p><ul><li><p><b>Modern:</b> Provides the highest level of security and performance with support for Perfect Forward Secrecy and <a href="https://www.ietf.org/archive/id/draft-irtf-cfrg-aead-properties-03.html"><u>Authenticated Encryption (AEAD)</u></a>. Ideal for customers who prioritize top-notch security and performance, such as financial institutions, healthcare providers, or government agencies. This selection requires TLS 1.3 to be enabled and the minimum TLS version set to 1.2.</p></li><li><p><b>Compatible:</b> Balances security and compatibility by offering forward-secret cipher suites that are broadly compatible with older systems. Suitable for most customers who need a good balance between security and reach. This selection also requires TLS 1.3 to be enabled and the minimum TLS version set to 1.2.</p></li><li><p><b>Legacy:</b> Optimizes for the widest reach, supporting a wide range of legacy devices and systems. Best for customers who do not handle sensitive data and need to accommodate a variety of visitors. 
This option is ideal for blogs or organizations that rely on older systems.</p></li></ul><p>2. Compliance selection</p><p>We have also turned our <a href="https://developers.cloudflare.com/ssl/reference/cipher-suites/compliance-status/"><u>compliance recommendations</u></a> into selectable options to make it easier for our customers to meet their PCI DSS or FIPS 140-2 requirements.</p><ul><li><p><a href="https://www.pcisecuritystandards.org/standards/pci-dss/"><b><u>PCI DSS Compliance:</u></b></a> Ensures that your cipher suite selection aligns with PCI DSS standards for protecting cardholder data. To maintain compliance, this option enforces a minimum TLS version of 1.2 with TLS 1.3 enabled.</p><ul><li><p>Because the compliant set of cipher suites requires TLS 1.3 to be enabled and a minimum TLS version of 1.2, we disable compliance selection until the zone settings meet those requirements. This ensures that our customers are truly compliant and have the proper zone settings to be so.</p></li></ul></li><li><p><a href="https://csrc.nist.gov/pubs/fips/140-2/upd2/final"><b><u>FIPS 140-2 Compliance</u></b><u>:</u></a> Tailored for customers needing to meet federal security standards for cryptographic modules. Ensures that your encryption practices comply with FIPS 140-2 requirements.</p></li></ul><p>3. Custom selection </p><p>For customers needing precise control, the custom selection flow allows individual cipher suite selection, excluding TLS 1.3 suites, which are enabled automatically whenever TLS 1.3 is on. To prevent disruptions, guardrails validate that the minimum TLS version aligns with the selected cipher suites and that the <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL/TLS certificate</a> is compatible (e.g., RSA certificates require RSA cipher suites).</p>
    <div>
      <h3>API </h3>
      <a href="#api">
        
      </a>
    </div>
    <p>The <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/cipher-suites/"><u>API</u></a> will still be available to our customers, supporting existing workflows for those who already rely on it. Additionally, Cloudflare preserves the specified cipher suites in the order they are set via the API, and that control over ordering remains unique to our API offering.</p><p>With your Advanced Certificate Manager or Cloudflare for SaaS subscription, head to Edge Certificates in your zone’s SSL/TLS dashboard and give it a try today!</p>
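<p>For API users, updating the allow-listed ciphers is a single call. The sketch below assumes the documented <code>ciphers</code> zone setting endpoint; check the API reference linked above before relying on the exact path or payload.</p>

```typescript
// Build a PATCH request for the "ciphers" zone setting (path and payload are
// assumptions based on Cloudflare's cipher suite API docs).
function buildCipherUpdateRequest(
  zoneId: string,
  apiToken: string,
  suites: string[], // order matters: Cloudflare preserves the order set via the API
): Request {
  return new Request(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/settings/ciphers`,
    {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ value: suites }),
    },
  );
}

// Example: allow only two modern, forward-secret suites
// (placeholder zone ID and token).
const request = buildCipherUpdateRequest("YOUR_ZONE_ID", "YOUR_API_TOKEN", [
  "ECDHE-ECDSA-AES128-GCM-SHA256",
  "ECDHE-RSA-AES128-GCM-SHA256",
]);
```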
    <div>
      <h3>Smarter scanning, safer Internet with the new version of URL Scanner</h3>
      <a href="#smarter-scanning-safer-internet-with-the-new-version-of-url-scanner">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eFwJMzk3JuwYNKcSk4kiH/63e4a8713be583d83df737cf6f59281d/image10.png" />
          </figure><p>Cloudflare's URL Scanner is a tool designed to detect and analyze potential security threats like phishing and malware by scanning and evaluating websites, providing detailed insights into their safety and technology usage. We've leveraged our own <a href="https://developers.cloudflare.com/radar/investigate/url-scanner/"><u>URL Scanner</u></a> to enhance our internal <u>Trust &amp; Safety efforts</u>, automating the detection and mitigation of some forms of abuse on our platform. This has not only strengthened our own security posture, but has also directly influenced the development of the new features we're announcing today. </p><p>Phishing attacks are on the rise across the Internet, and we saw a major opportunity to be "customer zero" for our URL Scanner to address abuse on our own network. By working closely with our Trust &amp; Safety team to understand how the URL Scanner could better identify potential phishing attempts, we've improved the speed and accuracy of our response to abuse reports, making the Internet safer for everyone. Today, we're excited to share the new API version and the latest updates to URL Scanner, which include the ability to scan from specific geographic locations, bulk scanning, search by Indicators of Compromise (IOCs), improved UI and information display, comprehensive IOC listings, advanced sorting options, and more. These features are the result of our own experiences in leveraging URL Scanner to safeguard our platform and our customers, and we're confident that they will prove useful to our security analysts and threat intelligence users.</p>
    <div>
      <h4>Scan up to 100 URLs at once by using bulk submissions</h4>
      <a href="#scan-up-to-100-urls-at-once-by-using-bulk-submissions">
        
      </a>
    </div>
    <p>Using the <a href="https://developers.cloudflare.com/api/resources/url_scanner/subresources/scans/methods/bulk_create/"><u>Bulk Scanning API endpoint</u></a>, Cloudflare Enterprise customers can now conduct routine scans of their web assets to identify emerging vulnerabilities and address potential threats proactively. Another use case is verifying, before launching new websites or updates, that all the URLs your team accesses are secure and free from potential exploits.</p><p>Scanning multiple URLs addresses the specific needs of users engaged in threat hunting. Many of them maintain extensive lists of URLs that require swift investigation to identify potential threats. Until now, they had to submit these URLs one by one, which slowed down their workflow and increased the manual effort involved in their security processes. With bulk submission, users can submit up to 100 URLs at a time for scanning.</p>
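<p>A bulk submission is a single HTTP request. The sketch below builds that request; the path and body shape (an array of <code>{ url }</code> objects under the account-level URL Scanner v2 API) are assumptions based on the endpoint documentation linked above, so verify them there.</p>

```typescript
// Build a bulk scan submission for up to 100 URLs (hypothetical request shape;
// confirm against the Bulk Scanning API reference).
function buildBulkScanRequest(
  accountId: string,
  apiToken: string,
  urls: string[],
): Request {
  if (urls.length === 0 || urls.length > 100) {
    throw new Error("bulk submissions accept between 1 and 100 URLs");
  }
  return new Request(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/urlscanner/v2/bulk`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(urls.map((url) => ({ url }))),
    },
  );
}
```

<p>Because the scans run asynchronously, the response to this request only acknowledges the submission; the reports are fetched later via the API.</p>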
    <div>
      <h4>How we built the bulk scanning feature</h4>
      <a href="#how-we-built-the-bulk-scanning-feature">
        
      </a>
    </div>
    <p>Let’s look at a regular workflow:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6l8aN6xhN4HEfw4ZMi1MT8/5eb62472b42f75487c55b17b3415b584/image6.png" />
          </figure><p>In this workflow, when the user submits a new scan, we create a <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Object</u></a> with the same ID as the scan, save the scan options, like the URL to scan, to the <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>Durable Object’s storage</u></a>, and schedule an <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/#setalarm"><u>alarm</u></a> for a few seconds later. This allows us to respond immediately to the user, signaling a successful submission. When the alarm triggers a few seconds later, we start the scan itself.</p><p>However, with bulk scanning, the process is slightly different:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kXLJ5sSGBbM06H3Ftsrqi/a4440fd0efc7c0271580c6da6f08f814/image9.png" />
          </figure><p>In this case, there are no Durable Objects involved just yet; the system simply sends each URL in the bulk scan submission as a new message to the queue.</p><p>Notice that in both of these cases the scan is triggered asynchronously. In the first case, it starts when the Durable Object alarm fires and, in the second case, when messages in the queue are consumed. While the Durable Object alarm will always fire in a few seconds, messages in the queue have no predetermined processing time; they may be processed seconds to minutes later, depending on how many messages are already in the queue and how fast the system processes them.</p><p>When users bulk scan, having the scan done at <i>some </i>point in time is more important than having it done <i>now</i>. When using the regular scan workflow, users are limited in the number of scans per minute they can submit. With bulk scan this is not a concern, and users can simply send all the URLs they want to process in a single HTTP request. The tradeoff is that scans may take longer to complete, which makes this a perfect fit for <a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queues</u></a>. Having the ability to <a href="https://developers.cloudflare.com/queues/configuration/configure-queues/#consumer-worker-configuration"><u>configure</u></a> retries, max batch size, max batch timeouts, and max concurrency is something we’ve found very useful. As the scans complete asynchronously, users can request the resulting scan reports <a href="https://developers.cloudflare.com/api/resources/url_scanner/subresources/scans/methods/get/"><u>via the API</u></a>.</p>
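<p>The two submission paths described above can be sketched as follows. This is a simplified illustration, not the production code: the interfaces are structural stand-ins for the Workers runtime types, and names like <code>ScanObject</code> and <code>enqueueBulkScan</code> are hypothetical.</p>

```typescript
// --- Regular path: Durable Object + alarm ---
interface ScanStorage {
  put(key: string, value: unknown): Promise<void>;
  get(key: string): Promise<unknown>;
  setAlarm(scheduledTime: number): Promise<void>;
}

class ScanObject {
  private storage: ScanStorage;

  constructor(storage: ScanStorage) {
    this.storage = storage;
  }

  // On submission: persist the scan options and schedule an alarm, then return
  // immediately so the user gets a fast "submitted" response.
  async submit(options: { url: string }): Promise<void> {
    await this.storage.put("options", options);
    await this.storage.setAlarm(Date.now() + 5_000); // fires in a few seconds
  }

  // Alarm handler: the scan itself starts here, asynchronously.
  async alarm(): Promise<void> {
    const options = (await this.storage.get("options")) as { url: string };
    console.log(`scanning ${options.url}`); // placeholder for the real scan
  }
}

// --- Bulk path: one queue message per URL ---
interface ScanQueue {
  sendBatch(messages: { body: { scanId: string; url: string } }[]): Promise<void>;
}

// Fan the bulk submission out to the queue. The consumer Worker picks messages
// up whenever capacity allows, which may be seconds to minutes later.
async function enqueueBulkScan(queue: ScanQueue, urls: string[]): Promise<string[]> {
  const messages = urls.map((url, i) => ({
    body: { scanId: `scan-${Date.now().toString(36)}-${i}`, url },
  }));
  await queue.sendBatch(messages);
  return messages.map((m) => m.body.scanId); // IDs the caller can poll later
}
```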
    <div>
      <h4>Discover related scans and better IOC search</h4>
      <a href="#discover-related-scans-and-better-ioc-search">
        
      </a>
    </div>
    <p>The <i>Related Scans</i> feature allows <a href="https://developers.cloudflare.com/api/resources/url_scanner/subresources/scans/methods/list/"><u>API</u></a>, <a href="http://dash.cloudflare.com"><u>Cloudflare dashboard</u></a> and <a href="http://radar.cloudflare.com"><u>Radar</u></a> users alike to view related scans directly within the URL Scanner Report. This helps users analyze and understand the context of a scanned URL by providing insights into similar URLs based on various attributes. Filter and search through URL Scanner reports to retrieve information on related scans, including those with identical favicons, similar HTML structures, and matching IP addresses.</p><p>The <i>Related Scans</i> tab presents a table with key headers corresponding to four distinct filters. Each entry includes the scanned URL and a direct link to view the detailed scan report, allowing for quick access to further information. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yRzKVd0M9sNF1uGOWA1vb/212008b5296ad6df23088571f0602930/image3.png" />
          </figure><p>We've introduced the ability to search by indicators of compromise (IOCs), such as IP addresses and hashes, directly within the user interface. Additionally, we've added advanced filtering options by various criteria, including screenshots, hashes, favicons, and HTML body content. This allows for more efficient organization and prioritization of URLs based on specific needs. While attackers often make minor modifications to the HTML structure of phishing pages to evade detection, our advanced filtering options enable users to search for URLs with similar HTML content. This means that even if the visual appearance of a phishing page changes slightly, we can still identify connections to known phishing campaigns by comparing the underlying HTML structure. This proactive approach helps users identify and block these threats effectively.</p><p>Another use case for the advanced filtering options is the search by hash; a user who has identified a malicious JavaScript file through a previous investigation can now search using the file's hash. By clicking on an HTTP transaction, you'll find a direct link to the relevant hash, immediately allowing you to pivot your investigation. The real benefit comes from identifying other potentially malicious sites that have that same hash. This means that if you know a given script is bad, you can quickly uncover other compromised websites delivering the same malware.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rWKgTrGLW297cVFbH9hSY/4555697b668d90f3df4d740bd91d3116/image7.png" />
          </figure><p>The user interface has also undergone significant improvements to enhance the overall experience. Other key updates include:</p><ul><li><p>Page title and favicon surfaced, providing immediate visual context</p></li><li><p>Detailed summaries are now available</p></li><li><p>Redirect chains allow users to understand the navigation path of a URL</p></li><li><p>The ability to scan files from URLs that trigger an automatic file download</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5O55W8CLMrlPANpzkPAUY0/35748cb200feb79de6251c79d2be87f9/image2.png" />
          </figure>
    <div>
      <h4>Download HAR files</h4>
      <a href="#download-har-files">
        
      </a>
    </div>
    <p>With the latest updates to our URL Scanner, users can now download both the <a href="https://en.wikipedia.org/wiki/HAR_(file_format)"><u>HAR (HTTP Archive) file</u></a> and the JSON report from their scans. The <a href="https://blog.cloudflare.com/introducing-har-sanitizer-secure-har-sharing/"><u>HAR file</u></a> provides a detailed record of all interactions between the web browser and the scanned website, capturing crucial data such as request and response headers, timings, and status codes. This format is widely recognized in the industry and can be easily analyzed using various tools, making it invaluable for developers and security analysts alike.</p><p>For instance, a threat intelligence analyst investigating a suspicious URL can download the HAR file to examine the network requests made during the scan. By analyzing this data, they can identify potential malicious behavior, such as unexpected redirects and correlate these findings with other threat intelligence sources. Meanwhile, the JSON report offers a structured overview of the scan results, including security verdicts and associated IOCs, which can be integrated into broader security workflows or automated systems.</p>
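<p>Since HAR is standardized JSON, a downloaded archive is easy to mine programmatically. A minimal sketch of the kind of triage described above, pulling out each request's URL and status and flagging redirects:</p>

```typescript
// Summarize a HAR archive: one row per captured request, with a redirect flag.
// Follows the standard HAR 1.2 structure (log.entries[].request/response).
interface HarEntry {
  request: { method: string; url: string };
  response: { status: number };
}

interface HarFile {
  log: { entries: HarEntry[] };
}

function summarizeHar(har: HarFile) {
  return har.log.entries.map((entry) => ({
    url: entry.request.url,
    status: entry.response.status,
    // 3xx responses are redirects; worth a second look when hunting for
    // unexpected redirect chains.
    redirect: entry.response.status >= 300 && entry.response.status < 400,
  }));
}
```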
    <div>
      <h4>New API version</h4>
      <a href="#new-api-version">
        
      </a>
    </div>
    <p>Finally, we’re announcing a <a href="https://developers.cloudflare.com/api/operations/urlscanner-create-scan-v2"><u>new version of our API</u></a>, allowing users to transition effortlessly to our service without needing to overhaul their existing workflows. Moving forward, any future features will be integrated into this updated API version, ensuring that users have access to the latest advancements in our URL scanning technology.</p><p>We understand that many organizations rely on automation and integrations with our previous API version. Therefore, we want to reassure our customers that there will be no immediate deprecation of the old API. Users can continue to use the existing API without disruption, giving them the flexibility to migrate at their own pace. We invite you to try the <a href="https://developers.cloudflare.com/api/operations/urlscanner-create-scan-v2"><u>new API</u></a> today and explore these new features to help with your web security efforts.</p>
    <div>
      <h3>Never miss an update</h3>
      <a href="#never-miss-an-update">
        
      </a>
    </div>
    <p>In summary, these updates to Security Level, cipher suite selection, and URL Scanner help us provide comprehensive, accessible, and proactive security solutions. Whether you're looking for automated protection, granular control over your encryption, or advanced threat detection capabilities, these new features are designed to empower you to build a safer and more secure online presence. We encourage you to explore these features in your Cloudflare dashboard and discover how they can benefit your specific needs.</p><p><i>We’ll continue to share roundup blog posts as we build and innovate. Follow along on the </i><a href="https://blog.cloudflare.com/"><i>Cloudflare Blog</i></a><i> for the latest news and updates. </i></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[URL Scanner]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">5E0Ceo6CEHszKOpdxV3sl0</guid>
            <dc:creator>Alexandra Moraru</dc:creator>
            <dc:creator>Mia Malden</dc:creator>
            <dc:creator>Yomna Shousha</dc:creator>
            <dc:creator>Sofia Cardita</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Cloudflare Incident Alerts]]></title>
            <link>https://blog.cloudflare.com/incident-alerts/</link>
            <pubDate>Mon, 25 Sep 2023 13:00:53 GMT</pubDate>
            <description><![CDATA[ Customers may now subscribe to Cloudflare Incident Alerts and choose when to get notified based on affected products and level of impact ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5zBBFxvFHSVMKPKZZ6sIhN/99bd2ca484df99774b400629df2758b0/image2-5.png" />
            
            </figure><p>A lot of people rely on Cloudflare. We serve over 46 million HTTP requests per second on average; millions of customers use our services, including 31% of the Fortune 1000. And these numbers are only growing.</p><p>Given the privileged position we sit in to help the Internet operate, we’ve always placed a very large emphasis on <a href="https://developers.cloudflare.com/support/about-cloudflare/enterprise-documentation/customer-incident-management-policy/">transparency during incidents</a>. But we’re constantly striving to do better.</p><p>That’s why today we are excited to announce Incident Alerts — available via <a href="https://developers.cloudflare.com/notifications/create-notifications/">email, webhook, or PagerDuty</a>. These notifications are easily accessible in the Cloudflare dashboard, and they’re customizable to prevent notification overload. And best of all, they’re available to everyone; you simply need a free account to get started.</p>
    <div>
      <h3>Lifecycle of an incident</h3>
      <a href="#lifecycle-of-an-incident">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6P0JRUijhlkJUdsVhCn9Zs/e229f73088b616881e991ec7903829c9/Incident-Workflow.png" />
            
            </figure><p>Without proper transparency, incidents cause confusion and waste resources for anyone that relies on the Internet. With so many different entities working together to make the Internet operate, diagnosing and troubleshooting can be complicated and time-consuming. By far the best solution is for providers to have transparent and proactive alerting, so any time something goes wrong, it’s clear exactly where the problem is.</p>
    <div>
      <h3>Cloudflare incident response</h3>
      <a href="#cloudflare-incident-response">
        
      </a>
    </div>
    <p>We understand the importance of proactive and transparent alerting around incidents. We have worked to improve communications by directly alerting Enterprise-level customers and allowing everyone to subscribe to an RSS feed or leverage the <a href="https://www.cloudflarestatus.com/api">Cloudflare Status API</a>. Additionally, we update the <a href="https://www.cloudflarestatus.com/">Cloudflare status page</a> — which catalogs incident reports, updates, and resolutions — throughout an incident’s lifecycle, as well as tracking scheduled maintenance.</p><p>However, not everyone wants to use the Status API or subscribe to an RSS feed. Both of these options require some infrastructure and programmatic effort on the customer’s end, and neither offers simple configuration to filter out noise like scheduled maintenance. For those who don’t want to build anything themselves, visiting the status page is still a pull, rather than a push, model. Customers need to take it upon themselves to monitor Cloudflare’s status — and timeliness in these situations can make a world of difference.</p><p>Without a proactive channel of communication, there can be a disconnect between Cloudflare and our customers during incidents. Although we update the status page as soon as possible, the lack of a push notification represents a gap in meeting our customers’ expectations. The new Cloudflare Incident Alerts aim to remedy that.</p>
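<p>To make the pull model concrete: checking the status page programmatically means polling on a timer. The sketch below assumes the common Statuspage v2 <code>status.json</code> payload shape served at cloudflarestatus.com; verify it against the Status API documentation linked above.</p>

```typescript
// Shape of the status payload (assumption: Statuspage v2 status.json format).
interface StatusPayload {
  status: {
    indicator: "none" | "minor" | "major" | "critical";
    description: string;
  };
}

// Anything other than "none" means an incident or degradation is ongoing.
function needsAttention(payload: StatusPayload): boolean {
  return payload.status.indicator !== "none";
}

// The pull model: the customer has to poll; nothing pushes to them.
async function pollStatus(): Promise<boolean> {
  const res = await fetch("https://www.cloudflarestatus.com/api/v2/status.json");
  return needsAttention((await res.json()) as StatusPayload);
}
```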
    <div>
      <h3>Simple, free, and fast notifications</h3>
      <a href="#simple-free-and-fast-notifications">
        
      </a>
    </div>
    <p>We want to proactively notify you as soon as a Cloudflare incident may be affecting your service — without any programmatic steps on your end. Unlike the Status API and an RSS feed, Cloudflare Incident Alerts are configurable through just a few clicks in the dashboard, and you can choose to receive email, PagerDuty, or webhook alerts for incidents involving specific products at different levels of impact. The Status API will continue to be available.</p><p>With this multidimensional granularity, you can filter notifications by specific service and severity. If you are, for example, a Cloudflare for SaaS customer, you may want alerts for delays in custom hostname activation but not for increased latency on Stream. Likewise, you may only care about critical incidents instead of getting notified for minor incidents. Incident Alerts give you the ability to choose.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/w52Ra0KXOX0yPPmfMo72w/8e6987f0f4606c7b63ae2d8c2bd14c48/Incident-Workflow-with-alerts.png" />
            
            </figure><p>Lifecycle of an Incident</p>
    <div>
      <h3>How to filter incidents to fit your needs</h3>
      <a href="#how-to-filter-incidents-to-fit-your-needs">
        
      </a>
    </div>
    <p>You can filter incident notifications with the following categories:</p><ul><li><p>Cloudflare Sites and Services: get notified when an incident is affecting certain products or product areas.</p></li><li><p>Impact level: get notified for critical, major, and/or minor incidents.</p></li></ul><p>These categories are not mutually exclusive. Here are a few possible configurations:</p><ul><li><p>Notify me via email for <b>all critical incidents.</b></p></li><li><p>Notify me via webhook for <b>critical &amp; major incidents affecting Pages.</b></p></li><li><p>Notify me via PagerDuty for <b>all incidents affecting Stream.</b></p></li></ul><p>With over fifty different <a href="https://developers.cloudflare.com/fundamentals/notifications/notification-available/">alerts available via the dashboard</a>, you can tailor your notifications to what you need. You can customize not only which alerts you are receiving but also how you would like to be notified. With PagerDuty, webhooks, and email integrated into the system, you have the flexibility of choosing what will work best with your working environment. Plus, with multiple configurations within many of the available notifications, we make it easy to only get alerts about what you want, when you want them.</p>
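<p>On the receiving end, a webhook destination is just an HTTP endpoint that applies your own routing policy. The sketch below is illustrative only: the payload field names are assumptions, not the documented schema, so inspect a real delivery to see the actual shape.</p>

```typescript
// Hypothetical incident-alert payload; field names are assumptions, not the
// documented webhook schema.
interface IncidentAlert {
  name: string; // the notification name you configured
  text: string; // human-readable alert body
}

// Example policy: page the on-call for critical incidents, log the rest.
function routeAlert(alert: IncidentAlert): "page" | "log" {
  return /critical/i.test(alert.text) ? "page" : "log";
}
```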
    <div>
      <h3>Try it out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>You can start to configure incident alerts on your Cloudflare account today. Here’s how:</p><ol><li><p>Navigate to the Cloudflare dashboard → Notifications.</p></li><li><p>Select “Add”.</p></li><li><p>Select “Incident Alerts”.</p></li><li><p>Enter your notification name and description.</p></li><li><p>Select the impact level(s) and component(s) for which you would like to be notified. If either field is left blank, it will default to all impact levels or all components, respectively.</p></li><li><p>Select how you want to receive the notifications:</p><ul><li><p>Check PagerDuty</p></li><li><p>Add Webhook</p></li><li><p>Add email recipient</p></li></ul></li><li><p>Select “Save”.</p></li><li><p>Test the notification by selecting “Test” on the right side of its row.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6UVSTYyXZ7n66QlV8KppM7/2d3d8388d03e858d4082ec1aa44a0df6/Screenshot-2023-09-22-at-11.40.01.png" />
            
            </figure><p>For more information on Cloudflare’s Alert Notification System, visit our documentation <a href="https://developers.cloudflare.com/fundamentals/notifications/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">3HzORDk6OvVvdXScB5NTM</guid>
            <dc:creator>Mia Malden</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing new Cloudflare for SaaS documentation]]></title>
            <link>https://blog.cloudflare.com/introducing-new-cloudflare-for-saas-documentation/</link>
            <pubDate>Tue, 09 Aug 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare for SaaS offers a suite of Cloudflare products and add-ons to improve the security, performance, and reliability of SaaS providers. Now, the Cloudflare for SaaS documentation outlines how to optimize it in order to meet your goals ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2i5rqkqFn7HJrwk36od0pM/df3914b54964a9d678cda9ab0fe97968/image3-4.png" />
            
            </figure><p>As a SaaS provider, you’re juggling many challenges while building your application, whether it’s custom domain support, protection from attacks, or maintaining an origin server. In 2021, we were proud to announce <a href="/cloudflare-for-saas/">Cloudflare for SaaS for Everyone</a>, which allows anyone to use Cloudflare to cover those challenges, so they can focus on other aspects of their business. This product has a variety of potential implementations; now, we are excited to announce a new section in our <a href="https://developers.cloudflare.com/">Developer Docs</a> specifically devoted to <a href="https://developers.cloudflare.com/cloudflare-for-saas/">Cloudflare for SaaS documentation</a> to allow you to take full advantage of its product suite.</p>
    <div>
      <h3>Cloudflare for SaaS solution</h3>
      <a href="#cloudflare-for-saas-solution">
        
      </a>
    </div>
    <p>You may remember, from our <a href="/cloudflare-for-saas-for-all-now-generally-available/">October 2021 blog post</a>, all the ways that Cloudflare provides solutions for SaaS providers:</p><ul><li><p>Set up an origin server</p></li><li><p>Encrypt your customers’ traffic</p></li><li><p>Keep your customers online</p></li><li><p>Boost the performance of global customers</p></li><li><p>Support custom domains</p></li><li><p>Protect against attacks and bots</p></li><li><p>Scale for growth</p></li><li><p>Provide insights and analytics</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LdxDeVHaUHAy19wdbLfVe/aaec3c62c1616d393a8af6c6daf270d0/image2-5.png" />
            
            </figure><p>However, we received feedback from customers indicating confusion around actually <i>using</i> the capabilities of Cloudflare for SaaS because there are so many features! With the existing documentation, it wasn’t always clear how to enhance security and performance, or how to support custom domains. Now, we want to show customers how to use Cloudflare for SaaS to its full potential by including more product integrations in the docs, rather than focusing only on the SSL/TLS piece.</p>
    <div>
      <h3>Bridging the gap</h3>
      <a href="#bridging-the-gap">
        
      </a>
    </div>
    <p>Cloudflare for SaaS can be overwhelming with so many possible add-ons and configurations. That’s why the new docs are organized into six main categories, housing a number of new, detailed guides (for example, WAF for SaaS and Regional Services for SaaS):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oI4ffuIoR47X455bljT6c/15422c36a5f9313c1113282577d913f2/image1-12.png" />
            
            </figure><p>Once you get your SaaS application up and running with the <a href="https://developers.cloudflare.com/cloudflare-for-saas/getting-started/">Get Started</a> page, you can find which configurations are best suited to your needs based on your priorities as a provider. Even if you aren’t sure what your goals are, this setup outlines the possibilities much more clearly through a number of new documents and product guides such as:</p><ul><li><p><a href="https://developers.cloudflare.com/cloudflare-for-saas/start/advanced-settings/regional-services-for-saas/">Regional Services for SaaS</a></p></li><li><p><a href="https://developers.cloudflare.com/analytics/graphql-api/tutorials/end-customer-analytics/">Querying HTTP events by hostname with GraphQL</a></p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-for-saas/domain-support/migrating-custom-hostnames/">Migrating custom hostnames</a></p></li></ul><p>Instead of pondering over vague subsection titles, you can peruse with purpose in mind. The advantages and possibilities of Cloudflare for SaaS are highlighted instead of hidden.</p>
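<p>To give a concrete sense of the per-hostname analytics the docs cover, here is a minimal Python sketch that builds a GraphQL payload for Cloudflare’s Analytics API. The zone tag, hostname, and date range are hypothetical placeholders, and the field names follow the <code>httpRequestsAdaptiveGroups</code> dataset; consult the linked tutorial for the authoritative query shape.</p>

```python
import json

# Hypothetical zone tag and customer hostname -- substitute your own values.
ZONE_TAG = "023e105f4ecef8ad9ca31a8372d0c353"
HOSTNAME = "app.customer-example.com"

# GraphQL query counting HTTP requests for a single custom hostname,
# grouped by day over a fixed (placeholder) date range.
QUERY = """
query RequestsByHostname($zoneTag: String!, $host: String!) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequestsAdaptiveGroups(
        limit: 100
        filter: {
          clientRequestHTTPHost: $host
          date_geq: "2022-07-01"
          date_leq: "2022-07-07"
        }
      ) {
        count
        dimensions { date }
      }
    }
  }
}
"""

def build_payload(zone_tag: str, host: str) -> str:
    """Serialize the query and variables for a POST to the GraphQL endpoint."""
    return json.dumps({
        "query": QUERY,
        "variables": {"zoneTag": zone_tag, "host": host},
    })

# This payload would be POSTed to https://api.cloudflare.com/client/v4/graphql
# with an "Authorization: Bearer <API token>" header.
payload = build_payload(ZONE_TAG, HOSTNAME)
```

<p>Because the hostname is a query variable, a SaaS provider can reuse the same query to report per-customer traffic for any custom hostname they manage.</p>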
    <div>
      <h3>Possible configurations</h3>
      <a href="#possible-configurations">
        
      </a>
    </div>
    <p>This setup makes it much easier to find the configurations that meet your goals as a SaaS provider.</p><p>For example, consider performance. Previously, there was no documentation covering reduced latency for SaaS providers. Now, the Performance section explains the performance benefits you receive automatically by onboarding with Cloudflare for SaaS. Additionally, it offers three options for reducing latency even further through brand-new docs:</p><ul><li><p><a href="https://developers.cloudflare.com/cloudflare-for-saas/performance/early-hints-for-saas/">Early Hints for SaaS</a></p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-for-saas/performance/cache-for-saas/">Cache for SaaS</a></p></li><li><p><a href="https://developers.cloudflare.com/cloudflare-for-saas/performance/argo-for-saas/">Argo Smart Routing for SaaS</a></p></li></ul><p>Similarly, the new organization surfaces <a href="https://developers.cloudflare.com/cloudflare-for-saas/security/waf-for-saas/">WAF for SaaS</a> as a previously hidden security solution, giving providers the ability to enable automatic protection from vulnerabilities and the flexibility to create custom rules. This is conveniently accompanied by a <a href="https://developers.cloudflare.com/cloudflare-for-saas/security/waf-for-saas/managed-rulesets/">step-by-step tutorial using Cloudflare Managed Rulesets</a>.</p>
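<p>As a sketch of the custom-rule flexibility this unlocks, a per-hostname rule in Cloudflare’s Rules language might use an expression like the following (the hostname and path are hypothetical placeholders):</p>

```
(http.host eq "app.customer-example.com" and http.request.uri.path contains "/admin")
```

<p>Paired with an action such as Block or Managed Challenge, an expression like this lets a SaaS provider scope protection to a single customer’s custom hostname rather than the whole zone.</p>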
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>While this transition represents an improvement in the Cloudflare for SaaS docs, we plan to make them even more accessible. Some tutorials, such as our <a href="https://developers.cloudflare.com/cloudflare-for-saas/security/waf-for-saas/managed-rulesets/">Managed Ruleset Tutorial</a>, are already live within the tile. However, more step-by-step guides for Cloudflare for SaaS products and add-ons will further enable our customers to take full advantage of the available product suite. In particular, keep an eye out for expanding documentation around using Workers for Platforms.</p>
    <div>
      <h3>Check it out</h3>
      <a href="#check-it-out">
        
      </a>
    </div>
    <p>Visit the new <a href="http://www.developers.cloudflare.com/cloudflare-for-saas">Cloudflare for SaaS tile</a> to see the updates. If you are a SaaS provider interested in extending Cloudflare benefits to your customers through Cloudflare for SaaS, visit our <a href="https://www.cloudflare.com/saas/">Cloudflare for SaaS overview</a> and our <a href="https://developers.cloudflare.com/cloudflare-for-saas/plans/">Plans page</a>.</p> ]]></content:encoded>
            <category><![CDATA[Technical Writing]]></category>
            <category><![CDATA[Developer Documentation]]></category>
            <category><![CDATA[Cloudflare for SaaS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[SaaS]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <guid isPermaLink="false">7cA2oDJgFIx7vyQyTY5Bk8</guid>
            <dc:creator>Mia Malden</dc:creator>
        </item>
    </channel>
</rss>