
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 21:28:45 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Just landed: streaming ingestion on Cloudflare with Arroyo and Pipelines]]></title>
            <link>https://blog.cloudflare.com/cloudflare-acquires-arroyo-pipelines-streaming-ingestion-beta/</link>
            <pubDate>Thu, 10 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve just shipped our new streaming ingestion service, Pipelines — and we’ve acquired Arroyo, enabling us to bring new SQL-based, stateful transformations to Pipelines and R2. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re launching the open beta of Pipelines, our streaming ingestion product. Pipelines allows you to ingest high volumes of structured, real-time data, and load it into our <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>object storage service, R2</u></a>. You don’t have to manage any of the underlying infrastructure, worry about scaling shards or metadata services, and you pay for the data processed (and not by the hour). Anyone on a Workers paid plan can start using it to ingest and batch data — at tens of thousands of requests per second (RPS) — directly into R2.</p><p>But this is just the tip of the iceberg: you often want to transform the data you’re ingesting, hydrate it on-the-fly from other sources, and write it to an open table format (such as Apache Iceberg), so that you can efficiently query that data once you’ve landed it in object storage.</p><p>The good news is that we’ve thought about that too, and we’re excited to announce that we’ve acquired <a href="https://www.arroyo.dev/"><u>Arroyo</u></a>, a cloud-native, distributed stream processing engine, to make that happen.</p><p>With Arroyo <i>and </i>our just announced <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/">R2 Data Catalog</a>, we’re getting increasingly serious about building a data platform that allows you to ingest data across the planet, store it at scale, and <i>run compute over it</i>. </p><p>To get started, you can dive into the <a href="http://developers.cloudflare.com/pipelines/"><u>Pipelines developer docs</u></a> or just run this <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> command to create your first pipeline:</p>
            <pre><code>$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket

...
✅ Successfully created pipeline my-clickstream-pipeline with ID 0e00c5ff09b34d018152af98d06f5a1xvc</code></pre>
            <p>… and then write your first record(s):</p>
            <pre><code>$ curl -d '[{"payload": [],"id":"abc-def"}]' 
"https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflarestorage.com/"</code></pre>
            <p>However, the true power comes from processing data streams between ingestion and the moment they’re written to sinks like R2. Being able to write SQL that acts on windows of data <i>as it’s being ingested</i>, transforming and aggregating it, and even extracting insights from the data in real time, turns out to be extremely powerful.</p><p>This is where Arroyo comes in: we’re going to bring the best parts of Arroyo into Pipelines and deeply integrate it with Workers, R2, and the rest of our Developer Platform.</p>
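<p>To make the idea of windowed, stateful computation concrete, here is a rough conceptual sketch in TypeScript (this is illustrative only, not Arroyo’s engine or API; all names here are invented) of a tumbling-window sum over an event stream:</p>

```typescript
// Conceptual sketch of a tumbling-window aggregation, the kind of
// stateful transformation a streaming SQL engine runs incrementally.
// Illustrative only; not Arroyo's actual implementation or API.
type StreamEvent = { ts: number; value: number };

// Equivalent in spirit to a streaming query like:
//   SELECT window_start, sum(value) FROM events
//   GROUP BY tumble(ts, interval '10 seconds')
function tumblingSum(events: StreamEvent[], windowMs: number): Map<number, number> {
  const windows = new Map<number, number>();
  for (const e of events) {
    // Assign each event to the fixed-size window containing its timestamp.
    const windowStart = Math.floor(e.ts / windowMs) * windowMs;
    windows.set(windowStart, (windows.get(windowStart) ?? 0) + e.value);
  }
  return windows;
}
```

<p>A real engine keeps this state durably and emits results as windows close; the key point is that the compact aggregation state, not the raw stream, is what needs to be stored.</p>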
    <div>
      <h2>The Arroyo origin story </h2>
      <a href="#the-arroyo-origin-story">
        
      </a>
    </div>
    <p><i>(By Micah Wylde, founder of Arroyo)</i></p><p>We started Arroyo in 2023 to bring real-time (<i>stream</i>) processing to everyone who works with data. Modern companies rely on data pipelines to power their applications and businesses — from user customization, recommendations, and anti-fraud, to the emerging world of AI agents.</p><p>But today, most of these pipelines operate in batch, running once per hour, day, or even month. After many years working on stream processing at companies like Lyft and Splunk, we knew why: it was just too hard for developers and data scientists to build correct, performant, and reliable pipelines. Large tech companies hire streaming experts to build and operate these systems, but everyone else is stuck waiting for batches to arrive.</p><p>When we started, the dominant solution for streaming pipelines — and what we ran at Lyft and Splunk — was Apache Flink. Flink was the first system that successfully combined a fault-tolerant (able to recover consistently from failures), distributed (across multiple machines), stateful (able to remember data about past events) dataflow with a graph-construction API. This combination of features meant that we could finally build powerful real-time data applications, with capabilities like windows, aggregations, and joins. But while Flink had the necessary power, in practice the API proved too hard and low-level for non-expert users, and the stateful nature of the resulting services required endless operations.</p><p>We realized we would need to build a new streaming engine — one with the power of Flink, but designed for product engineers and data scientists, and built to run on modern cloud infrastructure. We started with SQL as our API because it’s easy to use, widely known, and declarative. We built it in Rust for speed and operational simplicity (no JVM tuning required!).
We constructed an object-storage-native state backend, simplifying the challenge of running stateful pipelines — each of which is like a weird, specialized database. And then in the summer of 2023, we open-sourced it. Today, dozens of companies are running Arroyo pipelines with use cases including data ingestion, anti-fraud, IoT observability, and financial trading.</p><p>But we always knew that the engine was just one piece of the puzzle. To make streaming as easy as batch, users need to be able to develop and test query logic, backfill on historical data, and deploy serverlessly without having to worry about cluster sizing or ongoing operations. Democratizing streaming ultimately meant building a complete data platform. And when we started talking with Cloudflare, we realized they already had all of the pieces in place: R2 provides object storage for state and data at rest, Cloudflare <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> handles data in transit, and Workers runs user code safely and efficiently. And Cloudflare, uniquely, allows us to push these systems all the way to the edge, enabling a new paradigm of local stream processing that will be key for a future of data sovereignty and AI.</p><p>That’s why we’re incredibly excited to join the Cloudflare team to make this vision a reality.</p>
    <div>
      <h2>Ingestion at scale</h2>
      <a href="#ingestion-at-scale">
        
      </a>
    </div>
    <p>While transformations and a streaming SQL API are on the way for Pipelines, it already solves two critical parts of the data journey: globally distributed, high-throughput ingestion and efficient loading into object storage. </p><p>Creating a pipeline is as simple as running one command: </p>
            <pre><code>$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket

🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline with ID 
0e00c5ff09b34d018152af98d06f5a1xvc

Id:    0e00c5ff09b34d018152af98d06f5a1xvc
Name:  my-clickstream-pipeline
Sources:
  HTTP:
    Endpoint:        https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/
    Authentication:  off
    Format:          JSON
  Worker:
    Format:  JSON
Destination:
  Type:         R2
  Bucket:       my-bucket
  Format:       newline-delimited JSON
  Compression:  GZIP
Batch hints:
  Max bytes:     100 MB
  Max duration:  300 seconds
  Max records:   100,000

🎉 You can now send data to your pipeline!

Send data to your pipeline's HTTP endpoint:
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'</code></pre>
            <p>By default, a pipeline can ingest data from two sources – Workers and an HTTP endpoint – and load batched events into an R2 bucket. This gives you an out-of-the-box solution for streaming raw event data into object storage. If the defaults don’t work, you can configure pipelines during creation or anytime after. Options include: adding authentication to the HTTP endpoint, configuring CORS to allow browsers to make cross-origin requests, and specifying output file compression and batch settings.</p><p>We’ve built Pipelines for high ingestion volumes from day 1. Each pipeline can scale to ~100,000 records per second (and we’re just getting started here). Once records are written to a Pipeline, they are then durably stored, batched, and written out as files in an R2 bucket. Batching is critical here: if you’re going to act on and query that data, you don’t want your query engine querying millions (or tens of millions) of tiny files. It’s slow (per-file &amp; request overheads), inefficient (more files to read), and costly (more operations). Instead, you want to find the right balance between batch size for your query engine and latency (not waiting too long for a batch): Pipelines allows you to configure this.</p><p>Output files are also partitioned by date and time, using the standard Hive partitioning scheme. This optimizes queries even further, because your query engine can simply skip data that is irrelevant to the query you’re running. The output in your R2 bucket might look like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7q63u2kRoYBAZJtgfcF874/2a7341e1cba6e371e0eed311e89fec6a/image1.png" />
          </figure><p><sup><i>Hive-partitioned files from Pipelines in an R2 bucket</i></sup></p><p>Output files are stored as newline-delimited JSON (NDJSON), which makes it easy to materialize a stream from these files (hint: in the future you’ll be able to use R2 as a pipeline source too). Finally, the file names are <a href="https://github.com/ulid/spec"><u>ULIDs</u></a>, so they’re sorted by time by default.</p>
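<p>To make the partitioning concrete, here is a hedged sketch of deriving a Hive-style output key for a batch (the exact key layout and field names Pipelines writes are its own; this just mirrors the general pattern):</p>

```typescript
// Illustrative sketch of a Hive-style partitioned object key; the
// actual layout Pipelines writes may differ.
function outputKey(batchTime: Date, fileId: string): string {
  const y = batchTime.getUTCFullYear();
  const m = String(batchTime.getUTCMonth() + 1).padStart(2, "0");
  const d = String(batchTime.getUTCDate()).padStart(2, "0");
  const h = String(batchTime.getUTCHours()).padStart(2, "0");
  // A query filtered on date/hour lets the engine prune every other
  // partition entirely, never reading those objects.
  return `event_date=${y}-${m}-${d}/hr=${h}/${fileId}.json.gz`;
}
```

<p>Because the trailing file ID is a time-sortable ULID, listing a partition returns files in roughly the order they were written.</p>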
    <div>
      <h2>First you shard, then you shard some more</h2>
      <a href="#first-you-shard-then-you-shard-some-more">
        
      </a>
    </div>
    <p>What makes Pipelines so horizontally scalable <i>and</i> able to acknowledge writes quickly is how we built it: we use Durable Objects and the <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>embedded, zero-latency SQLite</u></a> storage within each Durable Object to immediately persist data as it’s written, before then processing it and writing it to R2.</p><p>For example: imagine you’re an e-commerce or SaaS site and need to ingest website usage data (known as <i>clickstream data</i>), and make it available to your data science team to query. The infrastructure that handles this workload has to be resilient to several failure scenarios. The ingestion service needs to maintain high availability in the face of bursts in traffic. Once ingested, the data needs to be buffered, to minimize downstream invocations and thus downstream cost. Finally, the buffered data needs to be delivered to a sink, with appropriate retry &amp; failure handling if the sink is unavailable. Each step of this process needs to signal backpressure upstream when overloaded. It also needs to scale: up during major sales or events, and down during the quieter periods of the day.</p><p>Data engineers reading this post might be familiar with the status quo of using Kafka and the associated ecosystem to handle this. But if you’re an application engineer, you can use Pipelines to build an ingestion service <i>without</i> learning about Kafka, ZooKeeper, and Kafka Streams.</p>
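<p>The buffering requirement above boils down to a simple flush rule. Here is a minimal sketch (names and shapes are illustrative, not Pipelines internals), using the batch hints shown in the pipeline created earlier:</p>

```typescript
// Sketch of a batch-flush decision driven by the three batch hints
// shown in the pipeline config (100 MB / 300 seconds / 100,000 records).
// Illustrative only; not Pipelines' actual implementation.
type BatchHints = { maxBytes: number; maxDurationMs: number; maxRecords: number };

function shouldFlush(bytes: number, records: number, ageMs: number, hints: BatchHints): boolean {
  // Flush as soon as any one limit is hit: batches stay large enough
  // to query efficiently, but are never held past the max duration.
  return (
    bytes >= hints.maxBytes ||
    records >= hints.maxRecords ||
    ageMs >= hints.maxDurationMs
  );
}
```

<p>The duration limit bounds end-to-end latency during quiet periods, while the byte and record limits keep files from growing unbounded during bursts.</p>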
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eRIUocbyvY2oHwEK34pzE/e2ef72b2858c02e890446cfd34accb45/image3.png" />
          </figure><p><sup><i>Pipelines horizontal sharding</i></sup></p><p>The diagram above shows how Pipelines splits the control plane, which is responsible for accounting, tracking shards, and Pipelines lifecycle events, and the data path, which is a scalable group of Durable Object shards.</p><p>When a record (or batch of records) is written to Pipelines:</p><ol><li><p>The Pipelines Worker receives the records, either through the fetch handler or a Worker binding.</p></li><li><p>It contacts the Coordinator, based on the <code>pipeline_id</code>, to get the execution plan; subsequent reads are cached to reduce pressure on the Coordinator.</p></li><li><p>It executes the plan, which first shards to a set of Executors that primarily serve to scale request handling.</p></li><li><p>These then re-shard to another set of Executors that actually handle the writes, beginning by persisting to Durable Object storage, which is replicated for durability and availability by the <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/#under-the-hood-storage-relay-service"><u>Storage Relay Service</u></a> (SRS).</p></li><li><p>After SRS, we pass the data to any configured Transform Workers to customize it.</p></li><li><p>The data is batched and written to output files.</p></li><li><p>The files are compressed (if configured) and written to the configured R2 bucket.</p></li></ol><p>Each step of this pipeline can signal backpressure upstream. We do this by leveraging <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream"><u>ReadableStreams</u></a> and responding with <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/429"><u>429s</u></a> when the total number of bytes awaiting write exceeds a threshold. 
Each ReadableStream is able to cross Durable Object boundaries by using <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>JSRPC</u></a> calls between Durable Objects. To improve performance, we use RPC stubs for connection reuse between Durable Objects. Each step is also able to retry operations, to handle any temporary unavailability in the Durable Objects or R2.</p><p>We also guarantee delivery even while updating an existing pipeline. When you update an existing pipeline, we create a new deployment, including all the shards and Durable Objects described above. Requests are gracefully re-routed to the new pipeline. The old pipeline continues to write data into R2, until all the Durable Object storage is drained. We spin down the old pipeline only after all the data has been written out. This way, you won’t lose data even while updating a pipeline.</p><p>You’ll notice there’s one interesting part in here — the Transform Workers — which we haven’t yet exposed. As we work to integrate Arroyo’s streaming engine with Pipelines, this will be a key part of how we hand over data for Arroyo to process.</p>
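<p>The backpressure rule described above can be sketched in a few lines (a simplified model with an invented threshold, not our actual executor code):</p>

```typescript
// Minimal sketch of write-path backpressure: once the bytes awaiting
// write exceed a threshold, new writes are rejected with HTTP 429 so
// pressure propagates back to producers. Illustrative only.
class WriteBuffer {
  private pendingBytes = 0;
  constructor(private readonly maxPendingBytes: number) {}

  // Returns the HTTP status the ingest endpoint would respond with.
  offer(recordBytes: number): number {
    if (this.pendingBytes + recordBytes > this.maxPendingBytes) {
      return 429; // too much unflushed data: signal backpressure
    }
    this.pendingBytes += recordBytes;
    return 200;
  }

  // Called after a batch has been durably written downstream.
  drained(bytes: number): void {
    this.pendingBytes = Math.max(0, this.pendingBytes - bytes);
  }
}
```

<p>A producer that retries 429s with backoff naturally slows to the rate the downstream sink can absorb.</p>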
    <div>
      <h2>So, what’s it cost?</h2>
      <a href="#so-whats-it-cost">
        
      </a>
    </div>
    <p>During the first phase of the open beta, there will be no additional charges beyond standard R2 storage and operation costs incurred when loading and accessing data. And as always, egress directly from R2 buckets is free, so you can process and query your data from any cloud or region without worrying about data transfer costs adding up.</p><p>In the future, we plan to introduce pricing based on volume of data ingested into Pipelines and delivered from Pipelines:</p><table><tr><td><p>
</p></td><td><p><b>Workers Paid ($5 / month)</b></p></td></tr><tr><td><p><b>Ingestion</b></p></td><td><p>First 50 GB per month included</p><p>$0.02 per additional GB</p></td></tr><tr><td><p><b>Delivery to R2</b></p></td><td><p>First 50 GB per month included</p><p>$0.02 per additional GB</p></td></tr></table><p>We’re also planning to make Pipelines available on the Workers Free plan as the beta progresses.</p><p>We’ll be sharing more as we bring transformations and additional sinks to Pipelines. We’ll provide at least 30 days’ notice before we make any changes or start charging for usage, which we expect to do by September 15, 2025.</p>
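<p>As a worked example using only the numbers in the table above: a month with 150 GB ingested and 120 GB delivered to R2 would cost (150 − 50) × $0.02 + (120 − 50) × $0.02 = $2.00 + $1.40 = $3.40, on top of the $5 Workers Paid plan:</p>

```typescript
// Worked example of the planned Pipelines pricing above: ingestion and
// delivery each include 50 GB/month, then $0.02 per additional GB.
function pipelinesMonthlyCostUSD(ingestedGB: number, deliveredGB: number): number {
  const overage = (gb: number) => Math.max(0, gb - 50) * 0.02;
  return overage(ingestedGB) + overage(deliveredGB);
}
```

<p>Staying under 50 GB on both meters during the beta-to-paid transition would incur no Pipelines charges at all.</p>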
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>There’s a lot to build here, and we’re keen to build on the powerful components that Arroyo has already created: integrating Workers as UDFs (User-Defined Functions), adding new sources like Kafka clients, and extending Pipelines with new sinks (beyond R2).</p><p>We’ll also be integrating Pipelines with our just-launched <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/">R2 Data Catalog</a>: enabling you to ingest streams of data directly into Iceberg tables and immediately query them, without needing to rely on other systems.</p><p>In the meantime, you can:</p><ul><li><p>Get started and <a href="http://developers.cloudflare.com/pipelines/getting-started/"><u>create your first Pipeline</u></a></p></li><li><p><a href="http://developers.cloudflare.com/pipelines/"><u>Read the docs</u></a></p></li><li><p>Join the <code>#pipelines-beta</code> channel on <a href="http://discord.cloudflare.com/"><u>our Developer Discord</u></a></p></li></ul><p>… or deploy the example project directly: </p>
            <pre><code>$ npm create cloudflare@latest -- pipelines-starter \
--template="cloudflare/pipelines-starter"</code></pre>
            ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Pipelines]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">7rKz4iUFCDuhtjGXVbgFzl</guid>
            <dc:creator>Micah Wylde</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Pranshu Maheshwari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues]]></title>
            <link>https://blog.cloudflare.com/how-we-built-cloudflare-queues/</link>
            <pubDate>Thu, 24 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Learn how we built Cloudflare Queues using our own Developer Platform and how it evolved to a geographically-distributed, horizontally-scalable architecture built on Durable Objects. Our new architecture supports over 10x more throughput and over 3x lower latency compared to the previous version. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/developer-platform/products/cloudflare-queues/"><u>Cloudflare Queues</u></a> lets developers decouple their Workers into event-driven services. Producer Workers write events to a Queue, and consumer Workers are invoked to take actions on the events. For example, you can use a Queue to decouple an e-commerce website from a service which sends purchase confirmation emails to users. During 2024’s Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#queues-is-ga"><u>announced that Cloudflare Queues is now Generally Available</u></a>, with significant performance improvements that enable larger workloads. To accomplish this, we switched to a new architecture for Queues that enabled the following improvements:</p><ul><li><p>Median latency for sending messages has dropped from ~200ms to ~60ms</p></li><li><p>Maximum throughput for each Queue has increased over 10x, from 400 to 5000 messages per second</p></li><li><p>Maximum Consumer concurrency for each Queue has increased from 20 to 250 concurrent invocations</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PvIsHfLwwIkp2LXVUDhmG/99f286f2f89d10b2a7e359d8d66f6dba/image5.png" />
          </figure><p><sup><i>Median latency drops from ~200ms to ~60ms as Queues are migrated to the new architecture</i></sup></p><p>In this blog post, we'll share details about how we built Queues using Durable Objects and the Cloudflare Developer Platform, and how we migrated from an initial Beta architecture to a geographically-distributed, horizontally-scalable architecture for General Availability.</p>
    <div>
      <h3>v1 Beta architecture</h3>
      <a href="#v1-beta-architecture">
        
      </a>
    </div>
    <p>When initially designing Cloudflare Queues, we decided to build something simple that we could get into users' hands quickly. First, we considered leveraging an off-the-shelf messaging system such as Kafka or Pulsar. However, we decided that it would be too challenging to operate these systems at scale with the large number of isolated tenants that we wanted to support.</p><p>Instead of investing in new infrastructure, we decided to build on top of one of Cloudflare's existing developer platform building blocks: <b>Durable Objects.</b> <a href="https://www.cloudflare.com/developer-platform/durable-objects/"><u>Durable Objects</u></a> are a simple, yet powerful building block for coordination and storage in a distributed system. In our initial <i>v1 </i>architecture, each Queue was implemented using a single Durable Object. As shown below, clients would send messages to a Worker running in their region, which would be forwarded to the single Durable Object hosted in the WNAM (Western North America) region. We used a single Durable Object for simplicity, and hosted it in WNAM for proximity to our centralized configuration API service.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yxj5Gut3usYa0mFbRddXU/881f5905f789bc2f910ee1b2dcadac92/image1.png" />
          </figure><p>One of a Queue's main responsibilities is to accept and store incoming messages. Sending a message to a <i>v1</i> Queue used the following flow:</p><ul><li><p>A client sends a POST request containing the message body to the Queues API at <code>/accounts/:accountID/queues/:queueID/messages</code></p></li><li><p>The request is handled by an instance of the <b>Queue Broker Worker</b> in a Cloudflare data center running near the client.</p></li><li><p>The Worker performs authentication, and then uses Durable Objects <code>idFromName</code> API to route the request to the <b>Queue Durable Object</b> for the given <code>queueID</code></p></li><li><p>The Queue Durable Object persists the message to storage before returning a <i>success </i>back to the client.</p></li></ul><p>Durable Objects handled most of the heavy-lifting here: we did not need to set up any new servers, storage, or service discovery infrastructure. To route requests, we simply provided a <code>queueID</code> and the platform handled the rest. To store messages, we used the Durable Object storage API to <code>put</code> each message, and the platform handled reliably storing the data redundantly.</p>
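<p>The routing step is worth seeing in code. Here is a hedged sketch of the v1 flow, with a toy namespace standing in for a real Durable Object binding (the actual <code>idFromName</code> API returns an opaque <code>DurableObjectId</code> that you pass to <code>get()</code> to obtain a stub):</p>

```typescript
// Sketch of v1 routing: a stable name derived from the queue ID always
// resolves to the same Durable Object. The namespace below is a toy
// stand-in for a real `env.QUEUE_DO` binding; names are illustrative.
interface DONamespace {
  idFromName(name: string): string;
}

const toyNamespace: DONamespace = {
  // The real idFromName hashes the name into an opaque, globally unique ID.
  idFromName: (name) => `do:${name}`,
};

function routeToQueue(ns: DONamespace, queueID: string): string {
  // Every write for a given queue lands on the single object that
  // owns that queue's storage.
  return ns.idFromName(`queue/${queueID}`);
}
```

<p>That stability is the whole trick: the platform handles discovery and placement, so the Worker needs no routing table of its own.</p>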
    <div>
      <h3>Consuming messages</h3>
      <a href="#consuming-messages">
        
      </a>
    </div>
    <p>The other main responsibility of a Queue is to deliver messages to a Consumer. Delivering messages in a v1 Queue used the following process:</p><ul><li><p>Each Queue Durable Object maintained an <b>alarm </b>that was always set when there were undelivered messages in storage. The alarm guaranteed that the Durable Object would reliably wake up to deliver any messages in storage, even in the presence of failures. The alarm time was configured to fire after the user's selected <i>max wait time</i><b><i>, </i></b>if only a partial batch of messages was available. Whenever one or more full batches were available in storage, the alarm was scheduled to fire immediately.</p></li><li><p>The alarm would wake the Durable Object, which continually looked for batches of messages in storage to deliver.</p></li><li><p>Each batch of messages was sent to a "Dispatcher Worker" that used <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><u>Workers for Platforms</u></a> <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dynamic-dispatch-worker"><i><u>dynamic dispatch</u></i></a> to pass the messages to the <code>queue()</code> function defined in a user's Consumer Worker</p></li></ul>
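<p>The alarm policy in the first bullet can be condensed into one function (a simplified model of the behavior described; names are invented for illustration):</p>

```typescript
// Sketch of the v1 delivery alarm policy: wake immediately when a full
// batch is waiting, otherwise wake after the consumer's configured max
// wait time; set no alarm when storage is empty. Illustrative only.
function nextAlarmTime(
  available: number,   // undelivered messages in storage
  batchSize: number,   // consumer's configured batch size
  maxWaitMs: number,   // consumer's configured max wait time
  nowMs: number
): number | null {
  if (available === 0) return null;         // nothing to deliver
  if (available >= batchSize) return nowMs; // full batch: fire immediately
  return nowMs + maxWaitMs;                 // partial batch: wait it out
}
```

<p>Because the alarm is re-evaluated on every write, a partial batch is delivered no later than the max wait time after its first message arrives.</p>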
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vAM17x3nN49JBMGNTblPp/6af391950df5f0fbc14faeccb351e38c/image4.png" />
          </figure><p>This v1 architecture let us flesh out the initial version of the Queues Beta product and onboard users quickly. Using Durable Objects allowed us to focus on building application logic, instead of complex low-level systems challenges such as global routing and guaranteed durability for storage. Using a separate Durable Object for each Queue allowed us to host an essentially unlimited number of Queues, and provided isolation between them.</p><p>However, using <i>only</i> one Durable Object per queue had some significant limitations:</p><ul><li><p><b>Latency: </b>we created all of our v1 Queue Durable Objects in Western North America. Messages sent from distant regions incurred significant latency when traversing the globe.</p></li><li><p><b>Throughput: </b>A single Durable Object is not scalable: it is single-threaded and has a fixed capacity for how many requests per second it can process. This is where the previous 400 messages per second limit came from.</p></li><li><p><b>Consumer Concurrency: </b>Due to <a href="https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections"><u>concurrent subrequest limits</u></a>, a single Durable Object was limited in how many concurrent subrequests it could make to our Dispatcher Worker. This limited the number of <code>queue()</code> handler invocations that it could run simultaneously.</p></li></ul><p>To solve these issues, we created a new v2 architecture that horizontally scales across <b>multiple</b> Durable Objects to implement each single high-performance Queue.</p>
    <div>
      <h3>v2 Architecture</h3>
      <a href="#v2-architecture">
        
      </a>
    </div>
    <p>In the new v2 architecture for Queues, each Queue is implemented using multiple Durable Objects, instead of just one. Instead of a single region, we place <i>Storage Shard </i>Durable Objects in <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#supported-locations-1"><u>all available regions</u></a> to enable lower latency. Within each region, we create multiple Storage Shards and load balance incoming requests amongst them. Just like that, we’ve multiplied message throughput.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SJTb2UO8tKGrh26ixwLrA/7272e4eaf6f7e85f086a5ae08670387e/image2.png" />
          </figure><p>Sending a message to a v2 Queue uses the following flow:</p><ul><li><p>A client sends a POST request containing the message body to the Queues API at <code>/accounts/:accountID/queues/:queueID/messages</code></p></li><li><p>The request is handled by an instance of the <b>Queue Broker Worker</b> running in a Cloudflare data center near the client.</p></li><li><p>The Worker:</p><ul><li><p>Performs authentication</p></li><li><p>Reads from Workers KV to obtain a <i>Shard Map</i> that lists available storage shards for the given <code>region</code> and <code>queueID</code></p></li><li><p>Picks one of the region's Storage Shards at random, and uses Durable Objects <code>idFromName</code> API to route the request to the chosen shard</p></li></ul></li><li><p>The Storage Shard persists the message to storage before returning a <i>success </i>back to the client.</p></li></ul><p>In this v2 architecture, messages are stored in the closest available Durable Object storage cluster near the user, greatly reducing latency since messages don't need to be shipped all the way to WNAM. Using multiple shards within each region removes the bottleneck of a single Durable Object, and allows us to scale each Queue horizontally to accept even more messages per second. <a href="https://blog.cloudflare.com/tag/cloudflare-workers-kv/"><u>Workers KV</u></a> acts as a fast metadata store: our Worker can quickly look up the shard map to perform load balancing across shards.</p><p>To improve the <i>Consumer</i> side of v2 Queues, we used a similar "scale out" approach. A single Durable Object can only perform a limited number of concurrent subrequests. In v1 Queues, this limited the number of concurrent subrequests we could make to our Dispatcher Worker. To work around this, we created a new <i>Consumer Shard</i> Durable Object class that we can scale horizontally, enabling us to execute many more concurrent instances of our users' <code>queue()</code> handlers.</p>
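<p>The shard-selection step above can be sketched as follows (the shard map’s real shape is internal to Queues; this models only the random pick within a region):</p>

```typescript
// Sketch of the v2 send path's shard choice: consult the cached shard
// map for the caller's region and pick a storage shard at random.
// Shapes and names here are illustrative, not Queues' internal format.
type ShardMap = Record<string, string[]>; // region -> storage shard names

function pickStorageShard(
  map: ShardMap,
  region: string,
  random: () => number = Math.random
): string {
  const shards = map[region];
  if (!shards || shards.length === 0) throw new Error(`no storage shards in ${region}`);
  // A uniform random choice spreads writes evenly across the region's shards.
  return shards[Math.floor(random() * shards.length)];
}
```

<p>When the Coordinator adds shards to a region, the next shard-map refresh makes them eligible here automatically, with no producer-side changes.</p>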
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ujodUVBegDcWXi6DYJR41/5f31ba4da387df82613a496ff311f65f/image3.png" />
          </figure><p>Consumer Durable Objects in v2 Queues use the following approach:</p><ul><li><p>Each Consumer maintains an alarm that guarantees it will wake up to process any pending messages. <i>v2 </i>Consumers are notified by the Queue's <i>Coordinator </i>(introduced below) when there are messages ready for consumption. Upon notification, the Consumer sets an alarm to go off immediately.</p></li><li><p>The Consumer looks at the shard map, which contains information about the storage shards that exist for the Queue, including the number of available messages on each shard.</p></li><li><p>The Consumer picks a random storage shard with available messages, and asks for a batch.</p></li><li><p>The Consumer sends the batch to the Dispatcher Worker, just like for v1 Queues.</p></li><li><p>After processing the messages, the Consumer sends another request to the Storage Shard to either "acknowledge" or "retry" the messages.</p></li></ul><p>This scale-out approach enabled us to work around the subrequest limits of a single Durable Object, and increase the maximum supported concurrency level of a Queue from 20 to 250. </p>
    <div>
      <h3>The Coordinator and “Control Plane”</h3>
      <a href="#the-coordinator-and-control-plane">
        
      </a>
    </div>
    <p>So far, we have primarily discussed the "Data Plane" of a v2 Queue: how messages are load balanced amongst Storage Shards, and how Consumer Shards read and deliver messages. The other main piece of a v2 Queue is the "Control Plane", which handles creating and managing all the individual Durable Objects in the system. In our v2 architecture, each Queue has a single <i>Coordinator</i> Durable Object that acts as the brain of the Queue. Requests to create a Queue, or change its settings, are sent to the Queue's Coordinator.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lYJs23oJ8ibtGgbuOk9JN/7ffa8073a4391602b67a0c6e134975bc/image7.png" />
</figure><p>The Coordinator maintains a <i>Shard Map</i> for the Queue, which includes metadata about all the Durable Objects in the Queue (including their region, number of available messages, current estimated load, etc.). The Coordinator periodically writes a fresh copy of the Shard Map into Workers KV, as pictured in step 1 of the diagram. Placing the Shard Map into Workers KV ensures that it is globally cached and available for our Worker to read quickly, so that it can pick a shard to accept the message.</p><p>Every shard in the system periodically sends a heartbeat to the Coordinator, as shown in steps 2 and 3 of the diagram. Both Storage Shards and Consumer Shards send heartbeats, including information like the number of messages stored locally and the current load (requests per second) that the shard is handling. The Coordinator uses this information to perform <b><i>autoscaling</i></b>. When it detects that the shards in a particular region are overloaded, it creates additional shards in the region and adds them to the Shard Map in Workers KV. Our Worker sees the updated Shard Map and naturally load balances messages across the freshly added shards. Similarly, the Coordinator looks at the backlog of available messages in the Queue, and adds more Consumer Shards to increase Consumer throughput when the backlog is growing. Consumer Shards pull messages from Storage Shards for processing, as shown in step 4 of the diagram.</p><p>Switching to this new scalable architecture allowed us to meet our performance goals and take Queues to GA. As a recap, the new architecture delivered these significant improvements:</p><ul><li><p>P50 latency for writing to a Queue has dropped from ~200ms to ~60ms.</p></li><li><p>Maximum throughput for a Queue has increased from 400 to 5000 messages per second.</p></li><li><p>Maximum Consumer concurrency has increased from 20 to 250 invocations.</p></li></ul>
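<p>The Coordinator's scale-out decision can be illustrated with a simple heuristic. The threshold, field names, and shapes below are invented for illustration and are not Cloudflare's actual values or types:</p>

```typescript
// Hypothetical autoscaling heuristic based on shard heartbeats.
// Field names and the per-shard capacity limit are illustrative assumptions.
interface Heartbeat {
  shardId: string;
  region: string;
  storedMessages: number;
  requestsPerSecond: number;
}

// Given the latest heartbeats from one region's Storage Shards, estimate how
// many additional shards are needed to keep per-shard load under a target.
function extraShardsNeeded(
  heartbeats: Heartbeat[],
  maxRpsPerShard: number,
): number {
  const totalRps = heartbeats.reduce((sum, h) => sum + h.requestsPerSecond, 0);
  const needed = Math.ceil(totalRps / maxRpsPerShard);
  // Never scale below the shards that already exist.
  return Math.max(0, needed - heartbeats.length);
}
```

<p>In this sketch, the Coordinator would create that many new shards, add them to the Shard Map, and let the Worker's load balancing pick them up on the next KV refresh.</p>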
    <div>
      <h3>What's next for Queues</h3>
      <a href="#whats-next-for-queues">
        
      </a>
    </div>
<ul><li><p>We plan to leverage the performance improvements in the new <a href="https://developers.cloudflare.com/durable-objects/"><u>beta version of Durable Objects</u></a>, which uses SQLite, to continue to improve throughput and latency in Queues.</p></li><li><p>We will soon be adding message management features to Queues, so that you can purge the messages in a queue, pause consumption of messages, or “redrive” (move) messages from one queue to another (for example, messages that have been sent to a Dead Letter Queue could be moved back to the original queue).</p></li><li><p>We're working to make Queues the “event hub” for the Cloudflare Developer Platform:</p><ul><li><p>Creating a low-friction way for schema-defined events emitted from other Cloudflare services to be sent to Queues.</p></li><li><p>Building multi-Consumer support, so that Queues are no longer limited to one Consumer per queue.</p></li></ul></li></ul><p>To start using Queues, head over to our <a href="https://developers.cloudflare.com/queues/get-started/"><u>Getting Started</u></a> guide.</p><p>Do distributed systems like Cloudflare Queues and Durable Objects interest you? Would you like to help build them at Cloudflare? <a href="https://boards.greenhouse.io/embed/job_app?token=5390243&amp;gh_src=b2e516a81us"><u>We're Hiring!</u></a></p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">41vXJNWrB0YHsKqSz6SGDS</guid>
            <dc:creator>Josh Wheeler</dc:creator>
            <dc:creator>Siddhant Sinha</dc:creator>
            <dc:creator>Todd Mantell</dc:creator>
            <dc:creator>Pranshu Maheshwari</dc:creator>
        </item>
    </channel>
</rss>