
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/rss/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 15:05:41 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Making Super Slurper 5x faster with Workers, Durable Objects, and Queues]]></title>
            <link>https://blog.cloudflare.com/making-super-slurper-five-times-faster/</link>
            <pubDate>Thu, 10 Apr 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ We re-architected Super Slurper from the ground up using our Developer Platform — leveraging Cloudflare Workers, Durable Objects, and Queues — and improved transfer speeds by up to 5x. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://developers.cloudflare.com/r2/data-migration/super-slurper/"><u>Super Slurper</u></a> is Cloudflare’s data migration tool that is designed to make large-scale data transfers between cloud object storage providers and <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2</u></a> easy. Since its launch, thousands of developers have used Super Slurper to move petabytes of data from AWS S3, Google Cloud Storage, and other <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible services</a> to R2.</p><p>But we saw an opportunity to make it even faster. We rearchitected Super Slurper from the ground up using our Developer Platform — building on <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, and <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> — and improved transfer speeds by up to 5x. In this post, we’ll dive into the original architecture, the performance bottlenecks we identified, how we solved them, and the real-world impact of these improvements.</p>
    <div>
      <h2>Initial architecture and performance bottlenecks</h2>
      <a href="#initial-architecture-and-performance-bottlenecks">
        
      </a>
    </div>
    <p>Super Slurper originally shared its architecture with <a href="https://developers.cloudflare.com/images/upload-images/sourcing-kit/"><u>SourcingKit</u></a>, a tool built to bulk import images from AWS S3 into <a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a>. SourcingKit was deployed on Kubernetes and ran alongside the <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> service. When we started building Super Slurper, we split it into its own Kubernetes namespace and introduced a few new APIs to make it easier to use for the object storage use case. This setup worked well and helped thousands of developers move data to R2.</p><p>However, it wasn’t without its challenges. SourcingKit wasn’t designed to handle the scale required for large, petabyte-scale transfers. SourcingKit, and by extension Super Slurper, operated on Kubernetes clusters located in one of our core data centers, meaning it had to share compute resources and bandwidth with Cloudflare’s control plane, analytics, and other services. As the number of migrations grew, these resource constraints became a clear bottleneck.</p><p>For a service transferring data between object storage providers, the job is simple: list objects from the source, copy them to the destination, and repeat. This is exactly how the original Super Slurper worked. We listed objects from the source bucket, pushed that list to a Postgres-based queue (<code>pg_queue</code>), and then pulled from this queue at a steady pace to copy objects over. Given the scale of object storage migrations, bandwidth usage was inevitably going to be high. This made it challenging to scale.</p><p>To address the bandwidth constraints of operating solely in our core data center, we introduced <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a> into the mix. 
Instead of handling the copying of data in our core data center, we started calling out to a Worker to do the actual copying:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EgtILMnu88y3VzUvYLlPl/479e2f99a62155f7bd8047f98a2a9cd2/1_.png" />
          </figure><p>As Super Slurper’s usage grew, so did our Kubernetes resource consumption. A significant amount of time during data transfers was spent waiting on network I/O or storage, and not actually doing compute-intensive tasks. We didn’t need more memory or more CPU; we needed more concurrency.</p><p>To keep up with demand, we kept increasing the replica count. But eventually, we hit a wall. We ran into scalability challenges on the order of tens of pods, when we needed multiple orders of magnitude more.</p><p>We decided to rethink the entire approach from first principles, instead of leaning on the architecture we had inherited. In about a week, we built a rough proof of concept using <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, and <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a>. We listed objects from the source bucket, pushed them into a queue, and then consumed messages from the queue to initiate transfers. 
Although this sounds very similar to what we did in the original implementation, building on our Developer Platform allowed us to automatically scale an order of magnitude higher than before.</p><ul><li><p><b>Cloudflare Queues</b>: Enables asynchronous object transfers and auto-scales to meet the number of objects being migrated.</p></li><li><p><b>Cloudflare Workers</b>: Runs lightweight compute tasks without the overhead of Kubernetes and optimizes where in the world each part of the process runs for lower latency and better performance.</p></li><li><p><b>SQLite-backed Durable Objects (DOs)</b>: Acts as a fully distributed database, eliminating the limitations of a single PostgreSQL instance.</p></li><li><p><b>Hyperdrive</b>: Provides fast access to historical job data from the original PostgreSQL database, keeping it as an archive store.</p></li></ul><p>We ran a few tests and found that our proof of concept was slower than the original implementation for small transfers (a few hundred objects), but it matched and eventually exceeded the performance of the original as transfers scaled into the millions of objects. That was the signal we needed to invest the time to take our proof of concept to production.</p><p>We removed our proof of concept hacks, worked on stability, and found new ways to make transfers scale to even higher concurrency. After a few iterations, we landed on something we were happy with.</p>
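To make the moving parts concrete, here is a hypothetical sketch of how building blocks like these might be wired together in a Worker's configuration. Every name below (bindings, queue names, classes) is illustrative, not Super Slurper's actual setup:

```toml
# Hypothetical wrangler.toml fragment; all names are illustrative.
name = "slurper-sketch"

# Queues: asynchronous, auto-scaling transfer work.
[[queues.producers]]
queue = "transfer-queue"
binding = "TRANSFER_QUEUE"

[[queues.consumers]]
queue = "transfer-queue"
max_batch_size = 100
dead_letter_queue = "transfer-dlq"

# SQLite-backed Durable Objects: distributed migration state.
[[durable_objects.bindings]]
name = "BATCH"
class_name = "BatchDO"

# Hyperdrive: fast access to the legacy PostgreSQL archive.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<config-id>"
```

A real project would also declare the Durable Object class in a migrations entry; the fragment above only shows the bindings.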
    <div>
      <h2>New architecture: Workers, Queues, and Durable Objects</h2>
      <a href="#new-architecture-workers-queues-and-durable-objects">
        
      </a>
    </div>
    
    <div>
      <h4>Processing layer: managing the flow of migration</h4>
      <a href="#processing-layer-managing-the-flow-of-migration">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ieLgJoWErEYEEa90QaXLC/81470021a99486a974753301d2d2f809/2.png" />
          </figure><p>At the heart of our processing layer are <b>queues, consumers, and workers</b>. Here’s what the process looks like:</p>
    <div>
      <h4>Kicking off a migration</h4>
      <a href="#kicking-off-a-migration">
        
      </a>
    </div>
    <p>When a client triggers a migration, it starts with a request sent to our <b>API Worker</b>. This worker takes the details of the migration, stores them in the database, and adds a message to the <b>List Queue</b> to start the process.</p>
    <div>
      <h4>Listing source bucket objects</h4>
      <a href="#listing-source-bucket-objects">
        
      </a>
    </div>
    <p>The <b>List Queue Consumer</b> is where things start to pick up. It pulls messages from the queue, retrieves object listings from the source bucket, applies any necessary filters, and stores important metadata in the database. Then, it creates new tasks by enqueuing object transfer messages into the <b>Transfer Queue</b>.</p><p>We immediately queue new batches of work, maximizing concurrency. A built-in throttling mechanism prevents us from adding more messages to our queues when unexpected failures occur, such as dependent systems going down. This helps maintain stability and prevents overload during disruptions.</p>
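In sketch form, the core of such a consumer might look like this. All types, names, and the failure threshold are illustrative, not the actual service code; the throttle here is a simple circuit breaker that stops enqueuing new work once recent failures cross a limit:

```typescript
// Illustrative sketch: page through the source listing, enqueue each batch
// of keys for transfer, and stop enqueuing when the failure threshold trips.

interface ListPage {
  keys: string[];
  cursor?: string; // continuation token for the next page
}

interface SourceBucket {
  list(cursor?: string): ListPage;
}

interface TransferQueue {
  sendBatch(keys: string[]): void;
}

// Simple circuit breaker: pause enqueuing after repeated failures.
class Throttle {
  private failures = 0;
  constructor(private readonly maxFailures: number) {}
  recordFailure() { this.failures++; }
  recordSuccess() { this.failures = 0; }
  get open(): boolean { return this.failures >= this.maxFailures; }
}

function listAndEnqueue(
  source: SourceBucket,
  queue: TransferQueue,
  throttle: Throttle,
): number {
  let cursor: string | undefined;
  let enqueued = 0;
  do {
    if (throttle.open) break; // dependent system unhealthy: stop adding work
    const page = source.list(cursor);
    try {
      queue.sendBatch(page.keys);
      enqueued += page.keys.length;
      throttle.recordSuccess();
    } catch {
      throttle.recordFailure();
    }
    cursor = page.cursor;
  } while (cursor !== undefined);
  return enqueued;
}
```

In the real service the queue is Cloudflare Queues and the throttle state lives in the database; the shape of the loop is what matters here.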
    <div>
      <h4>Efficient object transfers</h4>
      <a href="#efficient-object-transfers">
        
      </a>
    </div>
    <p>The <b>Transfer Queue Consumer</b> Workers pull object transfer messages from the queue, ensuring that each object is processed only once by locking the object key in the database. When the transfer finishes, the object is unlocked. For larger objects, we break them into manageable chunks and transfer them as multipart uploads.</p>
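A toy model of this locking scheme, with an in-memory map standing in for the database lock table and an assumed part size for the multipart path:

```typescript
// Keys start out "pending", are locked "in-progress" during a transfer, and
// are deleted once transferred, so a duplicate message finds no key and skips.
// In the real service this state lives in the database.

class ObjectTracker {
  private state = new Map<string, "pending" | "in-progress">();

  enqueue(key: string) { this.state.set(key, "pending"); }

  tryLock(key: string): boolean {
    const s = this.state.get(key);
    if (s !== "pending") return false; // absent (done) or already in progress
    this.state.set(key, "in-progress");
    return true;
  }

  complete(key: string) { this.state.delete(key); }        // unlock by deleting
  release(key: string) { this.state.set(key, "pending"); } // failed: retry later
}

function transferOnce(
  key: string,
  tracker: ObjectTracker,
  copy: (k: string) => void,
): boolean {
  if (!tracker.tryLock(key)) return false; // duplicate or concurrent: skip
  try {
    copy(key);
    tracker.complete(key);
    return true;
  } catch {
    tracker.release(key); // leave it for a retry
    return false;
  }
}

// Chunking decision for large objects (the part size is an assumption,
// not the service's real cutoff).
const PART_SIZE = 100 * 1024 * 1024; // 100 MiB parts
function partCount(objectSize: number): number {
  return Math.max(1, Math.ceil(objectSize / PART_SIZE));
}
```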
    <div>
      <h4>Handling failures gracefully</h4>
      <a href="#handling-failures-gracefully">
        
      </a>
    </div>
    <p>Failures are inevitable in any distributed system, and we had to make sure we accounted for that. We implemented automatic retries for transient failures, so issues don’t interrupt the flow of the migration. But if something can’t be resolved with retries, the message goes into the <b>Dead Letter Queue (DLQ)</b>, where it is logged for later review and resolution.</p>
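The routing decision might be sketched like this; the retry budget and message shape are assumptions for illustration, and plain arrays stand in for the real queues:

```typescript
// Retry transient failures a bounded number of times, then park the message
// in the dead letter queue for manual review.

interface TransferMessage { key: string; attempts: number }

const MAX_ATTEMPTS = 5; // assumed retry budget

function routeFailure(
  msg: TransferMessage,
  retryQueue: TransferMessage[],
  deadLetterQueue: TransferMessage[],
): void {
  const next = { ...msg, attempts: msg.attempts + 1 };
  if (next.attempts < MAX_ATTEMPTS) {
    retryQueue.push(next);      // transient failure: try again later
  } else {
    deadLetterQueue.push(next); // exhausted: park for review and resolution
  }
}
```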
    <div>
      <h4>Job completion &amp; lifecycle management</h4>
      <a href="#job-completion-lifecycle-management">
        
      </a>
    </div>
    <p>Once all the objects are listed and the transfers are in progress, the <b>Lifecycle Queue Consumer</b> keeps an eye on everything. It monitors the ongoing transfers, ensuring that no object is left behind. When all the transfers are complete, the job is marked as finished and the migration process wraps up.</p>
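One way to sketch that completion check (the field names are hypothetical, not the service's schema): the job is finished only when listing is done and every discovered object is accounted for, either transferred or parked in the DLQ.

```typescript
// Minimal completion predicate for a migration job.

interface JobProgress {
  listingComplete: boolean;
  discovered: number;   // objects seen while listing the source bucket
  transferred: number;  // objects successfully copied to R2
  failed: number;       // objects parked in the dead letter queue
}

function isFinished(p: JobProgress): boolean {
  return p.listingComplete && p.transferred + p.failed >= p.discovered;
}
```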
    <div>
      <h3>Database layer: durable storage &amp; legacy data retrieval</h3>
      <a href="#database-layer-durable-storage-legacy-data-retrieval">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4OhENQndBrRkVLNmWQ4mWP/815173a64ec1943b7b626b02247d4887/3.png" />
          </figure><p>When building our new architecture, we knew we needed a robust solution to handle massive datasets while ensuring retrieval of historical job data. That's where our combination of <b>Durable Objects (DOs)</b> and <b>Hyperdrive</b> came in.</p>
    <div>
      <h4>Durable Objects</h4>
      <a href="#durable-objects">
        
      </a>
    </div>
    <p>We gave each account a dedicated Durable Object to track migration jobs. Each <b>job’s DO</b> stores vital details, such as bucket names, user options, and job state. This ensured everything stayed organized and easy to manage. To support large migrations, we also added a <b>Batch DO</b> that manages all the objects queued for transfer, storing their transfer state, object keys, and any extra metadata.</p><p>As migrations scaled up to <b>billions of objects</b>, we had to get creative with storage. We implemented a sharding strategy to distribute request loads, preventing bottlenecks and working around <b>SQLite DO’s 10 GB</b> storage limit. As objects are transferred, we clean up their details, optimizing storage space along the way. It’s surprising how much storage a billion object keys can require!</p>
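As an illustration of key-based sharding (the hash function and shard count here are assumptions, not the production values), each object key can be hashed to a stable shard name so writes spread evenly across many Batch DOs instead of piling onto one:

```typescript
// Deterministically map an object key to one of N shard names.

const SHARD_COUNT = 64; // assumed number of Batch DO shards per job

// FNV-1a: a small, well-known non-cryptographic hash.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Derive a stable Durable Object name for an object key.
function shardNameFor(jobId: string, objectKey: string): string {
  const shard = fnv1a(objectKey) % SHARD_COUNT;
  return `batch:${jobId}:${shard}`;
}
```

Because the mapping is deterministic, a retried message for the same key always lands on the same shard, which keeps the lock and count bookkeeping in one place.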
    <div>
      <h4>Hyperdrive</h4>
      <a href="#hyperdrive">
        
      </a>
    </div>
    <p>Since we were rebuilding a system with years of migration history, we needed a way to preserve and access every past migration detail. Hyperdrive serves as a bridge to our legacy systems, enabling seamless retrieval of historical job data from our core <b>PostgreSQL</b> database. It's not just a data retrieval mechanism, but an archive for complex migration scenarios.</p>
    <div>
      <h2>Results: Super Slurper now transfers data to R2 up to 5x faster</h2>
      <a href="#results-super-slurper-now-transfers-data-to-r2-up-to-5x-faster">
        
      </a>
    </div>
    <p>So, after all of that, did we actually achieve our goal of making transfers faster?</p><p>We ran a test migration of 75,000 objects from AWS S3 to R2. With the original implementation, the transfer took 15 minutes and 30 seconds. After our performance improvements, the same migration completed in just 3 minutes and 25 seconds, roughly a 4.5x speedup (930 seconds down to 205 seconds).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Pmt9tVNGYWvmRQQyvYE9/43443656bc81743485c3bb0f7d65b134/4.png" />
          </figure><p>When production migrations started using the new service in February, we saw even greater improvements in some cases, depending on the distribution of object sizes. Super Slurper has been around <a href="https://blog.cloudflare.com/r2-super-slurper-ga/"><u>for about two years</u></a>, but the improved performance means it can now move much more data: 35% of all objects ever copied by Super Slurper were copied in just the last two months.</p>
    <div>
      <h2>Challenges</h2>
      <a href="#challenges">
        
      </a>
    </div>
    <p>One of the biggest challenges we faced with the new architecture was handling duplicate messages. There were a couple of ways duplicates could occur:</p><ul><li><p>Queues provides at-least-once delivery, which means consumers may receive the same message more than once to guarantee delivery.</p></li><li><p>Failures and retries could also create apparent duplicates. For example, if a request to a Durable Object fails after the object has already been transferred, the retry could reprocess the same object.</p></li></ul><p>If not handled correctly, this could result in the same object being transferred multiple times. To solve this, we implemented several strategies to ensure each object was accurately accounted for and only transferred once:</p><ol><li><p>Since listing is sequential (e.g., to get object 2, you need the continuation token from listing object 1), we assign a sequence ID to each listing operation. This allows us to detect duplicate listings and prevent multiple processes from starting simultaneously. This is particularly useful because we don’t wait for database and queue operations to complete before listing the next batch. If listing 2 fails, we can retry it, and if listing 3 has already started, we can short-circuit unnecessary retries.</p></li><li><p>Each object is locked when its transfer begins, preventing parallel transfers of the same object. Once successfully transferred, the object is unlocked by deleting its key from the database. If a message for that object reappears later, we can safely assume it has already been transferred if the key no longer exists.</p></li><li><p>We rely on database transactions to keep our counts accurate. If an object fails to unlock, its count remains unchanged. 
Similarly, if an object key fails to be added to the database, the count isn’t updated, and the operation will be retried later.</p></li><li><p>As a last failsafe, we check whether the object already exists in the target bucket and was uploaded after the start of our migration. If so, we assume it was transferred by our process (or another) and safely skip it.</p></li></ol>
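The sequence-ID bookkeeping from step 1 can be sketched with an in-memory stand-in for the database. Listings are sequential, so each one gets an increasing sequence ID; completed IDs let us drop duplicate deliveries, and a failed listing only needs a retry if no later listing has started, since a later listing proves the earlier continuation token was already consumed. The class is illustrative, not service code:

```typescript
// Track listing operations by sequence ID to deduplicate deliveries and
// short-circuit unnecessary retries.

class ListingTracker {
  private completed = new Set<number>();
  private highestStarted = -1;

  // Returns false when this listing batch is a duplicate delivery.
  start(sequenceId: number): boolean {
    if (this.completed.has(sequenceId)) return false;
    this.highestStarted = Math.max(this.highestStarted, sequenceId);
    return true;
  }

  complete(sequenceId: number): void {
    this.completed.add(sequenceId);
  }

  // Skip the retry if the listing finished, or if a later one superseded it.
  needsRetry(sequenceId: number): boolean {
    return !this.completed.has(sequenceId) && sequenceId >= this.highestStarted;
  }
}
```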
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17zkULEDjrPDlG6mNIpomw/5c95bde32595daf0684a558729ee055a/5.png" />
          </figure>
    <div>
      <h2>What’s next for Super Slurper?</h2>
      <a href="#whats-next-for-super-slurper">
        
      </a>
    </div>
    <p>We’re always exploring ways to make Super Slurper faster, more scalable, and even easier to use — this is just the beginning.</p><ul><li><p>We recently launched the ability to migrate from any <a href="https://developers.cloudflare.com/changelog/2025-02-24-r2-super-slurper-s3-compatible-support/"><u>S3-compatible storage provider</u></a>!</p></li><li><p>Data migrations are currently limited to 3 concurrent migrations per account, but we want to increase that limit. This will allow object prefixes to be split up into separate migrations and run in parallel, drastically increasing the speed at which a bucket can be migrated. For more information on Super Slurper and how to migrate data from existing object storage to R2, refer to our <a href="https://developers.cloudflare.com/r2/data-migration/super-slurper/"><u>documentation</u></a>.</p></li></ul><p>P.S. As part of this update, we made the API much simpler to interact with, so migrations can now be <a href="https://developers.cloudflare.com/api/resources/r2/subresources/super_slurper/"><u>managed programmatically</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2 Super Slurper]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Queues]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[R2]]></category>
            <guid isPermaLink="false">12YmRoxQrsnW1ZVtEKBdht</guid>
            <dc:creator>Connor Maddox</dc:creator>
            <dc:creator>Siddhant Sinha</dc:creator>
            <dc:creator>Prasanna Sai Puvvada</dc:creator>
        </item>
        <item>
            <title><![CDATA[Builder Day 2024: 18 big updates to the Workers platform]]></title>
            <link>https://blog.cloudflare.com/builder-day-2024-announcements/</link>
            <pubDate>Thu, 26 Sep 2024 21:00:00 GMT</pubDate>
            <description><![CDATA[ To celebrate Builder Day 2024, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. This includes new capabilities, like running evals with AI Gateway, beta  ]]></description>
            <content:encoded><![CDATA[ <p>To celebrate <a href="https://builderday.pages.dev/"><u>Builder Day 2024</u></a>, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. Choosing a platform isn't just about current technologies and services — it's about betting on a partner that will evolve with your needs as your project grows and the tech landscape shifts. We’re in it for the long haul with you.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p><b>Starting today, you can:</b></p><ul><li><p><a href="#logs-for-every-worker">Persist logs from your Worker and query them directly on the Cloudflare dashboard</a></p></li><li><p><a href="#connect-to-private-databases-from-workers">Connect your Worker to private databases (isolated in VPCs) using Hyperdrive</a></p></li><li><p><a href="#improved-node.js-compatibility-is-now-ga">Use a wider set of NPM packages on Cloudflare Workers, via improved Node.js compatibility</a></p></li><li><p><a href="#cloudflare-joins-opennext">Deploy Next.js apps that use the Node.js runtime to Cloudflare, via OpenNext</a></p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/">Run Evals with AI Gateway, now in Open Beta</a></p></li><li><p><a href="https://blog.cloudflare.com/sqlite-in-durable-objects">Read from and write to SQLite with zero-latency from every Durable Object</a></p></li></ul><p><b>We’ve brought key features from </b><a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><b><u>Pages to Workers</u></b></a><b>, allowing you to: </b></p><ul><li><p><a href="#static-asset-hosting">Upload and serve static assets as part of your Worker, and use popular frameworks with Workers</a></p></li><li><p><a href="#continuous-integration-and-delivery">Automatically build and deploy each pull request to your Worker’s git repository</a></p></li><li><p><a href="#workers-preview-urls">Get back a preview deployment URL for each version of your Worker</a></p></li></ul><p><b>Four things are going GA and are officially production-ready:</b></p><ul><li><p><a href="#gradual-deployments">Gradual Deployments</a>: Deploy changes to your Worker gradually, on a percentage basis of traffic</p></li><li><p><a href="#queues-is-ga">Cloudflare Queues</a><b>:</b> Now with much higher throughput and concurrency limits</p></li><li><p><a href="#event-notifications-for-r2-is-now-ga">R2 Event Notifications</a><b>:</b> Tightly integrated with 
Queues for event-driven applications</p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/">Vectorize</a>: Globally distributed vector database, now faster, with larger indexes, and new pricing</p></li></ul><p><b>The Workers platform is getting faster:</b></p><ul><li><p><a href="https://blog.cloudflare.com/faster-workers-kv">We made Workers KV up to 3x faster</a>, which makes serving static assets from Workers and Pages faster.</p></li><li><p><a href="https://blog.cloudflare.com/making-workers-ai-faster/">Workers AI now has much faster Time-to-First-Token (TTFT)</a>, backed by more powerful GPUs</p></li></ul><p><b>And we’re lowering the cost of building on Cloudflare:</b></p><ul><li><p><a href="#removing-serverless-microservices-tax">Requests made through Service Bindings and to Tail Workers are now free</a></p></li><li><p><a href="#image-optimization-free-for-everyone">Cloudflare Images is introducing a free tier for everyone with a Cloudflare account</a></p></li><li><p>We’ve <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster">simplified Workers AI pricing</a> to use industry-standard units of measure</p></li></ul><p>Everything in this post is available for you to use today. Keep reading to learn more, and watch the <a href="https://cloudflare.tv/event/builder-day-live-stream/xvm4qdgm"><u>Builder Day Live Stream</u></a> for demos and more.</p><h2>Persistent Logs for every Worker</h2><p>Starting today in open beta, you can automatically retain logs from your Worker, with full search, query, and filtering capabilities available directly within the Cloudflare dashboard. All newly created Workers will have this setting automatically enabled. 
This marks the first step in the development of our observability platform, following <a href="https://blog.cloudflare.com/cloudflare-acquires-baselime-expands-observability-capabilities/"><u>Cloudflare’s acquisition of Baselime</u></a>.</p><p>Getting started is easy – just add two lines to your Worker’s <code>wrangler.toml</code> and redeploy:</p>
            <pre><code>[observability]
enabled = true
</code></pre>
            <p>Workers Logs allows you to view all logs emitted from your Worker. When enabled, each <code>console.log</code> message, error, and exception is published as a separate event. Every Worker invocation (e.g., requests, alarms, RPC) also publishes an enriched execution log that contains invocation metadata. You can view logs in the <code>Logs</code> tab of your Worker in the dashboard, where you can filter on any event field, such as time, error code, message, or your own custom field.</p>
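As a hedged example, a Worker might emit structured events like this; with observability enabled, each console.log call becomes a separate, filterable log event. The event names and fields are illustrative:

```typescript
// Hypothetical handler: logging an object (rather than a preformatted
// string) keeps fields like `path` and `status` individually filterable
// in the Workers Logs view.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const path = new URL(request.url).pathname;
    console.log({ event: "request.received", path });
    try {
      const response = new Response("ok");
      console.log({ event: "request.completed", path, status: response.status });
      return response;
    } catch (err) {
      console.error({ event: "request.failed", path, error: String(err) });
      throw err;
    }
  },
};
// In a real Worker, `worker` would be the file's default export.
```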
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rPKtYlXEgN1u8utUuXxJR/c2fc4dcff2a7574d8ad9f92edbe867fe/image2.png" />
          </figure><p>If you’ve ever had to piece together the puzzle of unusual metrics, such as a spike in errors or latency, you know how frustrating it is to connect metrics to traces and logs that often live in independent data silos. Workers Logs is the first piece of a new observability platform we are building that helps you easily correlate telemetry data, and surfaces insights to help you <i>understand</i>. We’ll structure your telemetry data so you have the full context to ask the right questions, and can quickly and easily analyze the behavior of your applications. This is just the beginning for observability tools for Workers. We are already working on automatically emitting distributed traces from Workers, with real-time errors and wide, high-dimensionality events coming soon as well. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XiRuQjqzVEld2eCIVVHPh/7c8938479e1f254699487dfe23caade4/Screenshot_2024-09-25_at_3.06.00_PM.png" />
          </figure><p>Starting November 1, 2024, Workers Logs will cost $0.60 per million log lines written after the included volume, as shown in the table below. Querying your logs is free. This makes it easy to estimate and forecast your costs — we think you shouldn’t have to calculate the number of ‘Gigabytes Ingested’ to understand what you’ll pay.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Included Volume</span></td>
    <td><span>200,000 logs per day</span></td>
    <td><span>20,000,000 logs per month</span></td>
  </tr>
  <tr>
    <td><span>Additional Events</span></td>
    <td><span>N/A</span></td>
    <td><span>$0.60 per million logs</span></td>
  </tr>
  <tr>
    <td><span>Retention</span></td>
    <td><span>3 days</span></td>
    <td><span>7 days</span></td>
  </tr>
</tbody></table></div><p>Try out Workers Logs today. You can learn more from our <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>developer documentation</u></a>, and give us feedback directly in the #workers-observability channel on <a href="https://discord.cloudflare.com/"><u>Discord</u></a>.</p><h2>Connect to private databases from Workers</h2><p>Starting today, you can now use <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>, <a href="https://www.cloudflare.com/en-ca/products/tunnel/"><u>Cloudflare Tunnels</u></a> and <a href="https://www.cloudflare.com/zero-trust/products/access/"><u>Access</u></a> together to securely connect to databases that are isolated in a private network. </p><p><a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> enables you to build on Workers with your existing regional databases. It accelerates database queries using Cloudflare’s network, caching data close to end users and pooling connections close to the database. But there’s been a major blocker preventing you from building with Hyperdrive: network isolation.</p><p>The majority of databases today aren’t publicly accessible on the Internet. Data is highly sensitive and placing databases within private networks like a <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud/"><u>virtual private cloud (VPC)</u></a> keeps data secure. But to date, that has also meant that your data is held captive within your cloud provider, preventing you from building on Workers. </p><p>Today, we’re enabling Hyperdrive to <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database/"><u>securely connect to private databases</u></a> using <a href="https://www.cloudflare.com/en-ca/products/tunnel/"><u>Cloudflare Tunnels</u></a> and <a href="https://www.cloudflare.com/zero-trust/products/access/"><u>Cloudflare Access</u></a>. 
With a Cloudflare Tunnel running in your private network, Hyperdrive can securely connect to your database and start speeding up your queries.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ozsfXdsWFJlfRhhulMClT/61ec772a843880370e81eeec190000fa/BLOG-2517_4.png" />
          </figure><p>With this update, Hyperdrive makes it possible for you to build full-stack applications on Workers with your existing databases, network-isolated or not. Whether you’re using <a href="https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/"><u>Amazon RDS</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/"><u>Amazon Aurora</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/examples/google-cloud-sql/"><u>Google Cloud SQL</u></a>, <a href="https://azure.microsoft.com/en-gb/products/category/databases"><u>Azure Database</u></a>, or any other provider, Hyperdrive can connect to your databases and optimize your database connections to provide the fast performance you’ve come to expect with building on Workers.</p><h2>Improved Node.js compatibility is now GA</h2><p>Earlier this month, we <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>overhauled our support for Node.js APIs in the Workers runtime</u></a>. With <a href="https://workers-nodejs-compat-matrix.pages.dev/"><u>twice as many Node APIs</u></a> now supported on Workers, you can now use a wider set of NPM packages to build a broader range of applications. Today, we’re happy to announce that improved Node.js compatibility is GA.</p><p>To give it a try, enable the nodejs_compat compatibility flag, and set your compatibility date to on or after 2024-09-23:</p>
            <pre><code>compatibility_flags = ["nodejs_compat"]
compatibility_date = "2024-09-23"
</code></pre>
            <p>Read the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>developer documentation</u></a> to learn more about how to opt in your Workers to try it today. If you encounter any bugs or want to report feedback, <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>open an issue</u></a>.</p><h2>Build frontend applications on Workers with Static Asset Hosting</h2><p>Starting today in open beta, you can now upload and serve HTML, CSS, and client-side JavaScript directly as part of your Worker. This means you can build dynamic, server-side rendered applications on Workers using popular frameworks such as Astro, Remix, Next.js and Svelte (full list <a href="https://developers.cloudflare.com/workers/frameworks"><u>here</u></a>), with more coming soon.</p><p>You can now deploy applications to Workers that previously could only be deployed to Cloudflare Pages and use features that are not yet supported in Pages, including <a href="https://developers.cloudflare.com/workers/observability/logging/logpush/"><u>Logpush</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/#_top"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/workers/configuration/cron-triggers/"><u>Cron Triggers</u></a>, <a href="https://developers.cloudflare.com/queues/configuration/configure-queues/#consumer"><u>Queue Consumers</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/"><u>Gradual Deployments</u></a>. </p><p>To get started, create a new project with <a href="https://developers.cloudflare.com/workers/frameworks"><u>create-cloudflare</u></a>. For example, to create a new Astro project:  </p>
            <pre><code>npm create cloudflare@latest -- my-astro-app --framework=astro --experimental
</code></pre>
            <p>Visit our <a href="https://developers.cloudflare.com/workers/static-assets/"><u>developer documentation</u></a> to learn more about setting up a new front-end application on Workers and watch a <a href="https://youtu.be/W45MIi_t_go"><u>quick demo</u></a> to learn about how you can deploy an existing application to Workers. Static assets aren’t just for Workers written in JavaScript! You can serve static assets from <a href="https://developers.cloudflare.com/workers/languages/python/"><u>Workers written in Python</u></a> or even <a href="https://github.com/cloudflare/workers-rs/tree/main/templates/leptos/README.md"><u>deploy a Leptos app using workers-rs</u></a>.</p><p>If you’re wondering “<i>What about Pages?</i>” — rest assured, Pages will remain fully supported. We’ve heard from developers that as we’ve added new features to Workers and Pages, the choice of which product to use has become challenging. We’re closing this gap by bringing asset hosting, CI/CD and Preview URLs to Workers this Birthday Week.</p><p>To make the upfront choice between Cloudflare Workers and Pages more transparent, we’ve created a <a href="https://developers.cloudflare.com/workers/static-assets/compatibility-matrix/"><u>compatibility matrix</u></a>. 
Looking ahead, we plan to bridge the remaining gaps between Workers and Pages and provide ways to migrate your Pages projects to Workers.</p><h2>Cloudflare joins OpenNext to deploy Next.js apps to Workers</h2><p>Starting today, as an early developer preview, you can use <a href="https://opennext.js.org//cloudflare"><u>OpenNext</u></a> to deploy Next.js apps to Cloudflare Workers via <a href="https://npmjs.org/@opennextjs/cloudflare"><u>@opennextjs/cloudflare</u></a>, a new npm package that lets you use the <a href="https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes"><u>Node.js “runtime” in Next.js</u></a> on Workers.</p><p>This new adapter is powered by our <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>new Node.js compatibility layer</u></a>, newly introduced <a href="#static-asset-hosting"><u>Static Assets for Workers</u></a>, and Workers KV, which is <a href="https://blog.cloudflare.com/faster-workers-kv"><u>now up to 3x faster</u></a>. It unlocks support for <a href="https://nextjs.org/docs/app/building-your-application/data-fetching/incremental-static-regeneration"><u>Incremental Static Regeneration (ISR)</u></a>, <a href="https://nextjs.org/docs/pages/building-your-application/routing/custom-error"><u>custom error pages</u></a>, and other Next.js features that our previous adapter, <a href="https://developers.cloudflare.com/pages/framework-guides/nextjs/ssr/get-started/"><u>@cloudflare/next-on-pages</u></a>, could not support, as it was only compatible with the Edge “runtime” in Next.js.</p><p><a href="https://blog.cloudflare.com/aws-egregious-egress/"><u>Cloud providers shouldn’t lock you in</u></a>. Like cloud compute and storage, open source frameworks should be portable — you should be able to deploy them to different cloud providers. 
The goal of the OpenNext project is to make sure you can deploy Next.js apps to any cloud platform: originally to AWS, and now to Cloudflare. We’re excited to contribute to the OpenNext community and give developers the freedom to run on the cloud that best fits their application’s needs (and <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>budget</u></a>).</p><p>To get started, read the <a href="https://opennext.js.org//cloudflare/get-started"><u>OpenNext docs</u></a>, which provide examples and a guide on how to add <a href="https://npmjs.org/@opennextjs/cloudflare"><u>@opennextjs/cloudflare</u></a> to your Next.js app.</p><p>We want your feedback! Report issues and contribute code at <a href="https://github.com/opennextjs/opennextjs-cloudflare/"><u>opennextjs/opennextjs-cloudflare on GitHub</u></a>, and join the discussion on the <a href="https://discord.gg/WUNsBM69"><u>OpenNext Discord</u></a>. To create a new Next.js project that runs on Workers:</p>
            <pre><code>npm create cloudflare@latest -- my-next-app --framework=next --experimental
</code></pre>
            <h2>Continuous Integration &amp; Delivery (CI/CD) with Workers Builds</h2><p>Now in open beta, you can connect a GitHub or GitLab repository to a Worker, and Cloudflare will automatically build and deploy your changes each time you push a commit. Workers Builds provides an integrated CI/CD workflow you can use to build and deploy everything from full-stack applications built with the most popular frameworks to simple static websites. Just add your build command and let Workers Builds take care of the rest. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K9izEbBxIlA0nXbNKJ1Od/55ecf9e56ecbc33aeb88df7ede1afddc/BLOG-2517_5.png" />
          </figure><p>While in open beta, Workers Builds is free to use, with a limit of one concurrent build per account and unlimited build minutes per month. Once Workers Builds is Generally Available in early 2025, you will be billed based on the number of build minutes you use each month, and you will have a higher concurrent build limit.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Build minutes, </span><span>open beta</span></td>
    <td><span>Unlimited</span></td>
    <td><span>Unlimited</span></td>
  </tr>
  <tr>
    <td><span>Concurrent builds, </span><span>open beta</span></td>
    <td><span>1</span></td>
    <td><span>1</span></td>
  </tr>
  <tr>
    <td><span>Build minutes, </span><span>general availability</span></td>
    <td><span>3,000 minutes included per month</span></td>
    <td><span>6,000 minutes included per month </span><br /><span>+$0.005 per additional build minute</span></td>
  </tr>
  <tr>
    <td><span>Concurrent builds, </span><span>general availability</span></td>
    <td><span>1</span></td>
    <td><span>6</span></td>
  </tr>
</tbody></table></div><p><a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Read the docs</u></a> to learn more about how to deploy your first project with Workers Builds.</p><h2>Workers preview URLs</h2><p>Each newly uploaded version of a Worker now automatically generates a preview URL. Preview URLs make it easier for you to collaborate with your team during development, and can be used to test and identify issues in a preview environment before they are deployed to production.</p><p>When you upload a version of your Worker via the Wrangler CLI, Wrangler will display the preview URL once your upload succeeds. You can also find preview URLs for each version of your Worker in the Cloudflare dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29iDm0x8QQex5ryatk23e1/ecfdba5b98b6e0c22350087a6035442d/BLOG-2517_6.png" />
          </figure><p>Preview URLs for Workers are similar to Pages <a href="https://developers.cloudflare.com/pages/configuration/preview-deployments/"><u>preview deployments</u></a> — they run on your Worker’s <code>workers.dev</code> subdomain and allow you to view changes applied on a new version of your application before the changes are deployed.</p><p>Learn more about preview URLs by visiting our <a href="https://developers.cloudflare.com/workers/configuration/previews"><u>developer documentation</u></a>. </p><h2>Safely release to production with Gradual Deployments</h2><p>At Developer Week, we launched <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#_top"><u>Gradual Deployments</u></a> for Workers and Durable Objects to make it safer and easier to deploy changes to your applications. Gradual Deployments is now GA — we have been using it ourselves at Cloudflare for mission-critical services built on Workers since early 2024.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FOHnaYqTyhuJRZVdERWdh/52df3d29622ccca9118d1cb49de19ae8/BLOG-2517_7.png" />
          </figure><p>Gradual deployments can help you stay on top of availability SLAs and minimize application downtime by surfacing issues early. Internally at Cloudflare, every single service built on Workers uses gradual deployments to roll out new changes. Each new version gets released in stages: 0.05%, 0.5%, 3%, 10%, 25%, 50%, 75%, and 100%, with time to soak between each stage. Throughout the roll-out, we keep an eye on metrics (which are often instrumented with <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a>!) and we <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/"><u>roll back</u></a> if we encounter issues.</p><p>Using gradual deployments is as simple as swapping out the <a href="https://developers.cloudflare.com/workers/wrangler/commands/#versions"><u>wrangler commands</u></a> or <a href="https://developers.cloudflare.com/api/operations/worker-versions-upload-version"><u>API endpoints</u></a>, or using “Save version” in the code editor built into the Workers dashboard. Read the <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/"><u>developer documentation</u></a> to learn more and get started.</p><h2>Queues is GA, with higher throughput and concurrency limits</h2><p><a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queues</u></a> is now generally available with higher limits.</p><p>Queues let a developer decouple their Workers into event-driven services. <i>Producer </i>Workers write events to a Queue, and <i>consumer </i>Workers are invoked to take actions on the events. For example, you can use a Queue to decouple an e-commerce website from a service that sends purchase confirmation emails to users.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cpkxgIQSYLrwbfhDSL2A5/97818131c1f4f7d2e8b8d76dcc8c7f9a/BLOG-2517_8.png" />
          </figure><p>Throughput and concurrency limits for Queues are now significantly higher, which means you can push more messages through a Queue, and consume them faster.</p><ul><li><p><b>Throughput:</b> Each queue can now process 5000 messages per second (previously 400 per second).</p></li><li><p><b>Concurrency:</b> Each queue can now have up to 250 <a href="https://developers.cloudflare.com/queues/configuration/consumer-concurrency/"><u>concurrent consumers</u></a> (previously 20 concurrent consumers). </p></li></ul><p>Since we <a href="https://blog.cloudflare.com/introducing-cloudflare-queues/"><u>announced Queues in beta</u></a>, we’ve added the following functionality:</p><ul><li><p><a href="https://developers.cloudflare.com/queues/configuration/batching-retries/#batching"><u>Batch sizes can be customized</u></a>, to reduce the number of consumer Worker invocations and thus reduce cost.</p></li><li><p><a href="https://developers.cloudflare.com/queues/configuration/batching-retries/#delay-messages"><u>Individual messages can be delayed</u></a>, so you can back off due to external API rate limits.</p></li><li><p><a href="https://developers.cloudflare.com/queues/configuration/pull-consumers/"><u>HTTP Pull consumers</u></a> allow messages to be consumed outside Workers, with zero data egress costs.</p></li></ul><p>Queues can be used by any developer on a Workers Paid plan. Head over to our <a href="https://developers.cloudflare.com/queues/get-started/"><u>getting started</u><i><u> </u></i><u>guide</u></a> to start building with Queues.</p><h2>Event notifications for R2 is now GA</h2><p>We’re excited to announce that event notifications for R2 is now generally available. Whether it’s kicking off image processing after a user uploads a file or triggering a sync to an external data warehouse when new analytics data is generated, many applications need to be able to reliably respond when events happen. 
<a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#event-notifications-open-beta"><u>Event notifications</u></a> for <a href="https://developers.cloudflare.com/r2/"><u>Cloudflare R2</u></a> give you the ability to build event-driven applications and workflows that react to changes in your data.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73t1PtQg576iv7m95HHjGL/26cd028004f5b669e41a89a7265c5a14/BLOG-2517_9.png" />
          </figure><p>Here’s how it works: When data in your R2 bucket changes, event notifications are sent to your queue. You can consume these notifications with a <a href="https://developers.cloudflare.com/queues/reference/how-queues-works/#create-a-consumer-worker"><u>consumer Worker </u></a>or <a href="https://developers.cloudflare.com/queues/configuration/pull-consumers/"><u>pull them over HTTP</u></a> from outside of Cloudflare Workers.</p>
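          <p>Inside a consumer Worker, each queue message carries a notification describing what changed, and a common pattern is to route notifications to different tasks based on the action. Below is a minimal sketch of that routing; the message shape (<code>action</code>, <code>object.key</code>) and the action names are assumptions for illustration — check the R2 event notifications documentation for the authoritative schema:</p>
            <pre><code>// Route an R2 event notification to a follow-up task.
// The body shape ({ action, object: { key } }) is an assumed example
// shape -- consult the R2 docs for the authoritative schema.
function routeNotification(body) {
  switch (body.action) {
    case "PutObject":
    case "CompleteMultipartUpload":
      return { task: "process-upload", key: body.object.key };
    case "DeleteObject":
    case "LifecycleDeletion":
      return { task: "purge-derived-data", key: body.object.key };
    default:
      return { task: "ignore", key: body.object.key };
  }
}

// In a Worker, this would hang off the queue() handler, e.g.:
// export default {
//   async queue(batch, env) {
//     for (const msg of batch.messages) {
//       const { task, key } = routeNotification(msg.body);
//       // ...dispatch to the appropriate handler, then acknowledge
//     }
//   }
// };</code></pre>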
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NSN5r40rmXy0FMGOKvdAd/7d0c0637ccc478881528339304942948/BLOG-2517_10.png" />
          </figure><p>Since we introduced event notifications in <a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#event-notifications-open-beta"><u>open beta</u></a> earlier this year, we’ve made significant improvements based on your feedback:</p><ul><li><p>We increased reliability of event notifications with throughput improvements from Queues. R2 event notifications can now scale to thousands of writes per second.</p></li><li><p>You can now configure event notifications directly from the Cloudflare dashboard (in addition to <a href="https://developers.cloudflare.com/workers/wrangler/commands/#notification-create"><u>Wrangler</u></a>).</p></li><li><p>There is now support for receiving notifications triggered by <a href="https://developers.cloudflare.com/r2/buckets/object-lifecycles/"><u>object lifecycle deletes</u></a>.</p></li><li><p>You can now set up multiple notification rules for a single queue on a bucket.</p></li></ul><p>Visit <a href="https://developers.cloudflare.com/r2/buckets/event-notifications/"><u>our documentation</u></a> to learn about how to set up event notifications for your R2 buckets.</p><h2>Removing the serverless microservices tax: No more request fees for Service Bindings and Tail Workers</h2><p>Earlier this year, we quietly changed Workers pricing to lower your costs. As of July 2024, you are no longer charged for requests between Workers on your account made via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Service Bindings</u></a>, or for invocations of <a href="https://developers.cloudflare.com/workers/observability/logging/tail-workers/"><u>Tail Workers.</u></a> For example, let’s say you have the following chain of Workers: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PTgWu9XiGJNoWrHTduQdB/84e1f6ee0788f99684440a9db7b4e6c1/BLOG-2517_11.png" />
          </figure><p>Each request from a client results in three Workers invocations. Previously, we charged you for each of these invocations, plus the CPU time for each of these Workers. With this change, we only charge you for the first request from the client, plus the CPU time used by each Worker.</p><p>This eliminates the additional cost of breaking a monolithic serverless app into microservices. In 2023, we introduced new <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>pricing based on CPU time</u></a>, rather than duration, so you don’t have to worry about being billed for time spent waiting on I/O. This includes I/O to other Workers. With this change, you’re only billed for the first request in the chain, eliminating the other additional cost of using multiple Workers.</p><p>When you build microservices on Workers, you face fewer trade offs than on other compute platforms. Service bindings have <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>zero network overhead</u></a> by default, a built-in <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript RPC system</u></a>, and a security model with <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>fewer footguns and simpler configuration</u></a>. We’re excited to improve this further with this pricing change.</p><h2>Image optimization is available to everyone for free — no subscription needed</h2><p>Starting today, you can use <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>Cloudflare Images</u></a> for free to optimize your images with up to 5,000 transformations per month.</p><p>Large, oversized images can throttle your application speed and page load times. 
We built <a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a> to let you dynamically optimize images in the correct dimensions and formats for each use case, all while storing only the original image.</p><p>In the spirit of Birthday Week, we’re making image optimization available to everyone with a Cloudflare account, no subscription needed. You’ll be able to use Images to transform images that are stored outside of Images, such as in R2.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49UPZRpeAp79qugqstqbT7/5e23fb4c7a458f5d00b401383bd6e777/BLOG-2517_12.png" />
          </figure><p>Transformations are served from your zone through a specially formatted URL with parameters that specify how an image should be optimized. For example, the transformation URL below uses the <code>format</code> parameter to automatically serve the image in the most optimal format for the requesting browser:</p>
            <pre><code>https://example.com/cdn-cgi/image/format=auto/thumbnail.png</code></pre>
            <p>This means that the original PNG image may be served as AVIF to one user and WebP to another. Without a subscription, transforming images from remote sources is free up to 5,000 unique transformations per month. Once you exceed this limit, any already cached transformations will continue to be served, but you’ll need a <a href="https://dash.cloudflare.com/?to=/:account/images"><u>paid Images plan</u></a> to request new transformations or to purchase storage within Images.</p><p>To get started, navigate to <a href="https://dash.cloudflare.com/?to=/:account/images"><u>Images in the dashboard</u></a> to enable transformations on your zone.</p>
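            <p>Because transformations are just specially formatted URLs, they are easy to generate programmatically. Here is a small sketch that assembles a <code>/cdn-cgi/image/</code> URL from an options object; <code>width</code>, <code>quality</code>, and <code>format</code> are documented transformation parameters, while the zone and image path below are placeholders:</p>
            <pre><code>// Build a /cdn-cgi/image/ transformation URL from an options object.
// The zone and source path are example values.
function transformUrl(zone, options, sourcePath) {
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `https://${zone}/cdn-cgi/image/${opts}${sourcePath}`;
}

const url = transformUrl(
  "example.com",
  { width: 800, quality: 75, format: "auto" },
  "/thumbnail.png"
);
// url is "https://example.com/cdn-cgi/image/width=800,quality=75,format=auto/thumbnail.png"</code></pre>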
    <div>
      <h2>Dive deep into more announcements from Builder Day</h2>
      <a href="#dive-deep-into-more-announcements-from-builder-day">
        
      </a>
    </div>
    <p>We shipped so much that we couldn’t possibly fit it all in one blog post. These posts dive into the technical details of what we’re announcing at Builder Day:</p><ul><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster"><u>Cloudflare’s Bigger, Better, Faster AI platform</u></a></p></li><li><p><a href="https://blog.cloudflare.com/making-workers-ai-faster"><u>Making Workers AI faster with KV cache compression, speculative decoding, and upgraded hardware</u></a></p></li><li><p><a href="https://blog.cloudflare.com/faster-workers-kv"><u>We made Workers KV up to 3x faster — here’s the data</u></a></p></li><li><p><a href="https://blog.cloudflare.com/sqlite-in-durable-objects"><u>Zero-latency SQLite storage in every Durable Object</u></a></p></li></ul>
    <div>
      <h2>Build the next big thing on Cloudflare</h2>
      <a href="#build-the-next-big-thing-on-cloudflare">
        
      </a>
    </div>
    <p>Cloudflare is for builders, and you can start building with everything we’re announcing at Builder Day right away. We’re now offering qualified startups <a href="http://blog.cloudflare.com/startup-program-250k-credits"><u>$250,000 in credits to use on our Developer Platform</u></a>, so that you can get going even faster and become the next company to reach hypergrowth scale with a small team, without wasting time provisioning infrastructure or doing undifferentiated heavy lifting. Focus on shipping, and we’ll take care of the rest.</p><p>Apply to the startup program <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a>, or stop by and say hello in the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <guid isPermaLink="false">6ct91ZmJYzPu9n9pt8sNBm</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Rohin Lohe</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
            <dc:creator>Nevi Shah</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making state easy with D1 GA, Hyperdrive, Queues and Workers Analytics Engine updates]]></title>
            <link>https://blog.cloudflare.com/making-full-stack-easier-d1-ga-hyperdrive-queues/</link>
            <pubDate>Mon, 01 Apr 2024 13:00:06 GMT</pubDate>
            <description><![CDATA[ We kick off the week with announcements that help developers build stateful applications on top of Cloudflare, including making D1, our SQL database and Hyperdrive, our database accelerating service, generally available ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BKrpfqvHnl6yaHdXXsCoc/70280206c43fc4ecfa026968440f52f0/image4-31.png" />
            
            </figure>
    <div>
      <h3>Making full-stack easier</h3>
      <a href="#making-full-stack-easier">
        
      </a>
    </div>
    <p>Today might be April Fools, and while we like to have fun as much as anyone else, we like to use this day for serious announcements. In fact, as of today, there are over 2 million developers building on top of Cloudflare’s platform — that’s no joke!</p><p>To kick off this Developer Week, we’re flipping the big “production ready” switch on three products: <a href="https://developers.cloudflare.com/d1/">D1, our serverless SQL database</a>; <a href="https://developers.cloudflare.com/hyperdrive/">Hyperdrive</a>, which makes your <i>existing</i> databases feel like they’re distributed (and faster!); and <a href="https://developers.cloudflare.com/analytics/analytics-engine/">Workers Analytics Engine</a>, our time-series database.</p><p>We’ve been on a mission to allow developers to bring their entire stack to Cloudflare for some time, but what might an application built on Cloudflare look like?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5D3F21rYXhLv0bI6FID3Kc/4b0ca6dfc52e168a852599345e111a02/image6-11.png" />
            
            </figure><p>The diagram itself shouldn’t look too different from the tools you’re already familiar with: you want a <a href="https://developers.cloudflare.com/d1/">database</a> for your core user data. <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">Object storage</a> for assets and user content. Maybe a <a href="https://developers.cloudflare.com/queues/">queue</a> for background tasks, like email or upload processing. A <a href="https://developers.cloudflare.com/kv/">fast key-value store</a> for runtime configuration. Maybe even a <a href="https://developers.cloudflare.com/analytics/analytics-engine/">time-series database</a> for aggregating user events and/or performance data. And that’s before we get to <a href="https://developers.cloudflare.com/workers-ai/">AI</a>, which is increasingly becoming a core part of many applications in search, recommendation and/or image analysis tasks (at the very least!).</p><p>Yet, without having to think about it, this architecture runs on Region: Earth, which means it’s scalable, reliable and fast — all out of the box.</p>
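            <p>On Cloudflare, each of those pieces attaches to your Worker as a binding declared in its configuration. As a rough sketch (all names and IDs below are placeholders, and a real project would likely use only a subset), a <code>wrangler.toml</code> for the architecture above might look like:</p>
            <pre><code># Sketch of a wrangler.toml wiring up the stack described above.
# All names and IDs are placeholders.
name = "my-full-stack-app"
main = "src/index.ts"

[[d1_databases]]                 # core user data (D1)
binding = "DB"
database_name = "app-db"
database_id = "YOUR_DATABASE_ID"

[[r2_buckets]]                   # object storage for assets and user content (R2)
binding = "BUCKET"
bucket_name = "app-assets"

[[queues.producers]]             # background tasks (Queues)
binding = "JOBS"
queue = "app-jobs"

[[kv_namespaces]]                # runtime configuration (KV)
binding = "CONFIG"
id = "YOUR_NAMESPACE_ID"

[[analytics_engine_datasets]]    # time-series events (Workers Analytics Engine)
binding = "EVENTS"

[ai]                             # Workers AI
binding = "AI"</code></pre>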
    <div>
      <h3>D1 GA: Production Ready</h3>
      <a href="#d1-ga-production-ready">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FBwcKFjSHTCL2LcJRtNCo/46c6a403e7f8c743ac8a4dff252d85e4/image2-35.png" />
            
            </figure><p>Your core database is one of the most critical pieces of your infrastructure. It needs to be ultra-reliable. It can’t lose data. It needs to scale. And so we’ve been heads down over the last year getting the pieces into place to make sure D1 is production-ready, and we’re extremely excited to say that D1 — our <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>global, serverless SQL database</u></a> — is now Generally Available.</p><p>The GA for D1 lands some of the most asked-for features, including:</p><ul><li><p>Support for 10GB databases — and 50,000 databases per account;</p></li><li><p>New data export capabilities; and</p></li><li><p>Enhanced query debugging (we call it “D1 Insights”) — which lets you understand which queries consume the most time or cost, or are just plain inefficient…  </p></li></ul><p>… to empower developers to build production-ready applications with D1 to meet all their relational SQL needs. And importantly, in an era where the concept of a “<a href="https://www.cloudflare.com/plans/free/"><u>free plan</u></a>” or “hobby plan” is seemingly at risk, we have no intention of removing the free tier for D1 or reducing the <i>25 billion row reads</i> included in the $5/mo Workers Paid plan:</p><table><colgroup><col></col><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Rows Read</span></p></td><td><p><span>Rows Written</span></p></td><td><p><span>Storage</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>First 25 billion / month included</span><span><br /></span><span><br /></span><span>+ $0.001 / million rows</span></p></td><td><p><span>First 50 million / month included</span><span><br /></span><span><br /></span><span>+ $1.00 / million rows</span></p></td><td><p><span>First 5 GB included</span></p><br /><p><span>+ $0.75 / GB-mo</span></p></td></tr><tr><td><p><span>Workers 
Free</span></p></td><td><p><span>5 million / day</span></p></td><td><p><span>100,000 / day</span></p></td><td><p><span>5 GB (total)</span></p></td></tr></tbody></table><p><i>For those who’ve been following D1 since the start: this is the same pricing we announced at </i><a href="/d1-open-beta-is-here"><i>open beta</i></a>.</p><p>But things don’t just stop at GA: we have some major new features lined up for D1, including global read replication, even larger databases, more <a href="https://developers.cloudflare.com/d1/reference/time-travel/">Time Travel</a> capabilities that will allow you to branch your database, and new APIs for dynamically querying and/or creating new databases on the fly from within a Worker.</p><p>D1’s read replication will automatically deploy read replicas as needed to get data closer to your users, without you having to spin up replicas, manage scaling, or run into consistency (replication lag) issues. Here’s a sneak preview of what D1’s upcoming Replication API looks like:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    const {pathname} = new URL(request.url);
    let resp = null;
    // Resume the session from a commit token passed by the client, if any.
    const token = request.headers.get("x-d1-token") ?? undefined;
    const session = env.DB.withSession(token); // An optional commit token or mode

    // Handle requests within the session.
    if (pathname === "/api/orders/list") {
      // This statement is a read query, so it will work against any
      // replica that has a commit equal or later than `token`.
      const { results } = await session.prepare("SELECT * FROM Orders")
        .run();
      resp = Response.json(results);
    } else if (pathname === "/api/orders/add") {
      const order = await request.json();

      // This statement is a write query, so D1 will send the query to
      // the primary, which always has the latest commit token.
      await session.prepare("INSERT INTO Orders VALUES (?, ?, ?)")
        .bind(order.orderName, order.customer, order.value)
        .run();

      // In order for the application to be correct, this SELECT
      // statement must see the results of the INSERT statement above.
      //
      // D1's new Session API keeps track of commit tokens for queries
      // within the session and will ensure that we won't execute this
      // query until whatever replica we're using has seen the results
      // of the INSERT.
      const { results } = await session.prepare("SELECT COUNT(*) FROM Orders")
        .run();
      resp = Response.json(results);
    }

    if (resp === null) {
      return new Response("Not found", { status: 404 });
    }

    // Set the token so we can continue the session in another request.
    resp.headers.set("x-d1-token", session.latestCommitToken);
    return resp;
  }
}</code></pre>
            <p>Importantly, we will give developers the ability to maintain session-based consistency, so that users still see their own changes reflected, whilst still benefiting from the performance and latency gains that replication can bring.</p><p>You can learn more about how D1’s read replication works under the hood <a href="/building-d1-a-global-database/">in our deep-dive post</a>, and if you want to start building on D1 today, <a href="https://developers.cloudflare.com/d1/">head to our developer docs</a> to create your first database.</p>
    <div>
      <h3>Hyperdrive: GA</h3>
      <a href="#hyperdrive-ga">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47WBGHvqFpRkza2ldA5RBi/7f7f47055e1f4f066e213b88e9e98737/image1-37.png" />
            
            </figure><p>We launched Hyperdrive into open beta <a href="/hyperdrive-making-regional-databases-feel-distributed">last September during Birthday Week</a>, and it’s now Generally Available — or in other words, battle-tested and production-ready.</p><p>If you’re not caught up on what Hyperdrive is, it’s designed to make the centralized databases you already have feel like they’re global. We use our <a href="https://www.cloudflare.com/network/">global network</a> to get faster routes to your database, keep connection pools primed, and cache your most frequently run queries as close to users as possible.</p><p>Importantly, Hyperdrive supports the most popular drivers and ORM (Object Relational Mapper) libraries out of the box, so you don’t have to re-learn or re-write your queries:</p>
            <pre><code>// Use the popular 'pg' driver? Easy. Hyperdrive just exposes a connection string
// to your Worker.
const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
await client.connect();

// Prefer using an ORM like Drizzle? Use it with Hyperdrive too, by wrapping
// the same Hyperdrive-backed client.
// https://orm.drizzle.team/docs/get-started-postgresql#node-postgres
const db = drizzle(client);</code></pre>
            <p>But the work on Hyperdrive doesn’t stop just because it’s now “GA”. Over the next few months, we’ll be bringing support for the <i>other</i> most widely deployed database engine there is: MySQL. We’ll also be bringing support for connecting to databases inside private networks (including cloud VPC networks) via <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/">Cloudflare Tunnel</a> and <a href="https://developers.cloudflare.com/magic-wan/">Magic WAN</a>. On top of that, we plan to bring more configurability around invalidation and caching strategies, so that you can make more fine-grained decisions around performance vs. data freshness.</p><p>As we thought about how we wanted to price Hyperdrive, we realized that it just didn’t seem right to charge for it. After all, the performance benefits from Hyperdrive are not only significant, but essential to connecting to traditional database engines. Without Hyperdrive, paying the latency overhead of 6+ round-trips to connect &amp; query your database per request just isn’t right.</p><p>And so we’re happy to announce that <b>for any developer on a Workers Paid plan, Hyperdrive is free</b>. That includes both query caching and connection pooling, as well as the ability to create multiple Hyperdrives — to separate different applications, prod vs. staging, or to provide different configurations (cached vs. 
uncached, for example).</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Price per query</span></p></td><td><p><span>Connection Pooling</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>$0 </span></p></td><td><p><span>$0</span></p></td></tr></tbody></table><p>To get started with Hyperdrive, <a href="https://developers.cloudflare.com/hyperdrive/">head over to the docs</a> to learn how to connect your existing database and start querying it from your Workers.</p>
    <div>
      <h3>Queues: Pull From Anywhere</h3>
      <a href="#queues-pull-from-anywhere">
        
      </a>
    </div>
    <p>The task queue is an increasingly critical part of building a modern, full-stack application, and this is what we had in mind when we <a href="/cloudflare-queues-open-beta">originally announced</a> the open beta of <a href="https://developers.cloudflare.com/queues/">Queues</a>. We’ve since been working on several major Queues features, and we’re launching two of them this week: pull-based consumers and new message delivery controls.</p><p>Any HTTP-speaking client <a href="https://developers.cloudflare.com/queues/reference/pull-consumers/">can now pull messages from a queue</a>: call the new /pull endpoint on a queue to request a batch of messages, and call the /ack endpoint to acknowledge each message (or batch of messages) as you successfully process them:</p>
            <pre><code># Pull and acknowledge messages from a Queue using any HTTP client
$ curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull" -X POST --data '{"visibilityTimeout":10000,"batchSize":100}' \
     -H "Authorization: Bearer ${QUEUES_TOKEN}" \
     -H "Content-Type:application/json"

# Ack the messages you processed successfully; mark others to be retried.
$ curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack" -X POST --data '{"acks":["lease-id-1", "lease-id-2"],"retries":["lease-id-100"]}' \
     -H "Authorization: Bearer ${QUEUES_TOKEN}" \
     -H "Content-Type:application/json"</code></pre>
            <p>A pull-based consumer can run anywhere, allowing you to run queue consumers alongside your existing legacy cloud infrastructure. Teams inside Cloudflare adopted this early on, with one use-case focused on writing device telemetry to a queue from our <a href="https://www.cloudflare.com/network/">310+ data centers</a> and consuming within some of our back-of-house infrastructure running on Kubernetes. Importantly, our globally distributed queue infrastructure means that messages are retained within the queue until the consumer is ready to process them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UUkrE3bqqIdQiemV49Hal/496c2d539b366a794d58479c99b1c9ec/image5-19.png" />
            
            </figure><p>Queues also <a href="https://developers.cloudflare.com/queues/reference/batching-retries/#delay-messages">now supports delaying messages</a>, both when sending to a queue, as well as when marking a message for retry. This can be useful to queue (pun intended) tasks for the future, as well as to apply a backoff mechanism if an upstream API or infrastructure has rate limits that require you to pace how quickly you are processing messages.</p>
            <pre><code>// Apply a delay to a message when sending it
await env.YOUR_QUEUE.send(msg, { delaySeconds: 3600 })

// Delay a message (or a batch of messages) when marking it for retry
for (const msg of batch.messages) {
	msg.retry({delaySeconds: 300})
} </code></pre>
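<p>One way to use these retry delays is an exponential backoff keyed off how many times a message has been attempted. The doubling schedule and the cap below are illustrative choices, not defaults:</p>

```javascript
// Exponential backoff for retries using the delaySeconds option
// shown above: double the delay on every attempt, up to a cap.
function backoffSeconds(attempts, base = 30, cap = 3600) {
  return Math.min(cap, base * 2 ** (attempts - 1));
}

// In a consumer, assuming a per-message attempts counter:
// msg.retry({ delaySeconds: backoffSeconds(msg.attempts) });
```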
            <p>We’ll also be bringing substantially increased per-queue throughput over the coming months on the path to getting Queues to GA. It’s important to us that Queues is <i>extremely</i> reliable: lost or dropped messages mean that a user doesn’t get their order confirmation email, that password reset notification, and/or their uploads processed — each of those is user-impacting and hard to recover from.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RhxWjKGRmoJtgQ4toybvY/57469d1ee721096a3c2b7551bbd277a4/image3-35.png" />
            
            </figure>
    <div>
      <h3>Workers Analytics Engine is GA</h3>
      <a href="#workers-analytics-engine-is-ga">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/analytics/analytics-engine/">Workers Analytics Engine</a> provides unlimited-cardinality analytics at scale, via a built-in API to write data points from Workers, and a SQL API to query that data.</p><p>Workers Analytics Engine is backed by the same ClickHouse-based system we have depended on for years at Cloudflare. We use it ourselves to observe the health of our own services, to capture product usage data for billing, and to answer questions about specific customers’ usage patterns. At least one data point is written to this system on nearly every request to Cloudflare’s network. Workers Analytics Engine lets you build your own custom analytics using this same infrastructure, while we manage the hard parts for you.</p><p>Since <a href="/workers-analytics-engine">launching in beta</a>, developers have started depending on Workers Analytics Engine for these same use cases and more, from large enterprises to open-source projects like <a href="https://github.com/benvinegar/counterscale/">Counterscale</a>. Workers Analytics Engine has been operating at production scale with mission-critical workloads for years — but we hadn’t shared anything about pricing, until today.</p><p>We are keeping Workers Analytics Engine pricing simple, and based on two metrics:</p><ol><li><p><b>Data points written</b> — every time you call <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/#3-write-data-from-your-worker">writeDataPoint()</a> in a Worker, this counts as one data point written. Every data point costs the same amount — unlike other platforms, there is no penalty for adding dimensions or cardinality, and no need to predict what the size and cost of a compressed data point might be.</p></li><li><p><b>Read queries</b> — every time you post to the Workers Analytics Engine <a href="https://developers.cloudflare.com/analytics/analytics-engine/sql-api/">SQL API</a>, this counts as one read query. 
Every query costs the same amount — unlike other platforms, there is no penalty for query complexity, and no need to reason about the number of rows of data that will be read by each query.</p></li></ol><p>Both the Workers Free and Workers Paid plans will include an allocation of data points written and read queries, with pricing for additional usage as follows:</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Data points written</span></p></td><td><p><span>Read queries</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>10 million included per month</span></p><p><span><br /></span><span>+$0.25 per additional million</span></p></td><td><p><span>1 million included per month</span></p><p><span><br /></span><span>+$1.00 per additional million</span></p></td></tr><tr><td><p><span>Workers Free</span></p></td><td><p><span>100,000 included per day</span></p></td><td><p><span>10,000 included per day</span></p></td></tr></tbody></table><p>With this pricing, you can answer, “how much will Workers Analytics Engine cost me?” by counting the number of times you call a function in your Worker, and how many times you make a request to an HTTP API endpoint. Napkin math, rather than spreadsheet math.</p><p>This pricing will be made available to everyone in the coming months. Between now and then, Workers Analytics Engine continues to be available at no cost. You can <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/#limits">start writing data points from your Worker today</a> — it takes just a few minutes and less than 10 lines of code to start capturing data. We’d love to hear what you think.</p>
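<p>That napkin math can be written down directly, using the Workers Paid numbers from the table above:</p>

```javascript
// Estimate a monthly Workers Analytics Engine bill on Workers Paid,
// per the table above: 10M data points and 1M read queries included,
// then $0.25 and $1.00 per additional million, respectively.
function monthlyCostUSD(dataPointsWritten, readQueries) {
  const extraWrites = Math.max(0, dataPointsWritten - 10_000_000);
  const extraReads = Math.max(0, readQueries - 1_000_000);
  return (extraWrites / 1_000_000) * 0.25 + (extraReads / 1_000_000) * 1.0;
}

// e.g. 50M data points written and 3M read queries in a month:
// 40 extra millions written ($10) + 2 extra millions read ($2) = $12
```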
    <div>
      <h3>The week is just getting started</h3>
      <a href="#the-week-is-just-getting-started">
        
      </a>
    </div>
    <p>Tune in to what we have in store for you tomorrow on our second day of Developer Week. If you have questions or want to show off something cool you already built, please join our developer <a href="https://discord.cloudflare.com/"><i>Discord</i></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">5O8kPvrc2dyHIwmf2c0shv</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Debug Queues from the dash: send, list, and ack messages]]></title>
            <link>https://blog.cloudflare.com/debug-queues-from-dash/</link>
            <pubDate>Fri, 11 Aug 2023 13:01:19 GMT</pubDate>
            <description><![CDATA[ Today, we are thrilled to share that customers can now debug their Queues directly from the Cloudflare dashboard. This new feature allows customers to send, list, and acknowledge messages in their Queues from the Cloudflare dashboard, enabling a more user-friendly way to interact with their Queues ]]></description>
            <content:encoded><![CDATA[ <p>Today, August 11, 2023, we are excited to announce a new debugging workflow for Cloudflare Queues. Customers using Cloudflare Queues can now send, list, and acknowledge messages directly from the Cloudflare dashboard, enabling a more user-friendly way to interact with Queues. Though it can be difficult to debug asynchronous systems, it’s now easy to examine a queue’s state and test the full flow of information through a queue.</p><p>With <a href="https://developers.cloudflare.com/queues/learning/delivery-guarantees/">guaranteed delivery</a>, <a href="https://developers.cloudflare.com/queues/learning/batching-retries/">message batching</a>, <a href="https://developers.cloudflare.com/queues/learning/consumer-concurrency/">consumer concurrency</a>, and more, Cloudflare Queues is a powerful tool to connect services reliably and efficiently. Queues integrate deeply with the existing Cloudflare Workers ecosystem, so developers can also leverage our many other products and services. Queues can be bound to producer Workers, which allow Workers to send messages to a queue, and to consumer Workers, which pull messages from the queue.</p><p>We’ve received feedback that while Queues are effective and performant, customers find it hard to debug them. After a message is sent to a queue from a producer worker, there’s no way to inspect the queue’s contents without a consumer worker. The limited transparency was frustrating, and the need to write a skeleton worker just to debug a queue was high-friction.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7zNFeogvcAV8R2ErH6SUg9/2c26577d3665a795f5973bf126990728/image7.png" />
            
            </figure><p>Now, with the addition of new features to send, list, and acknowledge messages in the Cloudflare dashboard, we’ve unlocked a much simpler debugging workflow. You can send messages from the Cloudflare dashboard to check if your consumer worker is processing messages as expected, and verify your producer worker’s output by previewing messages from the Cloudflare dashboard.</p><p>The pipeline of messages through a queue is now more open and easily examined. Users just getting started with Cloudflare Queues also no longer have to write code to send their first message: it’s as easy as clicking a button in the Cloudflare dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZBzVg52jwP0bmfPVpQ6ix/09b46e8e51c4cfa88c14db169b97d7a7/image6.png" />
            
            </figure>
    <div>
      <h3>Sending messages</h3>
      <a href="#sending-messages">
        
      </a>
    </div>
    <p>These features are located in a new <b>Messages</b> tab on any queue’s page. Scroll to <b>Send message</b> to open the message editor.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UFuyA4fJD8kmg0yxK9Y3n/ec6aa19f26c5be9924ef23a55f270530/image1-4.png" />
            
            </figure><p>From here, you can write a message and click <b>Send message</b> to send it to your queue. You can also choose to send JSON, which opens a JSON editor with syntax highlighting and formatting. If you’ve saved your message as a file locally, you can drag-and-drop the file over the textbox or click <b>Upload a file</b> to send it as well.</p><p>This feature makes testing changes in a queue’s consumer worker much easier. Instead of modifying an existing producer worker or creating a new one, you can send one-off messages. You can also easily verify if your queue consumer settings are behaving as expected: send a few messages from the Cloudflare dashboard to check that messages are batched as desired.</p><p>Behind the scenes, this feature leverages the same pipeline that Cloudflare Workers uses to send messages, so you can be confident that your message will be processed as if sent via a Worker.</p>
    <div>
      <h3>Listing messages</h3>
      <a href="#listing-messages">
        
      </a>
    </div>
    <p>On the same page, you can also inspect the messages you just sent from the Cloudflare dashboard. On any queue’s page, open the <b>Messages</b> tab and scroll to <b>Queued messages</b>.</p><p>If you have a consumer attached to your queue, you’ll fetch a batch of messages of the same size as configured in your queue consumer settings by default, to provide a realistic view of what would be sent to your consumer worker. You can change this value to preview messages one-at-a-time or even in much larger batches than would be normally sent to your consumer.</p><p>After fetching a batch of messages, you can preview the message’s body, even if you’ve sent raw bytes or a JavaScript object supported by the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm">structured clone</a> algorithm. You can also check the message’s timestamp; number of retries; producer source, such as a Worker or the Cloudflare dashboard; and type, such as text or JSON. This information can help you debug the queue’s current state and inspect where and when messages originated from.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/359eKsDTJEyl5nhFaKi7ZT/3d9a43b9de9a2f7bd94694f3bde18791/image5-2.png" />
            
            </figure><p>The batch of messages that’s returned is the same batch that would be sent to your consumer Worker on its next run. Messages are even guaranteed to be in the same order on the UI as sent to your consumer. This feature grants you a looking-glass view into your queue, matching the exact behavior of a consumer worker. This works especially well for debugging messages sent by producer workers and verifying queue consumer settings.</p><p>Listing messages from the Cloudflare dashboard also doesn’t interfere with an existing connected consumer. Messages that are previewed from the Cloudflare dashboard stay in the queue and do not have their number of retries affected.</p><p>This ‘peek’ functionality is unique to Cloudflare Queues: Amazon SQS bumps the number of retries when a message is viewed, and RabbitMQ retries the message, forcing it to the back of the queue. Cloudflare Queues’ approach means that previewing messages does not have any unintended side effects on your queue and your consumer. If you ever need to debug queues used in production, don’t worry: listing messages is entirely safe.</p><p>You can also remove messages from your queue from the Cloudflare dashboard. If you’d like to remove a message or clear the full batch from the queue, you can select messages to acknowledge. This is useful for preventing buggy messages from being repeatedly retried without having to write a dummy consumer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jJBUVFIiKOQu3yVKb8wA8/d5aff1ecba3687cdb1ab89812e507524/image4-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/slOwPKMoZM7VaEhodubH2/a1a7c32768cd3d3bb5abd248b277a6e3/image2-3.png" />
            
            </figure><p>You might have noticed that this message preview feature operates similarly to another popular feature request for an HTTP API to pull batches of messages from a queue. Customers will be able to make a request to the API endpoint to receive a batch of messages, then acknowledge the batch to remove the messages from the queue. Under the hood, both listing messages from the Cloudflare dashboard and HTTP Pull/Ack use a common infrastructure, and HTTP Pull/Ack is coming very soon!</p><p>These debugging features have already been invaluable for testing example applications we’ve built on Cloudflare Queues. At an internal hack week event, we built a web crawler with Queues as an example use-case (check out the tutorial <a href="https://developers.cloudflare.com/queues/examples/web-crawler-with-browser-rendering/">here</a>!). During development, we took advantage of this user-friendly way to send messages to quickly iterate on a consumer worker before we built a producer worker. And when we encountered bugs in our consumer worker, the message previews helped us realize we were sending malformed messages, and the message acknowledgement feature gave us an easy way to remove them from the queue.</p>
    <div>
      <h3>New Queues debugging features — available today!</h3>
      <a href="#new-queues-debugging-features-available-today">
        
      </a>
    </div>
    <p>The Cloudflare dashboard features announced today provide more transparency into your application and enable more user-friendly debugging.</p><p>All Cloudflare Queues customers now have access to these new debugging tools. And if you’re not already using Queues, you can join the Queues Open Beta by <a href="https://dash.cloudflare.com/?to=/:account/workers/queues">enabling Cloudflare Queues here</a>. Get started on Cloudflare Queues with our <a href="https://developers.cloudflare.com/queues/get-started/">guide</a> and create your next app with us today! Your first message is a single click away.</p>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3f9TXSoqcNe3MQAyFACPdw</guid>
            <dc:creator>Emilie Ma</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Queues: messages at your speed with consumer concurrency and explicit acknowledgement]]></title>
            <link>https://blog.cloudflare.com/messages-at-your-speed-with-concurrency-and-explicit-acknowledgement/</link>
            <pubDate>Fri, 19 May 2023 13:00:33 GMT</pubDate>
            <description><![CDATA[ Queues is faster than ever before! Now queues will automatically scale up your consumers, clearing out backlogs in a flash. Explicit Acknowledgement allows developers to acknowledge or retry individual messages in a batch, preventing work from being repeated. ]]></description>
            <content:encoded><![CDATA[ <p>Communicating between systems can be a balancing act that has a major impact on your business. <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> have limits, billing frequently depends on usage, and end-users are always looking for more speed in the services they use. With so many conflicting considerations, it can feel like a challenge to get it just right. Cloudflare Queues is a tool to make this balancing act simple. With our latest features like consumer concurrency and explicit acknowledgment, it’s easier than ever for developers to focus on writing great code, rather than worrying about the fees and rate limits of the systems they work with.</p><p>Queues is a messaging service, enabling developers to send and receive messages across systems asynchronously with guaranteed delivery. It integrates directly with Cloudflare Workers, making for easy message production and consumption working with the many products and services we offer.</p>
    <div>
      <h2>What’s new in Queues?</h2>
      <a href="#whats-new-in-queues">
        
      </a>
    </div>
    
    <div>
      <h3>Consumer concurrency</h3>
      <a href="#consumer-concurrency">
        
      </a>
    </div>
    <p>Oftentimes, the systems we pull data from can produce information faster than other systems can consume it. This can occur when consumption involves processing information, storing it, or exchanging information with a third-party system. As a result, a queue can sometimes fall behind where it should be. At Cloudflare, a queue shouldn't be a quagmire. That’s why we’ve introduced Consumer Concurrency.</p><p>With Concurrency, we automatically scale up the number of consumers needed to match the speed of information coming into any given queue. In this way, customers no longer have to worry about an ever-growing backlog of information bogging down their system.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>When setting up a queue, developers can set a Cloudflare Workers script as a target to send messages to. With concurrency enabled, Cloudflare will invoke multiple instances of the selected Worker script to keep the messages in the queue moving effectively. This feature is enabled by default for every queue and set to automatically scale.</p><p>Autoscaling considers the following factors when spinning up consumers: the number of messages in a queue, the rate of new messages, and successful vs. unsuccessful consumption attempts.</p><p>If a queue has enough messages in it, concurrency will increase each time a message batch is successfully processed. Concurrency is decreased when message batches encounter errors. Customers can set a <code>max_concurrency</code> value in the Dashboard or via Wrangler, which caps how many consumers can be automatically created to perform processing for a given queue.</p><p>Setting the <code>max_concurrency</code> value manually can be helpful in situations where producer data arrives in bursts, where the datasource API is rate limited, or where the datasource API costs more with heavier usage. Setting a maximum concurrency value manually allows customers to optimize their workflows for factors beyond speed.</p>
            <pre><code># in your wrangler.toml file

[[queues.consumers]]
  queue = "my-queue"

# max_concurrency can be set to a number between 1 and 10
# this defines the total number of consumers running simultaneously
max_concurrency = 7</code></pre>
            <p>To learn more about concurrency you can check out our developer documentation <a href="https://developers.cloudflare.com/queues/learning/consumer-concurrency/">here</a>.</p>
    <div>
      <h3>Concurrency in practice</h3>
      <a href="#concurrency-in-practice">
        
      </a>
    </div>
    <p>It’s baseball season in the US, and for many of us that means fantasy baseball is back! This year is the year we finally write a program that uses data and statistics to pick a winning team, as opposed to picking players based on “feelings” and “vibes”. We’re engineers after all, and baseball is a game of rules. If the Oakland A’s can do it, so can we!</p><p>So how do we put this together? We’ll need a few things:</p><ol><li><p>A list of potential players</p></li><li><p>An API to pull historical game statistics from</p></li><li><p>A queue to send this data to its consumer</p></li><li><p>A Worker script to crunch the numbers and generate a score</p></li></ol><p>A developer can pull from a baseball reference API into a Workers script, and from that worker pass this information to a queue. Historical data is… historical, so we can pull data into our queue as fast as the baseball API will allow us. For our list of potential players, we pull statistics for each game they’ve played. This includes everything from batting averages, to balls caught, to game day weather. Score!</p>
            <pre><code>// Get data from a third-party API and pass it along to a queue

const response = await fetch("http://example.com/baseball-stats.json");
const gamesPlayedJSON = await response.json();

for (const game of gamesPlayedJSON) {
  // Send each game's JSON to the queue bound in your Worker's environment
  await env.baseballqueue.send(game);
}</code></pre>
            <p>Our producer Workers script then passes these statistics onto the queue. As each game contains quite a bit of data, this results in hundreds of thousands of “game data” messages waiting to be processed in our queue. Without concurrency, we would have to wait for each batch of messages to be processed one at a time, taking minutes if not longer. But with Consumer Concurrency enabled, we watch as multiple instances of our worker script are invoked to process this information in no time!</p><p>Our Worker script would then take these statistics, apply a heuristic, and store the player name and a corresponding quality score into a database like a Workers KV store for easy access by your application presenting the data.</p>
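<p>A consumer along those lines might look like the sketch below. The <code>PLAYER_SCORES</code> KV binding and the scoring heuristic are hypothetical stand-ins, not part of a real project:</p>

```javascript
// Sketch of the consumer described above: score each game and store
// the result in Workers KV. PLAYER_SCORES and scoreGame() are
// hypothetical; export `worker` as your module's default in practice.
const worker = {
  async queue(batch, env, ctx) {
    for (const msg of batch.messages) {
      const game = msg.body;
      const score = scoreGame(game);
      // Keep the latest quality score per player for the app to read
      await env.PLAYER_SCORES.put(game.playerName, String(score));
    }
  },
};

// A toy heuristic: weight batting average heavily, then add fielding.
function scoreGame(game) {
  return game.battingAverage * 100 + game.ballsCaught;
}
```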
    <div>
      <h3>Explicit Acknowledgment</h3>
      <a href="#explicit-acknowledgment">
        
      </a>
    </div>
    <p>In Queues previously, a failure of a single message in a batch would result in the whole batch being resent to the consumer to be reprocessed. This resulted in extra cycles being spent on messages that were processed successfully, in addition to the failed message attempt. This hurts both customers and developers, slowing processing time, increasing complexity, and increasing costs.</p><p>With Explicit Acknowledgment, we give developers the precision and flexibility to handle each message individually in their consumer, negating the need to reprocess entire batches of messages. Developers can now tell their queue whether their consumer has properly processed each message, or alternatively if a specific message has failed and needs to be retried.</p><p>An acknowledgment of a message means that the message will not be retried if the batch fails. Only messages that were not acknowledged will be retried. Conversely, a message that is explicitly retried will be sent again from the queue to be reprocessed without impacting the processing of the rest of the messages currently being processed.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>In your consumer, there are four new methods you can call to explicitly acknowledge a given message: <code>ack()</code>, <code>retry()</code>, <code>ackAll()</code>, and <code>retryAll()</code>.</p><p>Both <code>ack()</code> and <code>retry()</code> can be called on individual messages. <code>ack()</code> tells a queue that the message has been processed successfully and that it can be deleted from the queue, whereas <code>retry()</code> tells the queue that this message should be put back on the queue and delivered in another batch.</p>
            <pre><code>async queue(batch, env, ctx) {
  for (const msg of batch.messages) {
    try {
      // Send our data to a third party for processing
      await fetch('https://thirdpartyAPI.example.com/stats', {
        method: 'POST',
        body: JSON.stringify(msg.body),
        headers: {
          'Content-type': 'application/json'
        }
      });
      // Acknowledge if successful
      msg.ack();
      // We don't have to re-process this if subsequent messages fail!
    } catch (error) {
      // Send the message back to the queue for a retry if there's an error
      msg.retry();
      console.log("Error processing", msg, error);
    }
  }
}</code></pre>
            <p><code>ackAll()</code> and <code>retryAll()</code> work similarly, but act on the entire batch of messages instead of individual messages.</p><p>For more details check out our developer documentation <a href="https://developers.cloudflare.com/queues/learning/batching-retries/">here</a>.</p>
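<p>These batch-level methods fit cases where every message in a batch shares one fate, such as a single bulk upload per batch. The upstream URL below is a stand-in, not a real endpoint:</p>

```javascript
// Sketch: use ackAll()/retryAll() when the whole batch succeeds or
// fails together, e.g. one bulk POST per batch (stand-in URL).
async function handleBatch(batch) {
  const bodies = batch.messages.map((m) => m.body);
  const res = await fetch("https://thirdpartyAPI.example.com/bulk", {
    method: "POST",
    headers: { "Content-type": "application/json" },
    body: JSON.stringify(bodies),
  });
  if (res.ok) {
    batch.ackAll(); // everything landed; drop the whole batch
  } else {
    batch.retryAll(); // nothing landed; redeliver the whole batch
  }
}
```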
    <div>
      <h3>Explicit Acknowledgment in practice</h3>
      <a href="#explicit-acknowledgment-in-practice">
        
      </a>
    </div>
    <p>In the course of making our Fantasy Baseball team picker, we notice that data isn’t always sent correctly from the baseball reference API. This results in data not being correctly parsed, and being rejected by our player heuristic.</p><p>Without Explicit Acknowledgment, the entire batch of baseball statistics would need to be retried. Thankfully, we can use Explicit Acknowledgment to avoid that, and tell our queue which messages were parsed successfully and which were not.</p>
            <pre><code>import heuristic from "baseball-heuristic";
export default {
  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
    for (const msg of batch.messages) {
      try {
        // Calculate the score based on the game stats
        heuristic.generateScore(msg.body)
        // Explicitly acknowledge results 
        msg.ack()
      } catch (err) {
        console.log(err)
        // Retry just this message
        msg.retry()
      } 
    }
  },
};</code></pre>
            
    <div>
      <h3>Higher throughput</h3>
      <a href="#higher-throughput">
        
      </a>
    </div>
    <p>Under the hood, we’ve been working on improvements to further increase the amount of messages per second each queue can handle. In the last few months, that number has quadrupled, improving from 100 to over 400 messages per second.</p><p>Scalability can be an essential factor when deciding which services to use to power your application. You want a service that can grow with your business. We are always aiming to improve our message throughput and hope to see this number quadruple again over the next year. We want to grow with you.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As our service grows, we want to provide our customers with more ways to interact with our service beyond the traditional Cloudflare Workers workflow. We know our customers’ infrastructure is often complex, spanning across multiple services. With that in mind, our focus will be on enabling easy connection to services both within the Cloudflare ecosystem and beyond.</p>
    <div>
      <h3>R2 as a consumer</h3>
      <a href="#r2-as-a-consumer">
        
      </a>
    </div>
    <p>Today, the only type of consumer you can configure for a queue is a Workers script. While Workers are incredibly powerful, we want to take it a step further and give customers a chance to write directly to other services, starting with <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>. Coming soon, customers will be able to select an R2 bucket in the Cloudflare Dashboard for a Queue to write to directly, no code required. This will save valuable developer time by avoiding the initial setup in a Workers script, and any maintenance that is required as services evolve. With R2 as a first party consumer in Queues, customers can simply select their bucket, and let Cloudflare handle the rest.</p>
    <div>
      <h3>HTTP pull</h3>
      <a href="#http-pull">
        
      </a>
    </div>
    <p>We're also working to allow you to consume messages from existing infrastructure you might have outside of Cloudflare. Cloudflare Queues will provide an HTTP API for each queue from which any consumer can pull batches of messages for processing. Customers simply make a request to the API endpoint for their queue, receive data they requested, then send an acknowledgment that they have received the data, so the queue can continue working on the next batch.</p>
    <div>
      <h3>Always working to be faster</h3>
      <a href="#always-working-to-be-faster">
        
      </a>
    </div>
    <p>For the Queues team, speed is always the focus: we understand our customers don't want bottlenecks in the performance of their applications. With this in mind, the team will keep looking for ways to increase the speed with which developers can build best-in-class applications on our developer platform. Whether it's reducing message processing time, shrinking the amount of code you need to manage, or giving developers more control over their application pipeline, we will continue to ship improvements that let you focus on the important things while we handle the rest.</p><p>Cloudflare Queues is currently in Open Beta and ready to power your most complex applications.</p><p>Check out our getting started <a href="https://developers.cloudflare.com/queues/learning/how-queues-works/">guide</a> and build your service with us today!</p>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6sSF3GnFonTy4zGf6McmFv</guid>
            <dc:creator>Charles Burnett</dc:creator>
            <dc:creator>Josh Wheeler</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built an open-source SEO tool using Workers, D1, and Queues]]></title>
            <link>https://blog.cloudflare.com/how-we-built-an-open-source-seo-tool-using-workers-d1-and-queues/</link>
            <pubDate>Thu, 02 Mar 2023 15:03:54 GMT</pubDate>
            <description><![CDATA[ In this blog post, I’m excited to show off some of the new tools in Cloudflare’s developer arsenal, D1 and Queues, to prototype and ship an internal tool for our SEO experts at Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p>Building applications on Cloudflare Workers has always been fun. Workers applications have low latency response times by default, and easy developer ergonomics thanks to Wrangler. It's no surprise that for years now, developers have been going from idea to production with Workers in just a few minutes.</p><p>Internally, we're no different. When a member of our team has a project idea, we often reach for Workers first, and not just for the MVP stage, but in production, too. Workers have been a secret ingredient to Cloudflare’s innovation for some time now, allowing us to build products like Access, Stream and Workers KV. Even better, when we have new ideas <i>and</i> we can use new Cloudflare products to build them, it's a great way to give feedback on those products.</p><p>We've discussed this in the past on the Cloudflare blog - in May last year, <a href="/new-dev-docs/">I wrote how we rebuilt Cloudflare's developer documentation</a> using many of the tools that had recently been released in the Workers ecosystem: Cloudflare Pages for hosting, and Bulk Redirects for the redirect rules. In November, <a href="/building-a-better-developer-experience-through-api-documentation/">we released a new version of our API documentation</a>, which again used Pages for hosting, and Pages functions for intelligent caching and transformation of our API schema.</p><p>In this blog post, I’m excited to show off some of the new tools in Cloudflare’s developer arsenal, <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1</a> and <a href="/introducing-cloudflare-queues/">Queues</a>, to prototype and ship an internal tool for our SEO experts at Cloudflare. We've made this project, which we're calling Prospector, open-source too - check it out in our <a href="https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-prospector"><code>cloudflare/templates</code></a> repo on GitHub. 
Whether you're a developer looking to understand how to use multiple parts of Cloudflare's developer stack together, or an SEO specialist who may want to deploy the tool in production, we've made it incredibly easy to get up and running.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AbvyVpEkBfnOITizlfkhT/73156db0a0fe274ead622a677b6eb959/image1.png" />
            
            </figure>
    <div>
      <h2>What we're building</h2>
      <a href="#what-were-building">
        
      </a>
    </div>
    <p>Prospector is a tool that allows Cloudflare's SEO experts to monitor our blog and marketing site for specific keywords. When a keyword is matched on a page, Prospector will notify an email address. This allows our SEO experts to stay informed of any changes to our website, and take action accordingly.</p><p><a href="/sending-email-from-workers-with-mailchannels/">Using MailChannels' integration with Workers</a>, we can quickly and easily send emails from our application using a single API call. This allows us to focus on the core functionality of the application, and not worry about the details of sending emails.</p><p>Prospector uses Cloudflare Workers as the user-facing API for the application. It uses D1 to store and retrieve data in real-time, and Queues to handle the fetching of all URLs and the notification process. We've also included an intuitive user interface for the application, which is built with HTML, CSS, and JavaScript.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/339I5LA7CAciIdEOyfKkl3/fc05a5f4b5d41ef794d638df5e41d1fb/image3-1.png" />
            
            </figure>
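<p>For illustration, a notification email for a keyword match might be assembled like this. The payload shape follows the MailChannels send API described in the linked post, but the field names, sender address, and helper function here are assumptions to verify against current documentation rather than Prospector's actual code:</p>

```typescript
// Build the JSON body for a MailChannels send request. The payload
// shape is based on the MailChannels transactional API; verify field
// names against current documentation. The sender address and this
// helper are hypothetical, not Prospector's actual implementation.
interface KeywordMatch {
  keyword: string;
  url: string;
}

function buildNotificationEmail(to: string, match: KeywordMatch) {
  return {
    personalizations: [{ to: [{ email: to }] }],
    from: { email: "notifications@example.com", name: "Prospector" }, // hypothetical sender
    subject: `New match for "${match.keyword}"`,
    content: [
      {
        type: "text/plain",
        value: `The keyword "${match.keyword}" was found on ${match.url}.`,
      },
    ],
  };
}

// In a Worker, the payload would then be sent with a single API call:
// await fetch("https://api.mailchannels.net/tx/v1/send", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildNotificationEmail(email, match)),
// });
```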
    <div>
      <h2>Why we built it</h2>
      <a href="#why-we-built-it">
        
      </a>
    </div>
    <p>It is widely known in SEO that both internal and external links help Google and other search engines understand what a website is about, which impacts keyword rankings. Not only do these links guide readers to additional helpful information, they also allow <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/">web crawlers</a> for search engines to discover and index content on the site.</p><p>Acquiring external links is often a time-consuming process and at the discretion of third parties, whereas website owners typically have much more control over internal links. As a result, internal linking is one of the most useful levers available in SEO.</p><p>In an ideal world, every piece of content would be fully formed upon publication, replete with helpful internal links throughout the piece. However, this is often not the case. Many times, content is edited after the fact or additional pieces of relevant content come along after initial publication. These situations result in missed opportunities for internal linking.</p><p>Like other large organizations, Cloudflare has published thousands of blogs and web pages over the years. We share new content every time a product/technology is introduced and improved. Ultimately, that also means it's become more challenging to identify opportunities for internal linking in a timely, automated fashion. We needed a tool that would allow us to identify internal linking opportunities as they appear, and speed up the time it takes to identify new internal linking opportunities.</p><p>Although we tested several tools that might solve this problem, we found that they were limited in several ways. First, some tools only scanned the first 2,000 characters of a web page. Any opportunities found beyond that limit would not be detected. Next, some tools did not allow us to limit searches to certain areas of the site and resulted in many false positives. 
Finally, other potential solutions required manual operation, leaving the process at the mercy of human memory.</p><p>To solve our problem (and ultimately, improve our SEO), we needed an automated tool that could discover and notify us of new instances of targeted phrases on a specified range of pages.</p>
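<p>The core check such a tool needs is easy to state: scan the entire page, not just the first 2,000 characters, for whole-word keyword matches. A minimal sketch, illustrative only and not Prospector's actual matching logic:</p>

```typescript
// Scan a full page of text for a keyword, case-insensitively, matching
// on word boundaries to avoid false positives like "workers" inside
// "coworkers". Illustrative only -- not Prospector's actual code.
function findKeyword(pageText: string, keyword: string): number[] {
  // Escape regex metacharacters so the keyword is matched literally.
  const escaped = keyword.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const re = new RegExp(`\\b${escaped}\\b`, "gi");
  const positions: number[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(pageText)) !== null) {
    positions.push(m.index); // record where each match begins
  }
  return positions;
}
```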
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    
    <div>
      <h3>Data model</h3>
      <a href="#data-model">
        
      </a>
    </div>
    <p>First, let's explore the data model for Prospector. We have two main tables: <code>notifiers</code> and <code>urls</code>. The <code>notifiers</code> table stores the email address and keyword that we want to monitor. The <code>urls</code> table stores the URL and sitemap that we want to scrape. The <code>notifiers</code> table has a one-to-many relationship with the <code>urls</code> table, meaning that each notifier can have many URLs associated with it.</p><p>In addition, we have a <code>sitemaps</code> table that stores the sitemap URLs that we've scraped. Many larger websites don't just have a single sitemap: the Cloudflare blog, for instance, has a primary sitemap that contains four sub-sitemaps. When the application is deployed, a primary sitemap is provided as configuration, and Prospector will parse it to find all of the sub-sitemaps.</p><p>Finally, <code>notifier_matches</code> is a table that stores the matches between a notifier and a URL. This allows us to keep track of which URLs have already been matched, and which ones still need to be processed. When a match has been found, the <code>notifier_matches</code> table is updated to reflect that, and repeat matches for that keyword are no longer processed. This saves our SEO experts from a crowded inbox, and allows them to focus and act on new matches.</p><p><b>Connecting the pieces with Cloudflare Queues</b></p><p>Cloudflare Queues acts as the work queue for Prospector. When a new notifier is added, a new job is created for it and added to the queue. Behind the scenes, Queues will distribute the work across multiple Workers, allowing us to scale the application as needed. When a job is processed, Prospector will scrape the URL and check for matches. If a match is found, Prospector will send an email to the notifier's email address.</p><p>Using the Cron Triggers functionality in Workers, we can schedule the scraping process to run at a regular interval - by default, once a day. 
This allows us to keep our data up-to-date, and ensures that we're always notified of any changes to our website. It also allows the end-user to configure when they receive emails in case they want to receive them more or less frequently, or at the beginning of their workday.</p><p>The Module Workers syntax for Workers makes accessing the application bindings - the constants available in the application for querying D1, Queues, and other services - incredibly easy. <code>src/index.ts</code>, the entrypoint for the application, looks like this:</p>
            <pre><code>import { DBUrl, Env } from './types'

import {
  handleQueuedUrl,
  scheduled,
} from './functions';

import h from './api'

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext
  ): Promise&lt;Response&gt; {
    return h.fetch(request, env, ctx)
  },

  async queue(
    batch: MessageBatch&lt;Error&gt;,
    env: Env
  ): Promise&lt;void&gt; {
    for (const message of batch.messages) {
      const url: DBUrl = JSON.parse(message.body)
      await handleQueuedUrl(url, env.DB)
    }
  },

  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ): Promise&lt;void&gt; {
    await scheduled({
      authToken: env.AUTH_TOKEN,
      db: env.DB,
      queue: env.QUEUE,
      sitemapUrl: env.SITEMAP_URL,
    })
  }
};</code></pre>
            <p>With this syntax, we can see where the various events incoming to the application - the <code>fetch</code> event, the <code>queue</code> event, and the <code>scheduled</code> event - are handled. The <code>fetch</code> event is the main entrypoint for the application, and is where we handle all of the API routes. The <code>queue</code> event is where we handle the work that's been added to the queue, and the <code>scheduled</code> event is where we handle the scheduled scraping process.</p><p>Central to the application, of course, is Workers - acting as the API gateway and coordinator. We've elected to use the popular open-source framework <a href="https://honojs.dev/">Hono</a>, an Express-style API for Workers, in Prospector. With Hono, we can quickly map out a REST API in just a few lines of code. Here's an example of a few API routes and how they're defined with Hono:</p>
            <pre><code>const app = new Hono()

app.get("/", (context) =&gt; {
  return context.html(index)
})

app.post("/notifiers", async context =&gt; {
  try {
    const { keyword, email } = await context.req.parseBody()
    await context.env.DB.prepare(
      "insert into notifiers (keyword, email) values (?, ?)"
    ).bind(keyword, email).run()
    return context.redirect('/')
  } catch (err) {
    context.status(500)
    return context.text("Something went wrong")
  }
})

app.get('/sitemaps', async (context) =&gt; {
  const query = await context.env.DB.prepare(
    "select * from sitemaps"
  ).all();
  const sitemaps: Array&lt;DBSitemap&gt; = query.results
  return context.json(sitemaps)
})</code></pre>
            <p>Crucial to the development of Prospector are the improved TypeScript bindings for Workers. <a href="/improving-workers-types/">As announced in November of last year</a>, TypeScript bindings for Workers are now automatically generated based on <a href="/workerd-open-source-workers-runtime/">our open source runtime, <code>workerd</code></a>. This means that whenever we use the types provided from the <a href="https://github.com/cloudflare/workers-types"><code>@cloudflare/workers-types</code> package</a> in our application, we can be sure that the types are always up-to-date.</p><p>With these bindings, we can define the types for our environment variables, and use them in our application. Here's an example of the <code>Env</code> type, which defines the environment variables that we use in the application:</p>
            <pre><code>export interface Env {
  AUTH_TOKEN: string
  DB: D1Database
  QUEUE: Queue
  SITEMAP_URL: string
}</code></pre>
            <p>Notice the types of the <code>DB</code> and <code>QUEUE</code> bindings - <code>D1Database</code> and <code>Queue</code>, respectively. These types are automatically generated, complete with type signatures for each method inside of the D1 and Queue APIs. This means that we can be sure that we're using the correct methods, and that we're passing the correct arguments to them, directly from our text editor - without having to refer to the documentation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uqW1v9MEpdsEMieihxKFk/e865da1397167301f051616d61f83a1a/image4.png" />
            
            </figure>
    <div>
      <h2>How to use it</h2>
      <a href="#how-to-use-it">
        
      </a>
    </div>
    <p>One of my favorite things about Workers is that deploying applications is quick and easy. Using <code>wrangler.toml</code> and some simple build scripts, we can deploy a fully-functional application in just a few minutes. Prospector is no different. With just a few commands, we can create the necessary D1 database and Queues instance, and deploy the application to our account.</p><p>First, you'll need to clone the project from our cloudflare/templates repository:</p><p><code>git clone $URL</code></p><p>If you haven't installed Wrangler yet, you can do so by running:</p><p><code>npm install wrangler -g</code></p><p>With Wrangler installed, you can log in to your account by running:</p><p><code>wrangler login</code></p><p>After you've done that, you'll need to create a new D1 database, as well as a Queues instance. You can do this by running the following commands:</p><p><code>wrangler d1 create $DATABASE_NAME</code></p><p><code>wrangler queues create $QUEUE_NAME</code></p><p>Configure your <code>wrangler.toml</code> with the appropriate bindings (see the README for an example):</p>
            <pre><code>[[ d1_databases ]]
binding = "DB"
database_name = "keyword-tracker-db"
database_id = "ab4828aa-723b-4a77-a3f2-a2e6a21c4f87"
preview_database_id = "8a77a074-8631-48ca-ba41-a00d0206de32"
	
[[queues.producers]]
  queue = "queue"
  binding = "QUEUE"

[[queues.consumers]]
  queue = "queue"
  max_batch_size = 10
  max_batch_timeout = 30
  max_retries = 10
  dead_letter_queue = "queue-dlq"</code></pre>
            <p>Next, you can run the <code>bin/migrate</code> script to create the tables in your database:</p><p><code>bin/migrate</code></p><p>This will create all the needed tables in your database, both in development (locally) and in production. Note that you'll even see the creation of an honest-to-goodness <code>.sqlite3</code> file in your project directory - this is the local development database, which you can connect to directly using the same SQLite CLI that you're used to:</p><pre><code>$ sqlite3 .wrangler/state/d1/DB.sqlite3
sqlite&gt; .tables
notifier_matches  notifiers  sitemaps  urls</code></pre><p>Finally, you can deploy the application to your account:</p><p><code>npm run deploy</code></p><p>With a deployed application, you can visit your Workers URL to see the user interface. From there, you can add new notifiers and URLs, and see the results of your scraping process. When a new keyword match is found, you’ll receive an email with the details of the match instantly:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2P59U5wQRoysgE8nWLbwLD/b1e7240ddd90dd36676163b201998cb3/image2-1.png" />
            
            </figure>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>For some time, many kinds of applications have been hard to build on Workers without relational data or background task tooling. Now, with D1 and Queues, we can build applications that seamlessly combine real-time user interfaces, geographically distributed data, background processing, and more, all using the same developer ergonomics and low latency that Workers is known for.</p><p>D1 has been crucial for building this application. On larger sites, the number of URLs that need to be scraped can be quite large. If we were to use Workers KV, our key-value store, for storing this data, we would quickly struggle with how to model, retrieve, and update the data needed for this use-case. With D1, we can build relational data models and quickly query <i>just</i> the data we need for each queued processing task.</p><p>Using these tools, developers can build internal tools and applications for their companies that are more powerful and more scalable than ever before. With the integration of Cloudflare's Zero Trust suite, developers can make these applications secure by default, and deploy them to Cloudflare's global network. This allows developers to build applications that are fast, secure, and reliable, all without having to worry about the underlying infrastructure.</p><p>Prospector is a great example of how easy it is to build applications on Cloudflare Workers. With the recent addition of D1 and Queues, we've been able to build fully-functional applications that require real-time data and background processing in just a few hours. 
We're excited to share the open-source code for Prospector, and we'd love to hear your feedback on the project.</p><p>If you have any questions, feel free to reach out to us on Twitter at <a href="https://twitter.com/cloudflaredev">@cloudflaredev</a>, or join us in the Cloudflare Workers Discord community, which recently hit 20k members and is a great place to ask questions and get help from other developers.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3Ye7OiZdwDby0AGqA7LQAh</guid>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Neal Kindschi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Build applications of any size on Cloudflare with the Queues open beta]]></title>
            <link>https://blog.cloudflare.com/cloudflare-queues-open-beta/</link>
            <pubDate>Mon, 14 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Queues enables developers to build performant and resilient distributed applications that span the globe. Any developer with a paid Workers plan can enroll in the Open Beta and start building today! ]]></description>
            <content:encoded><![CDATA[ <p>Message queues are a fundamental building block of cloud applications—and today the <a href="https://developers.cloudflare.com/queues/">Cloudflare Queues</a> open beta brings queues to every developer building for Region: Earth. Cloudflare Queues follows <a href="https://www.cloudflare.com/products/workers/">Cloudflare Workers</a> and <a href="https://www.cloudflare.com/products/r2/">Cloudflare R2</a> in a long line of <a href="https://www.cloudflare.com/application-services/">innovative application services</a> built for the Workers Developer Platform, enabling developers to build more complex applications without configuring networks, choosing regions, or estimating capacity. Best of all, like many other Cloudflare services, there are no egregious egress charges!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7L0APU7xTJNF6ByoHcvXpY/581bfdcb37394d7e30ce5bad852b2c58/image8.png" />
            
            </figure><p>If you’ve ever purchased something online and seen a message like “you will receive confirmation of your order shortly,” you’ve interacted with a queue. When you completed your order, your shopping cart and information were stored and the order was placed into a queue. At some later point, the order fulfillment service picks and packs your items and hands it off to the shipping service—again, via a queue. Your order may sit for only a minute, or much longer if an item is out of stock or a warehouse is busy, and queues enable all of this functionality.</p><p>Message queues are great at decoupling components of applications, like the checkout and order fulfillment services for an <a href="https://www.cloudflare.com/ecommerce/">ecommerce site</a>. Decoupled services are easier to reason about, deploy, and implement, allowing you to ship features that delight your customers without worrying about synchronizing complex deployments.</p><p>Queues also allow you to batch and buffer calls to downstream services and <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a>. This post shows you how to enroll in the open beta, walks you through a practical example of using Queues to build a log sink, and tells you how we built Queues using other Cloudflare services. You’ll also learn a bit about the roadmap for the open beta.</p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    
    <div>
      <h3>Enrolling in the open beta</h3>
      <a href="#enrolling-in-the-open-beta">
        
      </a>
    </div>
    <p>Open the <a href="https://dash.cloudflare.com">Cloudflare dashboard</a> and navigate to the <i>Workers</i> section. Select <i>Queues</i> from the <i>Workers</i> navigation menu and choose <i>Enable Queues Beta</i>.</p><p>Review your order and choose <i>Proceed to Payment Details</i>.</p><p><b>Note: If you are not already subscribed to a</b> <a href="https://www.cloudflare.com/plans/developer-platform/#overview"><b>Workers Paid Plan</b></a><b>, one will be added to your order automatically.</b></p><p>Enter your payment details and choose <i>Complete Purchase</i>. That’s it - you’re enrolled in the open beta! Choose <i>Return to Queues</i> on the confirmation page to return to the Cloudflare Queues home page.</p>
    <div>
      <h3>Creating your first queue</h3>
      <a href="#creating-your-first-queue">
        
      </a>
    </div>
    <p>After enabling the open beta, open the Queues home page and choose <i>Create Queue</i>. Name your queue <code>my-first-queue</code> and choose <i>Create queue</i>. That’s all there is to it!</p><p>The dashboard displays a confirmation message along with a list of all the queues in your account.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/8jcg7ga1WIbqJplp8XRsP/cbc7cdf9420a4d51c714aa8b4b96bd80/image1-18.png" />
            
            </figure><p><b>Note: As of the writing of this blog post, each account is limited to ten queues. We intend to raise this limit as we build towards general availability.</b></p>
    <div>
      <h3>Managing your queues with Wrangler</h3>
      <a href="#managing-your-queues-with-wrangler">
        
      </a>
    </div>
    <p>You can also manage your queues from the command line using <a href="https://github.com/cloudflare/wrangler2">Wrangler</a>, the CLI for Cloudflare Workers. In this section, you build a simple but complete application implementing a log aggregator or sink to learn how to integrate Workers, Queues, and R2.</p><p><b>Setting up resources</b></p><p>To create this application, you need access to a Cloudflare Workers account with a subscription plan, access to the Queues open beta, and an R2 plan.</p><p><a href="https://developers.cloudflare.com/workers/get-started/guide/#1-install-wrangler-workers-cli">Install</a> and <a href="https://developers.cloudflare.com/workers/get-started/guide/#2-authenticate-wrangler">authenticate</a> Wrangler, then run <code>wrangler queues create log-sink</code> from the command line to create a queue for your application.</p><p>Run <code>wrangler queues list</code> and note that Wrangler displays your new queue.</p><p><b>Note: The following screenshots use the</b> <a href="https://stedolan.github.io/jq/"><b>jq</b></a> <b>utility to format the JSON output of wrangler commands. You</b> <b><i>do not</i></b> <b>need to install jq to complete this application.</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5DzA0apSCcSq7lDHPCaFwt/d44c7290077d4ddc9e3283614ea7bbb9/8-create-and-list-a-queue.png" />
            
            </figure><p>Finally, run <code>wrangler r2 bucket create log-sink</code> to create an R2 bucket to store your aggregated logs. After the bucket is created, run <code>wrangler r2 bucket list</code> to see your new bucket.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QT0a9c73sO2l5pTUbROs9/ab6b9e7858c8f2e90149839c275c7aa6/9-create-and-list-a-bucket.png" />
            
            </figure><p><b>Creating your Worker</b></p><p>Next, create a Workers application with two handlers: a <code>fetch()</code> handler to receive individual incoming log lines and a <code>queue()</code> handler to aggregate a batch of logs and write the batch to R2.</p><p>In an empty directory, run <code>wrangler init</code> to create a new Cloudflare Workers application. When prompted:</p><ul><li><p>Choose “y” to create a new <i>package.json</i></p></li><li><p>Choose “y” to use TypeScript</p></li><li><p>Choose “Fetch handler” to create a new Worker at <i>src/index.ts</i></p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3PiZnMr0Q4S0fzsLDnHg2G/002b64768711ba224e8cb3a9618607d2/10-wrangler-init.png" />
            
            </figure><p>Open <i>wrangler.toml</i> and replace the contents with the following:</p>
    <div>
      <h4><i><b>wrangler.toml</b></i></h4>
      <a href="#wrangler-toml">
        
      </a>
    </div>
    
            <pre><code>name = "queues-open-beta"
main = "src/index.ts"
compatibility_date = "2022-11-03"
 
 
[[queues.producers]]
 queue = "log-sink"
 binding = "BUFFER"
 
[[queues.consumers]]
 queue = "log-sink"
 max_batch_size = 100
 max_batch_timeout = 30
 
[[r2_buckets]]
 bucket_name = "log-sink"
 binding = "LOG_BUCKET"</code></pre>
            <p>The <code>[[queues.producers]]</code> section creates a <i>producer</i> binding for the Worker at <i>src/index.ts</i> called <code>BUFFER</code> that refers to the <i>log-sink</i> queue. This Worker can place messages onto the <i>log-sink</i> queue by calling <code>await env.BUFFER.send(log);</code></p><p>The <code>[[queues.consumers]]</code> section creates a <i>consumer</i> binding for the <i>log-sink</i> queue for your Worker. Once the <i>log-sink</i> queue has a batch ready to be processed (or <i>consumed</i>), the Workers runtime will look for the <code>queue()</code> event handler in <i>src/index.ts</i> and invoke it, passing the batch as an argument. The <i>queue()</i> function signature looks as follows:</p><p><code>async queue(batch: MessageBatch&lt;Error&gt;, env: Env): Promise&lt;void&gt; {</code></p><p>The final binding in your <i>wrangler.toml</i> creates a binding for the <i>log-sink</i> R2 bucket that makes the bucket available to your Worker via <i>env.LOG_BUCKET</i>.</p>
    <div>
      <h4><i><b>src/index.ts</b></i></h4>
      <a href="#src-index-ts">
        
      </a>
    </div>
    <p>Open <i>src/index.ts</i> and replace the contents with the following code:</p>
            <pre><code>export interface Env {
  BUFFER: Queue;
  LOG_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise&lt;Response&gt; {
    let log = await request.json();
    await env.BUFFER.send(log);
    return new Response("Success!");
  },
  async queue(batch: MessageBatch&lt;Error&gt;, env: Env): Promise&lt;void&gt; {
    const logBatch = JSON.stringify(batch.messages);
    await env.LOG_BUCKET.put(`logs/${Date.now()}.log.json`, logBatch);
  },
};
</code></pre>
            <p>The <code>export interface Env</code> section exposes the two bindings you defined in <i>wrangler.toml</i>: a queue named <i>BUFFER</i> and an R2 bucket named <i>LOG_BUCKET</i>.</p><p>The <code>fetch()</code> handler parses the request body as JSON, places the result onto the <i>BUFFER</i> queue, then returns an HTTP 200 response with the message <i>Success!</i></p><p>The <code>queue()</code> handler receives a batch of messages containing log entries, serializes the batch to a JSON string, then writes that string to the <i>LOG_BUCKET</i> R2 bucket, using the current timestamp in the filename.</p><p><b>Publishing and running your application</b></p><p>To publish your log sink application, run <code>wrangler publish</code>. Wrangler packages your application and its dependencies and deploys it to Cloudflare’s global network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/131yCNzIgg4qaU5LKSFrcu/eddd596bcf02e369163e057abde27944/11-wrangler-publish.png" />
            
            </figure><p>Note that the output of <code>wrangler publish</code> includes the <i>BUFFER</i> queue binding, indicating that this Worker is a <i>producer</i> and can place messages onto the queue. The final line of output also indicates that this Worker is a <i>consumer</i> for the <i>log-sink</i> queue and can read and remove messages from the queue.</p><p>Use your favorite API client, like <a href="https://curl.se/">curl</a>, <a href="https://httpie.io/">httpie</a>, or <a href="https://www.postman.com/">Postman</a>, to send JSON log entries to the published URL for your Worker via HTTP POST requests. Navigate to your <i>log-sink</i> R2 bucket in the <a href="https://dash.cloudflare.com">Cloudflare dashboard</a> and note that the <i>logs</i> prefix is now populated with aggregated logs from your request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6W4jmz2ezkbUwrs0uZi4xu/a8f2f124feb5c9a121c38fbfb8a1d1df/image4-9.png" />
            
            </figure><p>Download and open one of the logfiles to view the JSON array inside. That’s it: with fewer than 45 lines of code and config, you’ve built a log aggregator to ingest and store data in R2!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dw6XLyx7REjzje0Y0i7as/f154e7ed56152651c2f664b0dec9cebe/13-aggregated-log-content.png" />
            
            </figure>
    <div>
      <h2>Buffering R2 writes with Queues in the real world</h2>
      <a href="#buffering-r2-writes-with-queues-in-the-real-world">
        
      </a>
    </div>
    <p>In the previous example, you created a simple Workers application that buffers data into batches before writing the batches to R2. This reduces the number of calls to the downstream service, reducing load on the service and saving you money.</p><p><a href="https://uuid.rocks/">UUID.rocks</a>, the fastest UUIDv4-as-a-service, wanted to confirm whether their API truly generates unique IDs on every request. With 80,000 requests per day, it wasn’t trivial to find out. They decided to write every generated UUID to R2 to compare IDs across the entire population. However, writing directly to R2 at the rate UUIDs are generated is inefficient and expensive.</p><p>To reduce writes and costs, UUID.rocks introduced Cloudflare Queues into their UUID generation workflow. Each time a UUID is requested, a Worker places the value of the UUID into a queue. Once enough messages have been received, the buffered batch of JSON objects is written to R2. This avoids invoking an R2 write on every API call, saving costs and making the data easier to process later.</p><p>The <a href="https://github.com/jahands/uuid-queue">uuid-queue application</a> consists of a single Worker with three event handlers:</p><ol><li><p>A fetch handler that receives a JSON object representing the generated UUID and writes it to a Cloudflare Queue.</p></li><li><p>A queue handler that writes batches of JSON objects to R2 in CSV format.</p></li><li><p>A scheduled handler that combines batches from the previous hour into a single file for future processing.</p></li></ol><p>To view the source or deploy this application into your own account, <a href="https://github.com/jahands/uuid-queue">visit the repository on GitHub</a>.</p>
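    <p>The batching step in an application like this can be sketched as a small pure function. The following is an illustrative sketch only: the message shape <code>{ uuid, generatedAt }</code> and the names below are hypothetical, not taken from the uuid-queue repository.</p>

```typescript
// Illustrative sketch: turn a batch of queued UUID messages into one
// CSV payload, so a single R2 write replaces one write per UUID.
// The message shape and names here are assumptions, not the real repo's.
interface UuidMessage {
  body: { uuid: string; generatedAt: string };
}

function toCsv(messages: UuidMessage[]): string {
  const header = "uuid,generatedAt";
  const rows = messages.map((m) => m.body.uuid + "," + m.body.generatedAt);
  return [header, ...rows].join("\n");
}
```

    <p>In a <code>queue()</code> handler, the result of <code>toCsv(batch.messages)</code> would be written to R2 once per batch rather than once per generated UUID.</p>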
    <div>
      <h2>How we built Cloudflare Queues</h2>
      <a href="#how-we-built-cloudflare-queues">
        
      </a>
    </div>
    <p>Like many of the Cloudflare services you use and love, we built Queues by composing other Cloudflare services like Workers and Durable Objects. This enabled us to rapidly solve two difficult challenges: securely invoking your Worker from our own service and maintaining a strongly consistent state at scale. Several recent Cloudflare innovations helped us overcome these challenges.</p>
    <div>
      <h3>Securely invoking your Worker</h3>
      <a href="#securely-invoking-your-worker">
        
      </a>
    </div>
    <p>In the <i>Before Times</i> (early 2022), invoking one Worker from another Worker meant a fresh HTTP call from inside your script. This was a brittle experience, requiring you to know your downstream endpoint at deployment time. Nested invocations ran as HTTP calls, passing all the way through the Cloudflare network a second time and adding latency to your request. It also meant security was on you: if you wanted to control how that second Worker was invoked, you had to create and implement your own authentication and authorization scheme.</p><p><b>Worker-to-Worker requests</b> During Platform Week in May 2022, <a href="/service-bindings-ga/">Service Worker Bindings</a> entered general availability. With Service Worker Bindings, your Worker code has a binding to another Worker in your account that you invoke directly, avoiding the network penalty of a nested HTTP call. This removes the performance and security barriers discussed previously, but it still requires that you hard-code your nested Worker at compile time. You can think of this setup as “static dispatch,” where your Worker has a static reference to another Worker where it can dispatch events.</p><p><b>Dynamic dispatch</b> As Service Worker Bindings entered general availability, we also <a href="/workers-for-platforms-ga/">launched a closed beta</a> of Workers for Platforms, our tool suite to help make any product programmable. With Workers for Platforms, software as a service (SaaS) and platform providers can allow users to upload their own scripts and run them safely via Cloudflare Workers. User scripts are not known at compile time, but are <i>dynamically dispatched</i> at runtime.</p><p><a href="/workers-for-platforms-ga/">Workers for Platforms entered general availability</a> during <a href="https://www.cloudflare.com/ga-week/">GA week</a> in September 2022, and is available for all customers to build with today.</p><p>With dynamic dispatch generally available, we now have the ability to discover and invoke Workers at runtime without the performance penalty of HTTP traffic over the network. We use dynamic dispatch to invoke your queue’s consumer Worker whenever a message or batch of messages is ready to be processed.</p>
    <div>
      <h3>Consistent stateful data with Durable Objects</h3>
      <a href="#consistent-stateful-data-with-durable-objects">
        
      </a>
    </div>
    <p>Another challenge we faced was storing messages durably without sacrificing performance. We took the design goal of ensuring that all messages were persisted to disk in multiple locations before we confirmed receipt of the message to the user. Again, we turned to an existing Cloudflare product—<a href="https://developers.cloudflare.com/workers/learning/using-durable-objects/">Durable Objects</a>—which <a href="/durable-objects-ga/">entered general availability nearly one year ago today</a>.</p><p>Durable Objects are named instances of JavaScript classes that are guaranteed to be unique across Cloudflare’s entire network. Durable Objects process messages in order on a single thread, allowing for coordination across messages, and provide a strongly consistent storage API for key-value pairs. Offloading the hard problem of storing data durably in a distributed environment to Durable Objects allowed us to reduce the time to build Queues and prepare it for open beta.</p>
    <div>
      <h2>Open beta roadmap</h2>
      <a href="#open-beta-roadmap">
        
      </a>
    </div>
    <p>Our open beta process empowers you to influence feature prioritization and delivery. We’ve set ambitious goals for ourselves on the path to general availability, most notably supporting unlimited throughput while maintaining 100% durability. We also have many other great features planned, like first-in first-out (FIFO) message processing and API compatibility layers to ease migrations, but we need your feedback to build what you need most, first.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare Queues is a global message queue for the Workers developer. Building with Queues makes your applications more performant, resilient, and cost-effective—but we’re not done yet. <a href="https://dash.cloudflare.com">Join the Open Beta today</a> and <a href="https://discord.gg/aTsevRH3pG">share your feedback</a> to help shape the Queues roadmap as we deliver application integration services for the next-generation cloud.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">44r8mGfFRsG9yOGC0CDQnu</guid>
            <dc:creator>Rob Sutter</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Queues: globally distributed queues without the egress fees]]></title>
            <link>https://blog.cloudflare.com/introducing-cloudflare-queues/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Queues is a message queuing service that allows applications to reliably send and receive messages using Cloudflare Workers ]]></description>
            <content:encoded><![CDATA[ <p>Developers continue to build more complex applications on Cloudflare’s Developer Platform. We started with Workers, which brought compute, then introduced KV, Durable Objects, R2, and soon D1, which enabled persistence. Now, as we enable developers to build larger, more sophisticated, and more reliable applications, it’s time to unlock another foundational building block: messaging.</p><p>Thus, we’re excited to announce the private beta of Cloudflare Queues, a global message queuing service that allows applications to reliably send and receive messages using <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>. It offers at-least-once message delivery, supports batching of messages, and charges no bandwidth egress fees. Let’s queue up the details.</p>
    <div>
      <h3>What is a Queue?</h3>
      <a href="#what-is-a-queue">
        
      </a>
    </div>
    <p>Queues enable developers to send and receive messages with a guarantee of delivery. Think of it like the postal service for the Internet. You give it a message, then it handles all the hard work to ensure the message gets delivered in a timely manner. Unlike the <i>real</i> postal service, where it’s possible for a message to get lost, Queues provide a guarantee that each message is delivered at least once, no matter what. This lets you focus on your application, rather than worry about the chaos of transactions, retries, and backoffs to prevent data loss.</p><p>Queues also allow you to scale your application to process large volumes of data. Imagine a million people sending you a package in the mail at the same time. Instead of a million postal workers suddenly showing up at your front door, you would want them to aggregate your mail into batches, then ask you when you’re ready to receive each batch. This lets you decouple and spread load among services that have different throughput characteristics.</p>
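    <p>To make “at least once” concrete, here is a toy in-memory model. It is an illustration of the delivery semantics only, not how Queues is implemented: a message is removed from the queue only once its handler succeeds, so a failure leaves it in place to be redelivered, and a handler that fails partway through its work may therefore see the same message more than once.</p>

```typescript
// Toy model of at-least-once delivery. A message is removed only after
// its handler reports success; a failed attempt re-enqueues the message.
// This illustrates the semantics, not the Cloudflare Queues implementation.
function deliverAtLeastOnce(
  messages: string[],
  handler: (msg: string) => boolean, // returning false simulates a failure
): string[] {
  const pending = [...messages];
  const delivered: string[] = [];
  while (pending.length > 0) {
    const msg = pending.shift()!;
    if (handler(msg)) {
      delivered.push(msg); // acknowledged: removed for good
    } else {
      pending.push(msg); // not acknowledged: will be redelivered
    }
  }
  return delivered;
}
```

    <p>A handler that fails on its first attempt at a message simply sees that message again later, so every message ends up processed at least once.</p>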
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>Queues are integrated into the fabric of the Cloudflare Workers runtime, with simple APIs that make it easy to send and receive messages. First, you’ll want to send messages to the Queue. You can do this by defining a Worker, referred to as a "producer," which has a binding to the Queue.</p><p>In the example below, a Worker catches JavaScript exceptions and sends them to a Queue. You might notice that any object can be sent to the Queue, including an error. That’s because messages are encoded using the standard <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm">structuredClone()</a> algorithm.</p>
            <pre><code>export default {
  async fetch(request: Request, env: Environment) {
     try {
       return await doRequest(request);
     } catch (error) {
       await env.ERROR_QUEUE.send(error);
       return new Response(error.stack, { status: 500 });
     }
  }
}</code></pre>
            <p>Second, you’ll want to process messages in the Queue. You can do this by defining a Worker, referred to as the "consumer," which will receive messages from the Queue. To facilitate this, there is a new type of event handler, named "queue," which receives the messages sent to the consumer.</p><p>This Worker is configured to receive messages from the previous example. It appends the stack trace of each Error to a log file, then saves it to an R2 bucket.</p>
            <pre><code>export default {
  async queue(batch: MessageBatch&lt;Error&gt;, env: Environment) {
     let logs = "";
     for (const message of batch.messages) {
        logs += message.body.stack;
     }
     await env.ERROR_BUCKET.put(`errors/${Date.now()}.log`, logs);
  }
}</code></pre>
            <p>Configuration is also easy. You can change the message batch size, message retries, delivery wait time, and dead-letter queue. Here’s a snippet of the <code>wrangler.toml</code> configuration when deploying with <a href="https://github.com/cloudflare/wrangler2">wrangler</a>, our command-line interface.</p>
            <pre><code>name = "my-producer"
[queues]
producers = [{ queue = "errors", binding = "ERROR_QUEUE" }]
# ---
name = "my-consumer"
[queues]
consumers = [{ queue = "errors", max_batch_size = 100, max_retries = 3 }]</code></pre>
            <p>Above are two different <code>wrangler.toml</code>s, one for a producer and another for a consumer. It is also possible for a producer and consumer to be implemented by the same Worker. To see the full list of options and examples, see the <a href="https://developers.cloudflare.com/queues">documentation</a>.</p>
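            <p>Since a producer and a consumer can be implemented by the same Worker, the two snippets above can be merged into one file. The following is a sketch only, assuming the <code>[queues]</code> table accepts both keys at once exactly as they appear in the separate snippets; see the documentation for the authoritative syntax.</p>

```toml
name = "my-worker"
[queues]
# This Worker both sends to and consumes from the "errors" queue.
producers = [{ queue = "errors", binding = "ERROR_QUEUE" }]
consumers = [{ queue = "errors", max_batch_size = 100, max_retries = 3 }]
```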
    <div>
      <h3>What can you build with it?</h3>
      <a href="#what-can-you-build-with-it">
        
      </a>
    </div>
    <p>You can use Cloudflare Queues to defer tasks and guarantee they get processed, decouple load among different services, batch events and process them together, and send messages from Worker to Worker.</p><p>To demonstrate, we’ve put together a demo <a href="https://github.com/Electroid/queues-demo#cloudflare-queues-demo">application</a> that you can run on your local machine using <a href="https://github.com/cloudflare/wrangler2">wrangler</a>. It shows how Queues can batch messages and handle failures in your code. Here’s a preview of it in action:</p><div></div><p>In addition to batching, here are other examples of what you can build with Queues:</p><ul><li><p>Off-load tasks from the critical path of a Workers request.</p></li><li><p>Guarantee messages get delivered to a service that talks HTTP.</p></li><li><p>Transform, filter, and fan out messages to multiple Queues.</p></li></ul><p>Cloudflare Queues gives you the flexibility to decide where to route messages. Instead of static configuration files that define routing keys and patterns, you can use JavaScript to define custom logic for how you filter and fan out to multiple Queues. In the next example, you can distribute work to different Queues based on the attributes of a user.</p>
            <pre><code>export default {
  async queue(batch: MessageBatch, env: Environment) {
    for (const message of batch.messages) {
      const user = message.body;
      if (isEUResident(user)) {
        await env.EU_QUEUE.send(user);
      }
      if (isForgotten(user)) {
        await env.DELETION_QUEUE.send(user);
      }
    }
  }
}</code></pre>
            <p>We will also support integrations with Cloudflare products, like R2. For example, you might configure an R2 bucket to send lifecycle events to a Queue or archive messages to an R2 bucket for long-term storage.</p>
    <div>
      <h3>How much does it cost?</h3>
      <a href="#how-much-does-it-cost">
        
      </a>
    </div>
    <p>Cloudflare Queues has a simple, transparent pricing model that’s easy to predict. It costs $0.40 per million operations, where an operation is counted for each 64 KB chunk of data that is written, read, or deleted. There are also no <a href="/aws-egregious-egress/">egregious</a> bandwidth fees for data in <i>or</i> out, unlike Amazon's SQS or Google’s Pub/Sub.</p><p>To effectively deliver a message, it usually takes three operations: one write, one read, and one acknowledgement. You can therefore estimate your usage at $1.20 per million messages delivered (calculated as 3 × $0.40).</p>
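    <p>The arithmetic above can be folded into a quick back-of-the-envelope estimator. This is a sketch based solely on the numbers in this post ($0.40 per million operations, one operation per 64 KB chunk, roughly three operations per delivered message), not an official calculator.</p>

```typescript
// Back-of-the-envelope cost estimate from the pricing described above:
// $0.40 per million operations, one operation per 64 KB chunk, and
// roughly three operations (write, read, acknowledge) per message.
const PRICE_PER_MILLION_OPS = 0.4;

function estimateCostUSD(messages: number, messageSizeKB: number): number {
  const chunksPerMessage = Math.ceil(messageSizeKB / 64);
  const opsPerMessage = 3 * chunksPerMessage; // write + read + ack
  return (messages * opsPerMessage * PRICE_PER_MILLION_OPS) / 1_000_000;
}
```

    <p>For one million messages of 64 KB or less, this reproduces the $1.20 figure quoted above; a 128 KB message counts as two chunks per operation, doubling the estimate.</p>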
    <div>
      <h3>When can I try it?</h3>
      <a href="#when-can-i-try-it">
        
      </a>
    </div>
    <p>You can <a href="http://www.cloudflare.com/lp/queues">register</a> to join the waitlist as we work towards a beta launch. You’ll have an opportunity to try it out for free. Once it’s ready, we’ll launch an open beta for everyone to try.</p><p>In the meantime, you can read the <a href="https://developers.cloudflare.com/queues">documentation</a> to view our code samples, see which features will be supported, and learn what you can build. If you’re in our developer Discord, you can stay up to date by joining the <a href="https://discord.gg/rrZXVVcKQF">#queues-beta</a> channel. If you’re an Enterprise customer, reach out to your account team to schedule a session with our Product team.</p><p>We’re excited to see what you build with Cloudflare Queues. Let the queuing begin!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3qv2Vf3jyVruQHqBPiBzh3</guid>
            <dc:creator>Ashcon Partovi</dc:creator>
        </item>
    </channel>
</rss>