
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 18:44:08 GMT</lastBuildDate>
        <item>
            <title><![CDATA[We made Workers KV up to 3x faster — here’s the data]]></title>
            <link>https://blog.cloudflare.com/faster-workers-kv/</link>
            <pubDate>Thu, 26 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance that they abandon your website.  ]]></description>
            <content:encoded><![CDATA[ <p>Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance that they abandon your website. The old adage remains as true as ever: <a href="https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates/"><u>faster websites result in higher conversion rates</u></a>. And with such outcomes tied to Internet speed, we believe a faster Internet is a better Internet.</p><p>Customers often use <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a> to provide <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a> with key-value data for configuration, routing, personalization, experimentation, or serving assets. Many of Cloudflare’s own products rely on KV for just this purpose: <a href="https://developers.cloudflare.com/pages"><u>Pages</u></a> stores static assets, <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Access</u></a> stores authentication credentials, <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> stores routing configuration, and <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> stores configuration and assets, among others. So KV’s speed affects the latency of every request to an application, throughout the entire lifecycle of a user session. </p><p>Today, we’re announcing up to 3x faster KV hot reads, with all KV operations faster by up to 20ms. And we want to pull back the curtain and show you how we did it. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67VzWOTRpMd9Dbc6ZM7M4M/cefb1d856344d9c963d4adfbe1b32fba/BLOG-2518_2.png" />
          </figure><p><sup><i>Workers KV read latency (ms) by percentile measured from Pages</i></sup></p>
    <div>
      <h2>Optimizing Workers KV’s architecture to minimize latency</h2>
      <a href="#optimizing-workers-kvs-architecture-to-minimize-latency">
        
      </a>
    </div>
    <p>At a high level, Workers KV is itself a Worker that makes requests to central storage backends, with many layers in between to properly cache and route requests across Cloudflare’s network. You can rely on Workers KV to support operations made by your Workers at any scale, and KV’s architecture will seamlessly handle your required throughput. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3pcw5pO2eeGJ1RriESJFSB/651fe26718f981eb741ad2ffb2f288c9/BLOG-2518_3.png" />
          </figure><p><sup><i>Sequence diagram of a Workers KV operation</i></sup></p><p>When your Worker makes a read operation to Workers KV, your Worker establishes a network connection within its Cloudflare region to KV’s Worker. The KV Worker then accesses the <a href="https://developers.cloudflare.com/workers/runtime-apis/cache/"><u>Cache API</u></a>, and in the event of a cache miss, retrieves the value from the storage backends. </p><p>Let’s look one level deeper at a simplified trace: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mCpF8NRgSg70p8VNTFXu8/4cabdae18285575891f49a5cd34c9ab8/BLOG-2518_4.png" />
          </figure><p><sup><i>Simplified trace of a Workers KV operation</i></sup></p><p>From the top, here are the operations completed for a KV read operation from your Worker:</p><ol><li><p>Your Worker makes a connection to Cloudflare’s network in the same data center. This incurs ~5 ms of network latency.</p></li><li><p>Upon entering Cloudflare’s network, a service called <a href="https://blog.cloudflare.com/upgrading-one-of-the-oldest-components-in-cloudflare-software-stack/"><u>Front Line (FL)</u></a> is used to process the request. This incurs ~10 ms of operational latency.</p></li><li><p>FL proxies the request to the KV Worker. The KV Worker does a cache lookup for the key being accessed. This, once again, passes through the Front Line layer, incurring an additional ~10 ms of operational latency.</p></li><li><p>Cache is stored in various backends within each region of Cloudflare’s network. A service built upon <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a>, our open-sourced Rust framework for proxying HTTP requests, routes the cache lookup to the proper cache backend.</p></li><li><p>Finally, if the cache lookup is successful, the KV read operation is resolved. Otherwise, the request reaches our storage backends, where the value is retrieved.</p></li></ol><p>Looking at these flame graphs, we saw a major opportunity: eliminating (or at least reducing) the FL overhead and cutting cache misses across the Cloudflare network would lower the latency of every KV operation.</p>
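<p>The five steps above boil down to a cache lookup with a storage fallback. Here is a minimal, self-contained sketch of that read path; <code>cacheLookup</code> and <code>storageFetch</code> are illustrative stand-ins, not real Cloudflare APIs:</p>

```javascript
// Mock layers standing in for the regional cache and the central storage
// backends; the real services sit behind FL and Pingora as described above.
const cache = new Map();
const storage = new Map([["config:site", '{"theme":"dark"}']]);

function cacheLookup(key) {
  return cache.has(key) ? cache.get(key) : null; // hypothetical stand-in
}

function storageFetch(key) {
  return storage.has(key) ? storage.get(key) : null; // hypothetical stand-in
}

// A KV read resolves from cache when possible, and otherwise falls back to
// the storage backends, filling the cache for subsequent reads.
function kvRead(key) {
  const cached = cacheLookup(key);
  if (cached !== null) return { value: cached, cacheStatus: "HIT" };
  const value = storageFetch(key);
  if (value !== null) cache.set(key, value);
  return { value, cacheStatus: "MISS" };
}
```

<p>A first read of a key misses cache and reaches storage; repeat reads of the same key resolve from cache.</p>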
    <div>
      <h3>Bypassing FL layers between Workers and services to save ~20ms</h3>
      <a href="#bypassing-fl-layers-between-workers-and-services-to-save-20ms">
        
      </a>
    </div>
    <p>A request from your Worker to KV doesn’t need to go through FL. Much of FL’s responsibility is to process and route requests from outside of Cloudflare — that’s more than is needed to handle a request from the KV binding to the KV Worker. So we skipped the Front Line altogether in both layers.</p><div>
  
</div>
<p><sup><i>Reducing latency in a Workers KV operation by removing FL layers</i></sup></p><p>To bypass the FL layer from the KV binding in your Worker, we modified the KV binding to connect directly to the KV Worker within the same Cloudflare location. Within the Workers host, we configured a C++ subpipeline to allow code from bindings to establish a direct connection with the proper routing configuration and authorization loaded. </p><p>The KV Worker also passes through the FL layer on its way to our internal <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a> service. In this case, we were able to use an internal Worker binding that allows Workers for Cloudflare services to bind directly to non-Worker services within Cloudflare’s network. With this fix, the KV Worker sets the proper cache control headers and establishes its connection to Pingora without leaving the network. </p><p>Together, both of these changes reduced latency by ~20 ms for every KV operation. </p>
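<p>The arithmetic behind that figure follows directly from the approximate per-hop costs in the trace earlier; the numbers below are the illustrative estimates from this post, not precise measurements:</p>

```javascript
// Approximate per-hop latencies from the simplified trace (illustrative).
const workerToNetwork = 5; // ~5 ms: Worker connects within the data center
const flIngress = 10;      // ~10 ms: FL between the Worker and the KV Worker
const flToCache = 10;      // ~10 ms: FL between the KV Worker and cache

const before = workerToNetwork + flIngress + flToCache;
// Direct bindings remove both FL traversals:
const after = workerToNetwork;
const saved = before - after;
console.log(saved); // 20 ms saved on every KV operation
```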
    <div>
      <h3>Implementing tiered cache to minimize requests to storage backends</h3>
      <a href="#implementing-tiered-cache-to-minimize-requests-to-storage-backends">
        
      </a>
    </div>
    <p>We also optimized KV’s architecture to reduce the number of requests that need to reach our centralized storage backends. These storage backends are further away and incur network latency, so improving the cache hit rate in regions close to your Workers significantly improves read latency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1791aSxXPH1lgOIr3RQrLz/1685f6a57d627194e76cb657cd22ddd8/BLOG-2518_5.png" />
          </figure><p><sup><i>Workers KV uses Tiered Cache to resolve operations closer to your users</i></sup></p><p>To accomplish this, we used <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/#custom-tiered-cache"><u>Tiered Cache</u></a>, and implemented a cache topology that is fine-tuned to the usage patterns of KV. With a tiered cache, requests to KV’s storage backends are cached in regional tiers in addition to local (lower) tiers. With this architecture, KV operations that may be cache misses locally may be resolved regionally, which is especially significant if you have traffic across an entire region spanning multiple Cloudflare data centers. </p><p>This significantly reduced the number of requests that needed to hit the storage backends, with ~30% of requests resolved in tiered cache instead of storage backends.</p>
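<p>Conceptually, the lookup order becomes local tier, then regional tier, then storage, with tiers backfilled on the way back. A small sketch under assumed names (<code>localTier</code>, <code>regionalTier</code>, and <code>storageBackend</code> are all illustrative, not real services):</p>

```javascript
// Each tier is mocked as a Map; the real tiers are Cloudflare cache services.
const localTier = new Map();
const regionalTier = new Map([["user:42", "cached-regionally"]]);
const storageBackend = new Map([
  ["user:42", "cached-regionally"],
  ["user:7", "from-storage"],
]);

function tieredRead(key) {
  if (localTier.has(key)) {
    return { value: localTier.get(key), resolvedAt: "local" };
  }
  if (regionalTier.has(key)) {
    const value = regionalTier.get(key);
    localTier.set(key, value); // backfill the lower tier for later reads
    return { value, resolvedAt: "regional" };
  }
  const value = storageBackend.get(key) ?? null;
  if (value !== null) {
    // Fill both tiers on the way back from the storage backends.
    regionalTier.set(key, value);
    localTier.set(key, value);
  }
  return { value, resolvedAt: "storage" };
}
```

<p>A local miss that hits the regional tier never reaches the storage backends, which is exactly the ~30% of requests described above.</p>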
    <div>
      <h2>KV’s new architecture</h2>
      <a href="#kvs-new-architecture">
        
      </a>
    </div>
    <p>As a result of these optimizations, KV operations are now simplified:</p><ol><li><p>When you read from KV in your Worker, the <a href="https://developers.cloudflare.com/kv/concepts/kv-bindings/"><u>KV binding</u></a> binds directly to KV’s Worker, saving 10 ms. </p></li><li><p>The KV Worker binds directly to the Tiered Cache service, saving another 10 ms. </p></li><li><p>Tiered Cache is used in front of storage backends, to resolve local cache misses regionally, closer to your users.</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cW0MsOH120GKUAlIUvDR4/7f574632ee81d3b955ed99bf87d86fa2/BLOG-2518_6.png" />
          </figure><p><sup><i>Sequence diagram of KV operations with new architecture</i></sup></p><p>In aggregate, these changes significantly reduced KV’s latency. 

The impact of the direct binding to cache is clearly seen in the wall time of the KV Worker, given this value measures the duration of a retrieval of a key-value pair from cache. The 90th percentile of KV Worker invocations now resolves in less than 12 ms, down from 22 ms before the direct binding to cache. That’s a 10 ms decrease in latency. </p>
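<p>For readers unfamiliar with percentile latency figures: a p90 of 12 ms means 90% of invocations completed in 12 ms or less. One common way to compute it is the nearest-rank method; the sample values below are made up for illustration, not measured data:</p>

```javascript
// Nearest-rank percentile: the smallest sample value such that at least
// p% of all samples are less than or equal to it.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative wall times (ms):
const wallTimes = [3, 4, 4, 5, 6, 7, 8, 9, 11, 21];
console.log(percentile(wallTimes, 90)); // 11
```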
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UmcRB0Afk8mHig2DrThsh/d913cd33a28c1c2b093379238a90551c/BLOG-2518_7.png" />
          </figure><p><sup><i>Wall time (ms) within the KV Worker by percentile</i></sup></p><p>These KV read operations resolve quickly because the data is cached locally in the same Cloudflare location. But what about reads that aren’t resolved locally? ~30% of these resolve regionally within the tiered cache. Reads from tiered cache are up to 100 ms faster than when resolved at central storage backends, once again contributing to making KV reads faster in aggregate.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Gz2IxlcNuDDRp3MhC4m40/ee364b710cec4332a5c307a784f34fa4/BLOG-2518_8.png" />
          </figure><p><sup><i>Wall time (ms) within the KV Worker for tiered cache vs. storage backends reads</i></sup></p><p>These graphs demonstrate the impact of direct binding from the KV binding to cache, and tiered cache. To see the impact of the direct binding from a Worker to the KV Worker, we need to look at the latencies reported by Cloudflare products that use KV.</p><p><a href="https://developers.cloudflare.com/pages/"><u>Cloudflare Pages</u></a>, which serves static assets like HTML, CSS, and scripts from KV, saw load times for fetching assets improve by up to 68%. Workers asset hosting, which we also announced as part of today’s <a href="http://blog.cloudflare.com/builder-day-2024-announcements"><u>Builder Day announcements</u></a>, gets this improved performance from day 1.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67VzWOTRpMd9Dbc6ZM7M4M/cefb1d856344d9c963d4adfbe1b32fba/BLOG-2518_2.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Pages by percentile</i></sup></p><p><a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/applications/"><u>Access</u></a> also saw their latencies for KV operations drop, with their KV read operations now 2-5x faster. These services rely on Workers KV data for configuration and routing data, so KV’s performance improvement directly contributes to making them faster on each request. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Gz2IxlcNuDDRp3MhC4m40/ee364b710cec4332a5c307a784f34fa4/BLOG-2518_8.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Queues by percentile</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1HFapaO1Gv09g9VlODrLAu/56d39207fb3dfefe258fa5e8cb8bd67b/BLOG-2518_10.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Access by percentile</i></sup></p><p>These are just some of the direct effects that a faster KV has had on other services. Across the board, requests are resolving faster thanks to KV’s faster response times. </p><p>And we have one more thing to make KV lightning fast. </p>
    <div>
      <h3>Optimizing KV’s hottest keys with an in-memory cache </h3>
      <a href="#optimizing-kvs-hottest-keys-with-an-in-memory-cache">
        
      </a>
    </div>
    <p>Less than 0.03% of keys account for nearly half of requests to the Workers KV service across all namespaces. These keys are read thousands of times per second, so making these faster has a disproportionate impact. Could these keys be resolved within the KV Worker without needing additional network hops?</p><p>Almost all of these keys are under 100 KB. At this size, it becomes possible to use the in-memory cache of the KV Worker — a limited amount of memory within the <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates"><u>main runtime process</u></a> of a Worker sandbox. And that’s exactly what we did. For the highest throughput keys across Workers KV, reads resolve without even needing to leave the Worker runtime process.</p><div>
  
</div>
<p><sup><i>Sequence diagram of KV operations with the hottest keys resolved within an in-memory cache</i></sup></p><p>As a result of these changes, KV reads for these keys, which represent over 40% of Workers KV requests globally, resolve in under a millisecond. We’re actively testing these changes internally and expect to roll this out during October to speed up the hottest key-value pairs on Workers KV.</p>
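<p>A sketch of what an in-isolate hot-key cache can look like. The 100 KB value limit mirrors the threshold mentioned above; the entry cap and the LRU eviction policy are assumptions for illustration, not a description of KV’s internals:</p>

```javascript
// In-memory cache living at module scope in the Worker isolate: reads for
// hot keys resolve without leaving the runtime process.
const MAX_VALUE_BYTES = 100 * 1024; // values under ~100 KB, per the text
const MAX_ENTRIES = 1024;           // assumed cap on isolate memory use

const hotCache = new Map(); // Map preserves insertion order: simple LRU

function hotCacheGet(key) {
  if (!hotCache.has(key)) return null;
  const value = hotCache.get(key);
  hotCache.delete(key); // re-insert to mark as most recently used
  hotCache.set(key, value);
  return value;
}

function hotCachePut(key, value) {
  if (new TextEncoder().encode(value).length > MAX_VALUE_BYTES) {
    return false; // too large to pin in isolate memory
  }
  if (hotCache.size >= MAX_ENTRIES) {
    const oldest = hotCache.keys().next().value; // least recently used
    hotCache.delete(oldest);
  }
  hotCache.set(key, value);
  return true;
}
```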
    <div>
      <h2>A faster KV for all</h2>
      <a href="#a-faster-kv-for-all">
        
      </a>
    </div>
    <p>Most of these speed gains are already enabled with no additional action needed from customers. Your websites that are using KV are already responding to requests faster for your users, as are the other Cloudflare services using KV under the hood and the countless websites that depend upon them. </p><p>And we’re not done: we’ll continue to chase performance throughout our stack to make your websites faster. That’s how we’re going to move the needle towards a faster Internet. </p><p>To see Workers KV’s recent speed gains for your own KV namespaces, head over to your dashboard and check out the <a href="https://developers.cloudflare.com/kv/observability/metrics-analytics/"><u>new KV analytics</u></a>, with latency and cache status detailed per namespace.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">76i5gcdv0wbMNnwa7E17MR</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Rob Sutter</dc:creator>
            <dc:creator>Andrew Plunk</dc:creator>
        </item>
        <item>
            <title><![CDATA[Hardening Workers KV]]></title>
            <link>https://blog.cloudflare.com/workers-kv-restoring-reliability/</link>
            <pubDate>Wed, 02 Aug 2023 13:05:42 GMT</pubDate>
            <description><![CDATA[ A deep dive into the recent incidents relating to Workers KV, and how we’re going to fix them ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Over the last couple of months, Workers KV has suffered from a series of incidents, culminating in three back-to-back incidents during the week of July 17th, 2023. These incidents have directly impacted customers that rely on KV — and this isn’t good enough.</p><p>We’re going to share the work we have done to understand why KV has had such a spate of incidents and, more importantly, share in depth what we’re doing to dramatically improve how we deploy changes to KV going forward.</p>
    <div>
      <h3>Workers KV?</h3>
      <a href="#workers-kv">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/developer-platform/workers-kv/">Workers KV</a> — or just “KV” — is a key-value service for storing data: specifically, data with high read throughput requirements. It’s especially useful for user configuration, service routing, small assets and/or authentication data.</p><p>We use KV extensively inside Cloudflare too, with <a href="https://www.cloudflare.com/zero-trust/products/access/">Cloudflare Access</a> (part of our Zero Trust suite) and <a href="https://pages.cloudflare.com/">Cloudflare Pages</a> being some of our highest profile internal customers. Both teams benefit from KV’s ability to keep regularly accessed key-value pairs close to where they’re accessed, as well as its ability to scale out horizontally without any need to become an expert in operating KV.</p><p>Given Cloudflare’s extensive use of KV, it wasn’t just external customers impacted. Our own internal teams felt the pain of these incidents, too.</p>
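<p>For readers new to KV, this is roughly what reading a key from a Worker looks like. The binding name <code>CONFIG</code> is arbitrary, and the mock namespace below makes the sketch self-contained; the real binding is injected by the runtime and its <code>get()</code> is asynchronous (it returns a Promise), simplified to a synchronous call here:</p>

```javascript
// Mock of a KV namespace binding; in a real Worker, `env.CONFIG` is
// provided by the runtime and get() returns a Promise.
const env = {
  CONFIG: {
    store: new Map([["routing:v1", '{"region":"weur"}']]),
    get(key, type) {
      const raw = this.store.get(key) ?? null;
      if (raw === null || type !== "json") return raw;
      return JSON.parse(raw); // the "json" type parses the stored string
    },
  },
};

// In a real Worker: const routing = await env.CONFIG.get("routing:v1", "json");
function lookupRegion(env) {
  const routing = env.CONFIG.get("routing:v1", "json");
  return routing ? routing.region : "default";
}
```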
    <div>
      <h3>The summary of the post-mortem</h3>
      <a href="#the-summary-of-the-post-mortem">
        
      </a>
    </div>
    <p>Back in June 2023, we announced the move to a new architecture for KV, which is designed to address two major points of customer feedback we’ve had around KV: high latency for infrequently accessed keys (or a key accessed in different regions), and working to ensure the upper bound on KV’s eventual consistency model for writes is 60 seconds — not “mostly 60 seconds”.</p><p>At the time of that blog post, we’d already been testing this internally, including early access with our community champions and running a small % of production traffic to validate stability and performance expectations beyond what we could emulate within a staging environment.</p><p>However, in the weeks between mid-June and the series of incidents during the week of July 17th, we continued to increase the volume of traffic on the new architecture. When we did this, we would encounter previously unseen problems (many of these customer-impacting) — then immediately roll back, fix bugs, and repeat. Internally, we’d begun to identify that this pattern was becoming unsustainable — each attempt to cut over traffic to the new architecture would surface errors or behaviors we hadn’t seen before and couldn’t immediately explain, and thus we would roll back and assess.</p><p>The issues at the root of this series of incidents proved to be significantly challenging to track and observe. Once identified, the two causes themselves proved to be quick to fix, but (1) an observability gap in our error reporting and (2) a mutation to local state that resulted in an unexpected mutation of global state were both hard to observe and reproduce in the days after the customer-facing impact ended.</p>
    <div>
      <h3>The detail</h3>
      <a href="#the-detail">
        
      </a>
    </div>
    <p>One important piece of context to understand before we go into detail on the post-mortem: Workers KV is composed of two separate Workers scripts – internally referred to as the Storage Gateway Worker and SuperCache. SuperCache is an optional path in the Storage Gateway Worker workflow, and is the basis for KV's new (faster) backend (refer to the blog).</p><p>Here is a timeline of events:</p>
<table>
<thead>
  <tr>
    <th><span>Time</span></th>
    <th><span>Description</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>2023-07-17 21:52 UTC</span></td>
    <td><span>Cloudflare observes alerts showing 500 HTTP status codes in the MEL01 data-center (Melbourne, AU) and begins investigating.</span><br /><span>We also begin to see a small set of customers reporting HTTP 500s being returned via multiple channels. It is not immediately clear if this is a data-center-wide issue or KV-specific, as there had not been a recent KV deployment, and the issue directly correlated with three data-centers being brought back online.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-18 00:09 UTC</span></td>
    <td><span>We disable the new backend for KV in MEL01 in an attempt to mitigate the issue (noting that there had not been a recent deployment or change to the % of users on the new backend).</span></td>
  </tr>
  <tr>
    <td><span>2023-07-18 05:42 UTC</span></td>
    <td><span>Investigating alerts showing 500 HTTP status codes in VIE02 (Vienna, AT) and JNB01 (Johannesburg, SA).</span></td>
  </tr>
  <tr>
    <td><span>2023-07-18 13:51 UTC</span></td>
    <td><span>The new backend is disabled globally after seeing issues in VIE02 (Vienna, AT) and JNB01 (Johannesburg, SA) data-centers, similar to MEL01. In both cases, they had also recently come back online after maintenance, but it remained unclear as to why KV was failing.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 19:12 UTC</span></td>
    <td><span>The new backend is inadvertently re-enabled while deploying the update due to a misconfiguration in a deployment script. </span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 19:33 UTC</span></td>
    <td><span>The new backend is (re-) disabled globally as HTTP 500 errors return.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 23:46 UTC</span></td>
    <td><span>Broken Workers script pipeline deployed as part of gradual rollout due to incorrectly defined pipeline configuration in the deployment script.</span><br /><span>Metrics begin to report that a subset of traffic is being black-holed.</span></td>
  </tr>
  <tr>
    <td><span>2023-07-20 23:56 UTC</span></td>
    <td><span>Broken pipeline rolled back; error rates return to pre-incident (normal) levels.</span></td>
  </tr>
</tbody>
</table><p><i>All timestamps referenced are in Coordinated Universal Time (UTC).</i></p><p>We initially observed alerts showing 500 HTTP status codes in the MEL01 data-center (Melbourne, AU) at 21:52 UTC on July 17th, and began investigating. We also received reports from a small set of customers reporting HTTP 500s being returned via multiple channels. This correlated with three data centers being brought back online, and it was not immediately clear if it related to the data centers or was KV-specific — especially given there had not been a recent KV deployment. At 05:42 UTC, we began investigating alerts showing 500 HTTP status codes in the VIE02 (Vienna) and JNB01 (Johannesburg) data-centers; while both had recently come back online after maintenance, it was still unclear why KV was failing. At 13:51 UTC, we made the decision to disable the new backend globally.</p><p>Following the incident on July 18th, we attempted to deploy an allow-list configuration to reduce the scope of impacted accounts. However, while attempting to roll out a change for the Storage Gateway Worker at 19:12 UTC on July 20th, an older configuration was progressed, causing the new backend to be enabled again and leading to the third event. As the team worked to fix this and deploy the corrected configuration, they attempted to manually progress the deployment at 23:46 UTC, which passed a malformed configuration value and caused traffic to be sent to an invalid Workers script configuration.</p><p>After all deployments and the broken Workers configuration (pipeline) had been rolled back at 23:56 UTC on July 20th, we spent the following three days working to identify the root cause of the issue. We lacked <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> as KV's Worker script (responsible for much of KV's logic) was throwing an unhandled exception very early on in the request handling process. 
This was further exacerbated by prior work to disable error reporting in a disabled data-center due to the noise generated, which had previously resulted in logs being rate-limited upstream from our service.</p><p>This previous mitigation prevented us from capturing meaningful logs from the Worker, including identifying the exception itself, as an uncaught exception terminates request processing. This has raised the priority of improving how unhandled exceptions are reported and surfaced in a Worker (see Recommendations, below, for further details). This issue was compounded by the fact that KV's Worker script would fail to re-enter its "healthy" state when a Cloudflare data center was brought back online, as the Worker was mutating an environment variable perceived to be in request scope, but that was in global scope and persisted across requests. This effectively left the Worker “frozen” with the previous, invalid configuration for the affected locations.</p><p>Further, the introduction of a new progressive release process for Workers KV, designed to de-risk rollouts (as an action from a prior incident), prolonged the incident. We found a bug in the deployment logic that led to a broader outage due to an incorrectly defined configuration.</p><p>This configuration effectively caused us to drop a single-digit % of traffic until it was rolled back 10 minutes later. This code is untested at scale, and we need to spend more time hardening it before using it as the default path in production.</p><p>Additionally: although the root cause of the incidents was limited to three Cloudflare data-centers (Melbourne, Vienna, and Johannesburg), traffic across these regions still uses these data centers to route reads and writes to our system of record. Because these three data centers participate in KV’s new backend as regional tiers, a portion of traffic across Oceania, Europe, and Africa was affected. 
Only a portion of keys from enrolled namespaces uses any given data center as a regional tier in order to limit a single (regional) point of failure, so while traffic in <i>all</i> data centers in the region was impacted, no data center saw <i>all</i> of its traffic affected.</p><p>We estimated the affected traffic to be 0.2-0.5% of KV's global traffic (based on our error reporting); however, we observed some customers with error rates approaching 20% of their total KV operations. The impact was spread across KV namespaces and keys for customers within the scope of this incident.</p><p>Both KV’s high total traffic volume and its role as a critical dependency for many customers amplify the impact of even small error rates. In all cases, once the changes were rolled back, errors returned to normal levels and did not persist.</p>
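<p>The environment-variable pitfall described above is easy to reproduce in miniature. In the sketch below (names are illustrative), the buggy handler mutates an object at module scope, which is shared by every request the isolate serves, so the “disabled” state outlives the request that set it; the fixed handler derives a per-request view instead:</p>

```javascript
// Module scope: shared by ALL requests served by this isolate.
let config = { backendEnabled: true };

function handleRequestBuggy(request) {
  if (request.dataCenterDisabled) {
    config.backendEnabled = false; // BUG: mutates shared state for everyone
  }
  return config.backendEnabled;
}

function handleRequestFixed(request) {
  // Derive a per-request view; the shared object is never mutated.
  const requestConfig = {
    ...config,
    backendEnabled: config.backendEnabled && !request.dataCenterDisabled,
  };
  return requestConfig.backendEnabled;
}
```

<p>With the buggy handler, a single request seen while the data center is disabled leaves the isolate “frozen” with the invalid configuration for all later requests — the behavior described in this post-mortem.</p>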
    <div>
      <h3>Thinking about risks in building software</h3>
      <a href="#thinking-about-risks-in-building-software">
        
      </a>
    </div>
    <p>Before we dive into what we’re doing to significantly improve how we build, test, deploy, and observe Workers KV going forward, we think there are lessons from the real world that can equally apply to how we improve the safety factor of the software we ship.</p><p>In traditional engineering and construction, there is an extremely common procedure known as a “JSEA”, or <a href="https://en.wikipedia.org/wiki/Job_safety_analysis">Job Safety and Environmental Analysis</a> (sometimes just “JSA”). A JSEA is designed to help you iterate through a list of tasks, the potential hazards, and most importantly, the controls that will be applied to prevent those hazards from damaging equipment, injuring people, or worse.</p><p>One of the most critical concepts is the “hierarchy of controls” — that is, what controls should be applied to mitigate these hazards. In most practices, these are elimination, substitution, engineering, administration, and personal protective equipment. Elimination and substitution are fairly self-explanatory: is there a different way to achieve this goal? Can we eliminate that task completely? Engineering and administration ask us whether there is additional engineering work, such as changing the placement of a panel, or using a horizontal boring machine to lay an underground pipe vs. opening up a trench that people can fall into.</p><p>The last and lowest on the hierarchy is personal protective equipment (PPE). A hard hat can protect you from severe injury from something falling from above, but it’s a last resort, and it certainly isn’t guaranteed. In engineering practice, any hazard that <i>only</i> lists PPE as a mitigating factor is unsatisfactory: there must be additional controls in place. For example, instead of only wearing a hard hat, we should <i>engineer</i> the floor of scaffolding so that large objects (such as a wrench) cannot fall through in the first place. 
Further, if we require that all tools are attached to the wearer, then it significantly reduces the chance the tool can be dropped in the first place. These controls ensure that there are multiple degrees of mitigation — defense in depth — before your hard hat has to come into play.</p><p>Coming back to software, we can draw parallels between these controls: engineering can be likened to improving automation, gradual rollouts, and detailed metrics. Similarly, personal protective equipment can be likened to code review: useful, but code review cannot be the only thing protecting you from shipping bugs or untested code. Automation with linters, more robust testing, and new metrics are all vastly <i>safer</i> ways of shipping software.</p><p>As we spent time assessing where to improve our existing controls and how to put new controls in place to mitigate risks and improve the reliability (safety) of Workers KV, we took a similar approach: eliminating unnecessary changes, engineering more resilience into our codebase, automation, deployment tooling, and only then looking at human processes.</p>
    <div>
      <h3>How we plan to get better</h3>
      <a href="#how-we-plan-to-get-better">
        
      </a>
    </div>
    <p>Cloudflare is undertaking a larger, more structured review of KV's observability tooling, release infrastructure, and processes to mitigate not only the contributing factors to the incidents within this report, but also recent incidents related to KV. Critically, we see tooling and automation as the most powerful mechanisms for preventing incidents, with process improvements designed to provide an additional layer of protection. Process improvements alone cannot be the only mitigation.</p><p>Specifically, we have identified and prioritized the efforts below as the most important next steps towards meeting our own availability SLOs and, above all, making KV a service that customers building on Workers can rely on for storing configuration and service data in the hot path of their traffic:</p><ul><li><p>Substantially improve the existing observability tooling for unhandled exceptions, both for internal teams and customers building on Workers. This is especially critical for high-volume services, where traditional logging alone can be too noisy (and not specific enough) to aid in tracking down these cases. The existing ongoing work to land this will be prioritized further. In the meantime, we have directly addressed the specific uncaught exception within KV's primary Worker script.</p></li><li><p>Improve the safety around the mutation of environment variables in a Worker, which currently operate at "global" (per-isolate) scope, but can appear to be per-request. Mutating an environment variable in request scope mutates the value for all requests transiting that same isolate (in a given location), which can be unexpected. Changes here will need to keep backwards compatibility in mind.</p></li><li><p>Continue to expand KV’s test coverage to better address the above issues, in parallel with the aforementioned observability and tooling improvements, as an additional layer of defense. 
This includes allowing our test infrastructure to simulate traffic from any source data center, which would have allowed us to reproduce the issue more quickly and identify a root cause.</p></li><li><p>Improve our release process, including how KV changes and releases are reviewed and approved going forward. We will enforce a higher level of scrutiny for future changes and, where possible, reduce the number of changes deployed at once. This includes taking on new infrastructure dependencies, which will have a higher bar for both design and testing.</p></li><li><p>Add logging improvements, including sampling, throughout our request handling process to improve troubleshooting &amp; debugging. A significant amount of the challenge in these incidents stemmed from the lack of logging around specific requests (especially non-2xx requests).</p></li><li><p>Review and, where applicable, improve alerting thresholds surrounding error rates. As mentioned previously in this report, sub-percent error rates at a global scale can have a severe negative impact on specific users and/or locations: ensuring that errors are caught and not lost in the noise is an ongoing effort.</p></li><li><p>Address maturity issues with our progressive deployment tooling for Workers, which is net-new (and will eventually be exposed to customers directly).</p></li></ul><p>This is not an exhaustive list: we're continuing to expand on preventative measures associated with these and other incidents. These changes will not only improve KV's reliability, but also that of other services across Cloudflare that KV relies on, or that rely on KV.</p><p>We recognize that KV hasn’t lived up to our customers’ expectations recently. Because we rely on KV so heavily internally, we’ve felt that pain first hand as well. The work to fix the issues that led to this cycle of incidents is already underway. 
That work will not only improve KV’s reliability but also improve the reliability of any software written on the Cloudflare Workers developer platform, whether by our customers or by ourselves.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6sRjpTRuwGjPJmHgwHlg7u</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Charles Burnett</dc:creator>
            <dc:creator>Rob Sutter</dc:creator>
            <dc:creator>Kris Evans</dc:creator>
        </item>
        <item>
            <title><![CDATA[Build applications of any size on Cloudflare with the Queues open beta]]></title>
            <link>https://blog.cloudflare.com/cloudflare-queues-open-beta/</link>
            <pubDate>Mon, 14 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Queues enables developers to build performant and resilient distributed applications that span the globe. Any developer with a paid Workers plan can enroll in the Open Beta and start building today! ]]></description>
            <content:encoded><![CDATA[ <p>Message queues are a fundamental building block of cloud applications—and today the <a href="https://developers.cloudflare.com/queues/">Cloudflare Queues</a> open beta brings queues to every developer building for Region: Earth. Cloudflare Queues follows <a href="https://www.cloudflare.com/products/workers/">Cloudflare Workers</a> and <a href="https://www.cloudflare.com/products/r2/">Cloudflare R2</a> in a long line of <a href="https://www.cloudflare.com/application-services/">innovative application services</a> built for the Workers Developer Platform, enabling developers to build more complex applications without configuring networks, choosing regions, or estimating capacity. Best of all, like many other Cloudflare services, there are no egregious egress charges!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7L0APU7xTJNF6ByoHcvXpY/581bfdcb37394d7e30ce5bad852b2c58/image8.png" />
            
            </figure><p>If you’ve ever purchased something online and seen a message like “you will receive confirmation of your order shortly,” you’ve interacted with a queue. When you completed your order, your shopping cart and information were stored and the order was placed into a queue. At some later point, the order fulfillment service picks and packs your items and hands them off to the shipping service—again, via a queue. Your order may sit for only a minute, or much longer if an item is out of stock or a warehouse is busy; queues enable all of this functionality.</p><p>Message queues are great at decoupling components of applications, like the checkout and order fulfillment services for an <a href="https://www.cloudflare.com/ecommerce/">ecommerce site</a>. Decoupled services are easier to reason about, deploy, and implement, allowing you to ship features that delight your customers without worrying about synchronizing complex deployments.</p><p>Queues also allow you to batch and buffer calls to downstream services and <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a>. This post shows you how to enroll in the open beta, walks you through a practical example of using Queues to build a log sink, and tells you how we built Queues using other Cloudflare services. You’ll also learn a bit about the roadmap for the open beta.</p>
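<p>The checkout-and-fulfillment flow above can be sketched as a toy queue. This is illustrative only — a local, in-memory stand-in we wrote for this post, not Cloudflare Queues' API: the producer returns immediately after enqueueing, and the consumer drains messages later, in batches.</p>

```typescript
// Illustrative only: a minimal in-memory queue showing how a producer
// (checkout) stays decoupled from a consumer (fulfillment). Cloudflare
// Queues provides this durably and globally; this sketch is purely local.
type Order = { id: number; items: string[] };

class SimpleQueue<T> {
  private messages: T[] = [];

  // Producer side: enqueue and return immediately, without waiting
  // on whoever eventually processes the message.
  send(msg: T): void {
    this.messages.push(msg);
  }

  // Consumer side: drain up to `max` messages as a batch, in order.
  receiveBatch(max: number): T[] {
    return this.messages.splice(0, max);
  }
}

const queue = new SimpleQueue<Order>();
// The checkout service places orders without waiting on fulfillment.
queue.send({ id: 1, items: ["book"] });
queue.send({ id: 2, items: ["pen", "mug"] });
// Later, the fulfillment service consumes them as a batch.
const batch = queue.receiveBatch(10); // batch.length === 2, in send order
```

Because neither side calls the other directly, the fulfillment side can be slow, busy, or briefly offline without the checkout side noticing.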
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    
    <div>
      <h3>Enrolling in the open beta</h3>
      <a href="#enrolling-in-the-open-beta">
        
      </a>
    </div>
    <p>Open the <a href="https://dash.cloudflare.com">Cloudflare dashboard</a> and navigate to the <i>Workers</i> section. Select <i>Queues</i> from the <i>Workers</i> navigation menu and choose <i>Enable Queues Beta</i>.</p><p>Review your order and choose <i>Proceed to Payment Details</i>.</p><p><b>Note: If you are not already subscribed to a</b> <a href="https://www.cloudflare.com/plans/developer-platform/#overview"><b>Workers Paid Plan</b></a><b>, one will be added to your order automatically.</b></p><p>Enter your payment details and choose <i>Complete Purchase</i>. That’s it - you’re enrolled in the open beta! Choose <i>Return to Queues</i> on the confirmation page to return to the Cloudflare Queues home page.</p>
    <div>
      <h3>Creating your first queue</h3>
      <a href="#creating-your-first-queue">
        
      </a>
    </div>
    <p>After enabling the open beta, open the Queues home page and choose <i>Create Queue</i>. Name your queue <code>my-first-queue</code> and choose <i>Create queue</i>. That’s all there is to it!</p><p>The dashboard displays a confirmation message along with a list of all the queues in your account.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/8jcg7ga1WIbqJplp8XRsP/cbc7cdf9420a4d51c714aa8b4b96bd80/image1-18.png" />
            
            </figure><p><b>Note: As of the writing of this blog post, each account is limited to ten queues. We intend to raise this limit as we build towards general availability.</b></p>
    <div>
      <h3>Managing your queues with Wrangler</h3>
      <a href="#managing-your-queues-with-wrangler">
        
      </a>
    </div>
    <p>You can also manage your queues from the command line using <a href="https://github.com/cloudflare/wrangler2">Wrangler</a>, the CLI for Cloudflare Workers. In this section, you build a simple but complete application implementing a log aggregator or sink to learn how to integrate Workers, Queues, and R2.</p><p><b>Setting up resources</b></p><p>To create this application, you need access to a Cloudflare Workers account with a subscription plan, access to the Queues open beta, and an R2 plan.</p><p><a href="https://developers.cloudflare.com/workers/get-started/guide/#1-install-wrangler-workers-cli">Install</a> and <a href="https://developers.cloudflare.com/workers/get-started/guide/#2-authenticate-wrangler">authenticate</a> Wrangler, then run <code>wrangler queues create log-sink</code> from the command line to create a queue for your application.</p><p>Run <code>wrangler queues list</code> and note that Wrangler displays your new queue.</p><p><b>Note: The following screenshots use the</b> <a href="https://stedolan.github.io/jq/"><b>jq</b></a> <b>utility to format the JSON output of wrangler commands. You</b> <b><i>do not</i></b> <b>need to install jq to complete this application.</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5DzA0apSCcSq7lDHPCaFwt/d44c7290077d4ddc9e3283614ea7bbb9/8-create-and-list-a-queue.png" />
            
            </figure><p>Finally, run <code>wrangler r2 bucket create log-sink</code> to create an R2 bucket to store your aggregated logs. After the bucket is created, run <code>wrangler r2 bucket list</code> to see your new bucket.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QT0a9c73sO2l5pTUbROs9/ab6b9e7858c8f2e90149839c275c7aa6/9-create-and-list-a-bucket.png" />
            
            </figure><p><b>Creating your Worker</b></p><p>Next, create a Workers application with two handlers: a <code>fetch()</code> handler to receive individual incoming log lines and a <code>queue()</code> handler to aggregate a batch of logs and write the batch to R2.</p><p>In an empty directory, run <code>wrangler init</code> to create a new Cloudflare Workers application. When prompted:</p><ul><li><p>Choose “y” to create a new <i>package.json</i></p></li><li><p>Choose “y” to use TypeScript</p></li><li><p>Choose “Fetch handler” to create a new Worker at <i>src/index.ts</i></p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3PiZnMr0Q4S0fzsLDnHg2G/002b64768711ba224e8cb3a9618607d2/10-wrangler-init.png" />
            
            </figure><p>Open <i>wrangler.toml</i> and replace the contents with the following:</p>
    <div>
      <h4><i><b>wrangler.toml</b></i></h4>
      <a href="#wrangler-toml">
        
      </a>
    </div>
    
            <pre><code>name = "queues-open-beta"
main = "src/index.ts"
compatibility_date = "2022-11-03"

[[queues.producers]]
queue = "log-sink"
binding = "BUFFER"

[[queues.consumers]]
queue = "log-sink"
max_batch_size = 100
max_batch_timeout = 30

[[r2_buckets]]
bucket_name = "log-sink"
binding = "LOG_BUCKET"</code></pre>
            <p>The <code>[[queues.producers]]</code> section creates a <i>producer</i> binding for the Worker at <i>src/index.ts</i> called <code>BUFFER</code> that refers to the <i>log-sink</i> queue. This Worker can place messages onto the <i>log-sink</i> queue by calling <code>await env.BUFFER.send(log)</code>.</p><p>The <code>[[queues.consumers]]</code> section creates a <i>consumer</i> binding for the <i>log-sink</i> queue for your Worker. Once the <i>log-sink</i> queue has a batch ready to be processed (or <i>consumed</i>), the Workers runtime will look for the <code>queue()</code> event handler in <i>src/index.ts</i> and invoke it, passing the batch as an argument. The <code>queue()</code> function signature looks as follows:</p><p><code>async queue(batch: MessageBatch&lt;any&gt;, env: Env): Promise&lt;void&gt; {</code></p><p>The final section in your <i>wrangler.toml</i> creates a binding for the <i>log-sink</i> R2 bucket that makes the bucket available to your Worker via <code>env.LOG_BUCKET</code>.</p>
    <div>
      <h4><i><b>src/index.ts</b></i></h4>
      <a href="#src-index-ts">
        
      </a>
    </div>
    <p>Open <i>src/index.ts</i> and replace the contents with the following code:</p>
            <pre><code>export interface Env {
 BUFFER: Queue;
 LOG_BUCKET: R2Bucket;
}
 
export default {
 async fetch(request: Request, env: Env): Promise&lt;Response&gt; {
   let log = await request.json();
   await env.BUFFER.send(log);
   return new Response("Success!");
 },
 async queue(batch: MessageBatch&lt;any&gt;, env: Env): Promise&lt;void&gt; {
   const logBatch = JSON.stringify(batch.messages);
   await env.LOG_BUCKET.put(`logs/${Date.now()}.log.json`, logBatch);
 },
};
</code></pre>
            <p>The <code>export interface Env</code> section exposes the two bindings you defined in <i>wrangler.toml</i>: a queue named <i>BUFFER</i> and an R2 bucket named <i>LOG_BUCKET</i>.</p><p>The <code>fetch()</code> handler parses the request body as JSON, sends the body to the <i>BUFFER</i> queue, then returns an HTTP 200 response with the message <i>Success!</i></p><p>The <code>queue()</code> handler receives a batch of messages containing log entries, serializes the batch to a single JSON string, then writes that string to the <i>LOG_BUCKET</i> R2 bucket using the current timestamp in the filename.</p><p><b>Publishing and running your application</b></p><p>To publish your log sink application, run <code>wrangler publish</code>. Wrangler packages your application and its dependencies and deploys it to Cloudflare’s global network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/131yCNzIgg4qaU5LKSFrcu/eddd596bcf02e369163e057abde27944/11-wrangler-publish.png" />
            
            </figure><p>Note that the output of <code>wrangler publish</code> includes the <i>BUFFER</i> queue binding, indicating that this Worker is a <i>producer</i> and can place messages onto the queue. The final line of output also indicates that this Worker is a <i>consumer</i> for the <i>log-sink</i> queue and can read and remove messages from the queue.</p><p>Use your favorite API client, like <a href="https://curl.se/">curl</a>, <a href="https://httpie.io/">httpie</a>, or <a href="https://www.postman.com/">Postman</a>, to send JSON log entries to the published URL for your Worker via HTTP POST requests. Navigate to your <i>log-sink</i> R2 bucket in the <a href="https://dash.cloudflare.com">Cloudflare dashboard</a> and note that the <i>logs</i> prefix is now populated with aggregated logs from your request.</p>
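<p>If you prefer a script to an API client, the request can be sketched in TypeScript. The URL below is a hypothetical placeholder — substitute the workers.dev URL printed by <code>wrangler publish</code> — and the shape of the log entry is up to you, since the Worker's <code>fetch()</code> handler accepts any JSON body.</p>

```typescript
// Sketch of posting one log entry to your deployed log sink Worker.
// WORKER_URL is a hypothetical placeholder: use the URL from `wrangler publish`.
const WORKER_URL = "https://queues-open-beta.example.workers.dev";

// The Worker accepts any JSON body; this entry shape is just an example.
function buildLogEntry(level: string, message: string): string {
  return JSON.stringify({ level, message, timestamp: Date.now() });
}

async function sendLog(level: string, message: string): Promise<number> {
  const res = await fetch(WORKER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildLogEntry(level, message),
  });
  return res.status; // the Worker replies 200 with the body "Success!"
}
```

Each call enqueues one message; the batched write to R2 happens later, when your <code>queue()</code> handler fires.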
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6W4jmz2ezkbUwrs0uZi4xu/a8f2f124feb5c9a121c38fbfb8a1d1df/image4-9.png" />
            
            </figure><p>Download and open one of the logfiles to view the JSON array inside. That’s it - with fewer than 45 lines of code and config, you’ve built a log aggregator to ingest and store data in R2!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dw6XLyx7REjzje0Y0i7as/f154e7ed56152651c2f664b0dec9cebe/13-aggregated-log-content.png" />
            
            </figure>
    <div>
      <h2>Buffering R2 writes with Queues in the real world</h2>
      <a href="#buffering-r2-writes-with-queues-in-the-real-world">
        
      </a>
    </div>
    <p>In the previous example, you created a simple Workers application that buffers data into batches before writing the batches to R2. This reduces the number of calls to the downstream service, reducing load on the service and saving you money.</p><p><a href="https://uuid.rocks/">UUID.rocks</a>, the fastest UUIDv4-as-a-service, wanted to confirm whether their API truly generates unique IDs on every request. With 80,000 requests per day, it wasn’t trivial to find out. They decided to write every generated UUID to R2 to compare IDs across the entire population. However, writing directly to R2 at the rate UUIDs are generated is inefficient and expensive.</p><p>To reduce writes and costs, UUID.rocks introduced Cloudflare Queues into their UUID generation workflow. Each time a UUID is requested, a Worker places the value of the UUID into a queue. Once enough messages have been received, the buffered batch of JSON objects is written to R2. This avoids invoking an R2 write on every API call, saving costs and making the data easier to process later.</p><p>The <a href="https://github.com/jahands/uuid-queue">uuid-queue application</a> consists of a single Worker with three event handlers:</p><ol><li><p>A fetch handler that receives a JSON object representing the generated UUID and writes it to a Cloudflare Queue.</p></li><li><p>A queue handler that writes batches of JSON objects to R2 in CSV format.</p></li><li><p>A scheduled handler that combines batches from the previous hour into a single file for future processing.</p></li></ol><p>To view the source or deploy this application into your own account, <a href="https://github.com/jahands/uuid-queue">visit the repository on GitHub</a>.</p>
    <div>
      <h2>How we built Cloudflare Queues</h2>
      <a href="#how-we-built-cloudflare-queues">
        
      </a>
    </div>
    <p>Like many of the Cloudflare services you use and love, we built Queues by composing other Cloudflare services like Workers and Durable Objects. This enabled us to rapidly solve two difficult challenges: securely invoking your Worker from our own service, and maintaining strongly consistent state at scale. Several recent Cloudflare innovations helped us overcome these challenges.</p>
    <div>
      <h3>Securely invoking your Worker</h3>
      <a href="#securely-invoking-your-worker">
        
      </a>
    </div>
    <p>In the <i>Before Times</i> (early 2022), invoking one Worker from another Worker meant a fresh HTTP call from inside your script. This was a brittle experience, requiring you to know your downstream endpoint at deployment time. Nested invocations ran as HTTP calls, passing all the way through the Cloudflare network a second time and adding latency to your request. It also meant security was on you - if you wanted to control how that second Worker was invoked, you had to create and implement your own authentication and authorization scheme.</p><p><b>Worker to Worker requests</b></p><p>During Platform Week in May 2022, <a href="/service-bindings-ga/">Service Worker Bindings</a> entered general availability. With Service Worker Bindings, your Worker code has a binding to another Worker in your account that you invoke directly, avoiding the network penalty of a nested HTTP call. This removes the performance and security barriers discussed previously, but it still requires that you hard-code your nested Worker at compile time. You can think of this setup as “static dispatch,” where your Worker has a static reference to another Worker where it can dispatch events.</p><p><b>Dynamic dispatch</b></p><p>As Service Worker Bindings entered general availability, we also <a href="/workers-for-platforms-ga/">launched a closed beta</a> of Workers for Platforms, our tool suite to help make any product programmable. With Workers for Platforms, software as a service (SaaS) and platform providers can allow users to upload their own scripts and run them safely via Cloudflare Workers. 
User scripts are not known at compile time, but are <i>dynamically dispatched</i> at runtime.</p><p><a href="/workers-for-platforms-ga/">Workers for Platforms entered</a> general availability during <a href="https://www.cloudflare.com/ga-week/">GA week</a> in September 2022, and is available for all customers to build with today.</p><p>With dynamic dispatch generally available, we now have the ability to discover and invoke Workers at runtime without the performance penalty of HTTP traffic over the network. We use dynamic dispatch to invoke your queue’s consumer Worker whenever a message or batch of messages is ready to be processed.</p>
    <div>
      <h3>Consistent stateful data with Durable Objects</h3>
      <a href="#consistent-stateful-data-with-durable-objects">
        
      </a>
    </div>
    <p>Another challenge we faced was storing messages durably without sacrificing performance. We set a design goal of ensuring that all messages are persisted to disk in multiple locations before we confirm receipt of the message to the user. Again, we turned to an existing Cloudflare product—<a href="https://developers.cloudflare.com/workers/learning/using-durable-objects/">Durable Objects</a>—which <a href="/durable-objects-ga/">entered general availability nearly one year ago today</a>.</p><p>Durable Objects are named instances of JavaScript classes that are guaranteed to be unique across Cloudflare’s entire network. Durable Objects process messages in order on a single thread, allowing for coordination across messages, and provide a strongly consistent storage API for key-value pairs. Offloading the hard problem of storing data durably in a distributed environment to Durable Objects allowed us to reduce the time to build Queues and prepare it for open beta.</p>
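<p>The coordination pattern a Durable Object enables can be sketched in plain TypeScript. This is a sketch of the idea only, not Queues' actual implementation: a single instance accepts messages one at a time, persists each one before acknowledging it, and later drains them in insertion order. A <code>Map</code> stands in here for the Durable Object storage API.</p>

```typescript
// Sketch (not Queues' real code) of single-threaded, in-order coordination:
// persist first, acknowledge second, drain in insertion order.
class QueueShard {
  // A Map stands in for the strongly consistent Durable Object storage API;
  // JavaScript Maps iterate in insertion order, which gives us ordering.
  private storage = new Map<string, string>();
  private nextId = 0;

  // Persist the message before acknowledging: the id is only returned
  // (the "receipt") after the write completes, so an acked message survives
  // even if the consumer runs much later or retries.
  enqueue(body: string): string {
    const id = `msg-${this.nextId++}`;
    this.storage.set(id, body);
    return id;
  }

  // Drain up to `max` messages in insertion order, deleting as we go.
  dequeueBatch(max: number): string[] {
    const batch: string[] = [];
    for (const [id, body] of this.storage) {
      if (batch.length >= max) break;
      batch.push(body);
      this.storage.delete(id);
    }
    return batch;
  }
}
```

Because each queue maps to uniquely named instances, all writes for a given queue funnel through single-threaded code — no distributed locking needed for ordering.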
    <div>
      <h2>Open beta roadmap</h2>
      <a href="#open-beta-roadmap">
        
      </a>
    </div>
    <p>Our open beta process empowers you to influence feature prioritization and delivery. We’ve set ambitious goals for ourselves on the path to general availability, most notably supporting unlimited throughput while maintaining 100% durability. We also have many other great features planned, like first-in first-out (FIFO) message processing and API compatibility layers to ease migrations, but we need your feedback to build what you need most, first.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare Queues is a global message queue for the Workers developer. Building with Queues makes your applications more performant, resilient, and cost-effective—but we’re not done yet. <a href="https://dash.cloudflare.com">Join the Open Beta today</a> and <a href="https://discord.gg/aTsevRH3pG">share your feedback</a> to help shape the Queues roadmap as we deliver application integration services for the next generation cloud.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">44r8mGfFRsG9yOGC0CDQnu</guid>
            <dc:creator>Rob Sutter</dc:creator>
        </item>
    </channel>
</rss>