
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, the technologies used, and how to join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 18:04:24 GMT</lastBuildDate>
        <item>
            <title><![CDATA[“You get Instant Purge, and you get Instant Purge!” — all purge methods now available to all customers]]></title>
            <link>https://blog.cloudflare.com/instant-purge-for-all/</link>
            <pubDate>Tue, 01 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Following up on having the fastest purge in the industry, we have now increased Instant Purge quotas across all Cloudflare plans.  ]]></description>
            <content:encoded><![CDATA[ <p>There's a tradition at Cloudflare of launching real products on April 1, instead of the usual joke product announcements circulating online today. In previous years, we've introduced impactful products like <a href="https://blog.cloudflare.com/announcing-1111/"><u>1.1.1.1</u></a> and <a href="https://blog.cloudflare.com/introducing-1-1-1-1-for-families/"><u>1.1.1.1 for Families</u></a>. Today, we're excited to continue this tradition by <b>making every purge method available to all customers, regardless of plan type.</b></p><p>During Birthday Week 2024, we <a href="https://blog.cloudflare.com/instant-purge/"><u>announced our intention</u></a> to bring the full suite of purge methods — including purge by URL, purge by hostname, purge by tag, purge by prefix, and purge everything — to all Cloudflare plans. Historically, methods other than "purge by URL" and "purge everything" were exclusive to Enterprise customers. However, we've been openly rebuilding our purge pipeline over the past few years (hopefully you’ve read <a href="https://blog.cloudflare.com/part1-coreless-purge/"><u>some of our</u></a> <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>blog</u></a> <a href="https://blog.cloudflare.com/instant-purge/"><u>series</u></a>), and we're thrilled to share the results more broadly. We've spent recent months ensuring the new Instant Purge pipeline performs consistently under 150 ms, even during increased load scenarios, making it ready for every customer.  </p><p>But that's not all — we're also significantly raising the default purge rate limits for Enterprise customers, allowing even greater purge throughput thanks to the efficiency of our newly developed <a href="https://blog.cloudflare.com/instant-purge/"><u>Instant Purge</u></a> system.</p>
    <div>
      <h2>Building a better purge: a two-year journey</h2>
      <a href="#building-a-better-purge-a-two-year-journey">
        
      </a>
    </div>
    <p>Stepping back, today's announcement represents roughly two years of focused engineering. Near the end of 2022, our team went heads down rebuilding Cloudflare’s purge pipeline with a clear yet challenging goal: dramatically increase our throughput while maintaining near-instant invalidation across our global network.</p><p>Cloudflare operates <a href="https://www.cloudflare.com/network"><u>data centers in over 335 cities worldwide</u></a>. Popular cached assets can reside across all of our data centers, meaning each purge request must quickly propagate to every location caching that content. Upon receiving a purge command, each data center must efficiently locate and invalidate cached content, preventing stale responses from being served. The amount of content that must be invalidated can vary drastically, from a single file, to all cached assets associated with a particular hostname. After the content has been purged, any subsequent requests will trigger retrieval of a fresh copy from the origin server, which will be stored in Cloudflare’s cache during the response. </p><p>Ensuring consistent, rapid propagation of purge requests across a vast network introduces substantial technical challenges, especially when accounting for occasional data center outages, maintenance, or network interruptions. Maintaining consistency under these conditions requires robust distributed systems engineering.</p>
    <div>
      <h2>How did we scale purge?</h2>
      <a href="#how-did-we-scale-purge">
        
      </a>
    </div>
    <p>We've <a href="https://blog.cloudflare.com/instant-purge/"><u>previously discussed</u></a> how our new Instant Purge system was architected to achieve sub-150 ms purge times. It’s worth noting that the performance improvements were only part of what our new architecture achieved, as it also helped us solve significant scaling challenges around storage and throughput that allowed us to bring Instant Purge to all users. </p><p>Initially, our purge system scaled well, but with rapid customer growth, storing the millions of purge keys generated each day steadily reduced the space available for caching. Early attempts to manage this storage and throughput demand involved <a href="https://www.boltic.io/blog/kafka-queue"><u>queues</u></a> and batching for smoothing traffic spikes, but this introduced latency and underscored the tight coupling between increased usage and rising storage costs.</p><p>We needed to revisit our thinking on how to better store purge keys and when to remove purged content so we could reclaim space. Historically, when a customer purged by tag, prefix, or hostname, Cloudflare would mark the content as expired and allow it to be evicted later. This is known as lazy-purge because nothing is actively removed from disk. Lazy-purge is fast, but not necessarily efficient, because it consumes storage for expired but not-yet-evicted content. After examining global or data center-level indexing for purge keys, we decided it wasn't viable: at the scale of our network, those indices would add significant system complexity and latency. So instead, we opted for per-machine indexing, integrating indices directly alongside our cache proxies. 
This minimized network complexity, simplified reliability, and provided predictable scaling.</p><p>After careful analysis and benchmarking, we selected <a href="https://rocksdb.org/"><u>RocksDB</u></a>, an embedded key-value store that we could optimize for our needs, which formed the basis of <a href="https://blog.cloudflare.com/instant-purge/#putting-it-all-together"><u>CacheDB</u></a>, our Rust-based service running alongside each cache proxy. CacheDB manages indexing and immediate purge execution (active purge), significantly reducing storage needs and freeing space for caching.</p>
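<p>Conceptually, the per-machine index maps each purge key to the cached assets it covers, so an active purge can delete matching content immediately instead of waiting for eviction. The sketch below is purely illustrative: a Python dict stands in for the embedded RocksDB store, and every name here is invented rather than taken from CacheDB.</p>

```python
class PurgeIndex:
    """Toy per-machine purge index: purge key -> cached asset identifiers."""

    def __init__(self):
        self._index = {}  # purge key (tag/prefix/hostname) -> set of assets
        self._store = {}  # asset identifier -> cached response body

    def cache(self, asset, body, keys):
        """Store an asset and register it under each of its purge keys."""
        self._store[asset] = body
        for key in keys:
            self._index.setdefault(key, set()).add(asset)

    def purge(self, key):
        """Active purge: delete matching assets now, reclaiming disk space
        instead of leaving expired entries behind (the lazy-purge problem)."""
        removed = 0
        for asset in self._index.pop(key, set()):
            if self._store.pop(asset, None) is not None:
                removed += 1
        return removed

idx = PurgeIndex()
idx.cache("example.com/a.css", b"...", keys=["tag:css", "host:example.com"])
idx.cache("example.com/b.js", b"...", keys=["tag:js", "host:example.com"])
print(idx.purge("host:example.com"))  # 2: both assets removed immediately
```

<p>Because purged content is deleted right away rather than merely marked expired, the index never accumulates storage for content that will never be served again.</p>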
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4FZ0bQSx5MUhx3x3hwlRuk/91a27af7db5e629cd6d5fbe692397eaf/image2.png" />
          </figure><p>Local queues within CacheDB buffer purge operations to ensure consistent throughput without latency spikes, while the cache proxies consult CacheDB to guarantee rapid, active purges. Our updated distribution pipeline broadcasts purges directly to CacheDB instances across machines, dramatically improving throughput and purge speed.</p><p>Using CacheDB, we've reduced storage requirements 10x by eliminating lazy purge storage accumulation, instantly freeing valuable disk space. The freed storage enhances cache retention, boosting cache HIT ratios and minimizing origin egress. These savings in storage and increased throughput allowed us to scale to the point where we can offer Instant Purge to more customers.</p><p>For more information on how we designed the new Instant Purge system, please see the previous <a href="https://blog.cloudflare.com/instant-purge/"><u>installment</u></a> of our Purge series blog posts. </p>
    <div>
      <h2>Striking the right balance: what to purge and when</h2>
      <a href="#striking-the-right-balance-what-to-purge-and-when">
        
      </a>
    </div>
    <p>Moving on to practical considerations of using these new purge methods, it’s important to use the right method for what you want to invalidate. Purging too aggressively can overwhelm origin servers with unnecessary requests, driving up egress costs and potentially causing downtime. Conversely, insufficient purging leaves visitors with outdated content. Balancing precision and speed is vital.</p><p>Cloudflare supports multiple targeted purge methods to help customers achieve this balance.</p><ul><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-everything/"><b><u>Purge Everything</u></b></a>: Clears all cached content associated with a website.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge_by_prefix/"><b><u>Purge by Prefix</u></b></a>: Targets URLs sharing a common prefix.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-hostname/"><b><u>Purge by Hostname</u></b></a>: Invalidates content by specific hostnames.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-single-file/"><b><u>Purge by URL (single-file purge</u></b></a><b>)</b>: Precisely targets individual URLs.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/"><b><u>Purge by Tag</u></b></a>: Uses <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers"><u>Cache-Tag</u></a> headers to invalidate grouped assets, offering flexibility for complex cache management scenarios.</p></li></ul><p>Starting today, all of these methods are available to every Cloudflare customer.    </p>
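<p>As an illustration of purge by tag: the origin tags responses with a Cache-Tag header, and a later purge request naming one of those tags invalidates every asset carrying it. The tag values below are hypothetical:</p>

```http
HTTP/1.1 200 OK
Content-Type: text/html
Cache-Tag: product-4242,template-v2,en-us
```

<p>Purging the tag <code>template-v2</code> would then invalidate this page along with every other cached asset tagged with it, without touching unrelated content.</p>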
    <div>
      <h2>How to purge </h2>
      <a href="#how-to-purge">
        
      </a>
    </div>
    <p>Users can select their purge method directly in the Cloudflare dashboard, located under the Cache tab in the <a href="https://dash.cloudflare.com/?to=/:account/:zone/caching/configuration"><u>configurations section</u></a>, or via the <a href="https://developers.cloudflare.com/api/resources/cache/"><u>Cloudflare API</u></a>. Each purge request should clearly specify the targeted URLs, hostnames, prefixes, or cache tags relevant to the selected purge type (known as purge keys). For instance, a prefix purge request might specify a directory such as example.com/foo/bar. To maximize efficiency and throughput, batching multiple purge keys in a single request is recommended over sending individual purge requests each with a single key.</p>
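<p>For instance, a batched purge-by-prefix request to the API's <code>purge_cache</code> endpoint might be assembled as below. This is a sketch: the zone ID and token are placeholders, and the prefixes are examples.</p>

```python
import json

zone_id = "ZONE_ID"      # placeholder: your zone's identifier
api_token = "API_TOKEN"  # placeholder: a token with cache purge permission

# Batching several purge keys in one request is more efficient than
# sending one request per key.
payload = {"prefixes": ["example.com/foo/bar", "example.com/static/"]}

url = f"https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache"
headers = {
    "Authorization": f"Bearer {api_token}",
    "Content-Type": "application/json",
}
body = json.dumps(payload)
print(url)
print(body)
```

<p>The same endpoint accepts <code>files</code>, <code>hosts</code>, <code>tags</code>, or <code>purge_everything</code> in place of <code>prefixes</code>; see the API documentation for the exact request shapes and per-plan limits.</p>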
    <div>
      <h2>How much can you purge?</h2>
      <a href="#how-much-can-you-purge">
        
      </a>
    </div>
    <p>The new rate limits for Cloudflare's purge by tag, prefix, hostname, and purge everything are different for each plan type. We use a <a href="https://en.wikipedia.org/wiki/Token_bucket"><u>token bucket</u></a> rate limit system, so each account has a token bucket with a maximum size based on plan type. When we receive a purge request, we first add tokens to the account’s bucket: the time elapsed since the account’s last purge request multiplied by the refill rate for its plan type (which can yield a fraction of a token). Then we check whether there’s at least one whole token in the bucket; if so, we remove it and process the purge request. If not, the purge request is rate limited. An easy way to think about this rate limit is that the refill rate represents the sustained rate of requests a user can send, while the bucket size represents the maximum burst of requests available.</p><p>For example, a free user starts with a bucket size of 25 requests and a refill rate of 5 requests per minute (one request per 12 seconds). If the user were to send 26 requests all at once, the first 25 would be processed, but the last request would be rate limited. They would need to wait 12 seconds and retry their last request for it to succeed. 
</p><p>The current limits are applied per <b>account</b>: </p><table><tr><td><p><b>Plan</b></p></td><td><p><b>Bucket size</b></p></td><td><p><b>Request refill rate</b></p></td><td><p><b>Max keys per request</b></p></td><td><p><b>Total keys</b></p></td></tr><tr><td><p><b>Free</b></p></td><td><p>25 requests</p></td><td><p>5 per minute</p></td><td><p>100</p></td><td><p>500 per minute</p></td></tr><tr><td><p><b>Pro</b></p></td><td><p>25 requests</p></td><td><p>5 per second</p></td><td><p>100</p></td><td><p>500 per second</p></td></tr><tr><td><p><b>Biz</b></p></td><td><p>50 requests</p></td><td><p>10 per second</p></td><td><p>100</p></td><td><p>1,000 per second</p></td></tr><tr><td><p><b>Enterprise</b></p></td><td><p>500 requests</p></td><td><p>50 per second</p></td><td><p>100</p></td><td><p>5,000 per second</p></td></tr></table><p>More detailed documentation on all purge rate limits can be found in our <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/"><u>documentation</u></a>.</p>
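<p>The refill logic above can be sketched in a few lines. This is an illustrative model, not Cloudflare's implementation; the numbers match the Free plan row of the table (bucket of 25, refill of 5 requests per minute, i.e. one token every 12 seconds):</p>

```python
class TokenBucket:
    """Toy model of the purge rate limit: a bucket refilled over time."""

    def __init__(self, size, refill_per_min):
        self.size = size
        self.seconds_per_token = 60.0 / refill_per_min
        self.tokens = float(size)  # the bucket starts full
        self.last = 0.0            # time of the previous request, in seconds

    def allow(self, now):
        # Refill first: elapsed time divided by the per-token interval,
        # capped at the bucket size. The result may be a fraction of a token.
        elapsed = now - self.last
        self.tokens = min(self.size, self.tokens + elapsed / self.seconds_per_token)
        self.last = now
        if self.tokens >= 1.0:     # need at least one whole token
            self.tokens -= 1.0
            return True
        return False               # otherwise the request is rate limited

free = TokenBucket(size=25, refill_per_min=5)
burst = [free.allow(now=0.0) for _ in range(26)]
print(sum(burst))            # 25: the 26th request in the burst is rejected
print(free.allow(now=12.0))  # True: one token has refilled after 12 seconds
```

<p>Nothing happens on a timer here; tokens are added lazily whenever the next request arrives, which is all a token bucket needs.</p>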
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’ve spent a lot of time optimizing our purge platform. But we’re not done yet. Looking forward, we will continue to enhance the performance of Cloudflare’s single-file purge. The current P50 latency is around 250 ms, and we believe we can bring it under 200 ms. We will also increase purge throughput across all of our systems, and keep developing filtering techniques so we can continue to scale effectively and let customers purge whatever they choose, whenever they choose. </p><p>We invite you to try out our new purge system today and deliver an instant, seamless experience to your visitors.</p> ]]></content:encoded>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Cache Purge]]></category>
            <guid isPermaLink="false">4LTq8Utw6K58W4ojKxsqw8</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Connor Harwood</dc:creator>
            <dc:creator>Zaidoon Abd Al Hadi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Part 2: Rethinking cache purge with a new architecture]]></title>
            <link>https://blog.cloudflare.com/rethinking-cache-purge-architecture/</link>
            <pubDate>Wed, 21 Jun 2023 13:00:47 GMT</pubDate>
            <description><![CDATA[ In this post we’ll be talking about some of the architecture improvements we’ve made so far for Cache Purge and what we’re working on now ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3EU9TDz1YS47zrHosO5j0g/ce6992e656f33e7819696b3b36f30c0a/image4-12.png" />
            
            </figure><p>In <a href="/part1-coreless-purge/">Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation</a>, we outlined the importance of cache invalidation and the difficulties of purging caches, how our existing purge system was designed and performed, and we gave a high level overview of what we wanted our new Cache Purge system to look like.</p><p>It’s been a while since we published the first blog post and it’s time for an update on what we’ve been working on. In this post we’ll be talking about some of the architecture improvements we’ve made so far and what we’re working on now.</p>
    <div>
      <h2>Cache Purge end to end</h2>
      <a href="#cache-purge-end-to-end">
        
      </a>
    </div>
    <p>We touched on the high level design of what we called the “coreless” purge system in part 1, but let’s dive deeper into what that design encompasses by following a purge request from end to end:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2wDdgC1O4K0X6npqec7hd9/11748d6419268a8efdfaaa1c24542e1a/image3-13.png" />
            
            </figure>
    <div>
      <h3>Step 1: Request received locally</h3>
      <a href="#step-1-request-received-locally">
        
      </a>
    </div>
    <p>An API request to Cloudflare is routed to the nearest Cloudflare data center and passed to an <b>API Gateway worker</b>. This worker looks at the request URL to see which service it should be sent to and forwards the request to the appropriate upstream backend. Most endpoints of the Cloudflare API are currently handled by centralized services, so the <b>API Gateway worker</b> is often just proxying requests to the nearest “core” data center, which has its own gateway services to handle authentication, authorization, and further routing. But for endpoints which aren’t handled centrally, the <b>API Gateway worker</b> must handle <a href="https://www.cloudflare.com/learning/access-management/what-is-authentication/">authentication</a> and route authorization itself, and then proxy to an appropriate upstream. For cache purge requests, that upstream is a <b>Purge Ingest worker</b> in the same data center.</p>
    <div>
      <h3>Step 2: Purges tested locally</h3>
      <a href="#step-2-purges-tested-locally">
        
      </a>
    </div>
    <p>The <b>Purge Ingest worker</b> evaluates the purge request to make sure it is processable. It scans the URLs in the body of the request to see if they’re valid, then attempts to purge the URLs from the local data center’s cache. This concept of <b>local purging</b> was a new step introduced with the coreless purge system, allowing us to capitalize on existing logic already used in every data center.</p><p>By leveraging the same ownership checks our data centers use to serve a zone’s normal traffic on the URLs being purged, we can determine if those URLs are even cacheable by the zone. Currently, <b>more than 50%</b> of the URLs we’re asked to purge can’t be cached by the requesting zones, either because they don’t own the URLs (e.g. a customer asking us to purge <a href="https://cloudflare.com">https://cloudflare.com</a>) or because the zone’s settings for the URL prevent caching (e.g. the zone has a “bypass” cache rule that matches the URL). All such purges are superfluous and shouldn’t be processed further, so we filter them out and avoid broadcasting them to other data centers, freeing up resources to process more legitimate purges.</p><p>On top of that, generating the cache key for a file isn’t free; we need to load zone configuration options that might affect the cache key, apply various transformations, et cetera. The cache key for a given file is the same in every data center though, so when we purge the file locally we now return the generated cache key to the <b>Purge Ingest worker</b> and broadcast that key to other data centers instead of making each data center generate it itself.</p>
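<p>The filtering step can be pictured as a small function: drop purge URLs the requesting zone doesn't own or can't cache, and compute each surviving URL's cache key once so other data centers don't repeat the work. The zone model and key derivation below are simplified stand-ins, not the actual logic:</p>

```python
import hashlib

def filter_and_key(zone, urls):
    """Return {url: cache_key} for the URLs the zone owns and can cache."""
    results = {}
    for url in urls:
        host = url.split("/", 1)[0]
        if host not in zone["hostnames"]:
            continue  # the zone doesn't own this URL; the purge is superfluous
        if any(url.startswith(p) for p in zone["bypass_prefixes"]):
            continue  # a "bypass" cache rule means it was never cached
        # The real cache key depends on zone configuration; a hash stands in.
        results[url] = hashlib.sha256(url.encode()).hexdigest()[:16]
    return results

zone = {"hostnames": {"example.com"}, "bypass_prefixes": ["example.com/api/"]}
keys = filter_and_key(zone, [
    "example.com/style.css",    # kept: owned and cacheable
    "example.com/api/session",  # dropped: matches a bypass rule
    "cloudflare.com/",          # dropped: not owned by this zone
])
print(sorted(keys))  # ['example.com/style.css']
```

<p>Only the surviving keys are broadcast, so the rest of the network never spends cycles on purges that could never match cached content.</p>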
    <div>
      <h3>Step 3: Purges queued for broadcasting</h3>
      <a href="#step-3-purges-queued-for-broadcasting">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KE3gyiFfEntTtr8vMYODq/5c37d8a17ef2a34ad730893b21704e6b/image2-15.png" />
            
            </figure><p>Once the local purge is done the <b>Purge Ingest worker</b> forwards the purge request with the cache key obtained from the local cache to a <b>Purge Queue worker</b>. The queue worker is a <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects/">Durable Object</a> worker using its persistent state to hold a queue of purges it receives and pointers to how far along in the queue each data center in our network is in processing purges.</p><p>The queue is important because it allows us to automatically recover from a number of scenarios such as connectivity issues or data centers coming back online after maintenance. Having a record of all purges since an issue arose lets us replay those purges to a data center and “catch up”.</p><p>But Durable Objects are globally unique, so having one manage all global purges would have just moved our centrality problem from a core data center to wherever that Durable Object was provisioned. Instead we have dozens of Durable Objects in each region, and the <b>Purge Ingest worker</b> looks at the load balancing pool of Durable Objects for its region and picks one (often in the same data center) to forward the request to. The Durable Object will write the purge request to its queue and immediately loop through all the data center pointers and attempt to push any outstanding purges to each.</p><p>While benchmarking our performance we found our particular workload exhibited a “goldilocks zone” of throughput to a given Durable Object. On script startup we have to load all sorts of data like network topology and data center health–then refresh it continuously in the background–and as long as the Durable Object sees steady traffic it stays active and we amortize those startup costs. But if you ask a single Durable Object to do too much at once like send or receive too many requests, the single-threaded runtime won’t keep up. 
Regional purge traffic fluctuates a lot depending on local time of day, so there wasn’t a static quantity of Durable Objects per region that would let us stay within the goldilocks zone of enough requests to each to keep them active but not too many to keep them efficient. So we built load monitoring into our Durable Objects, and a <b>Regional Autoscaler worker</b> to aggregate that data and adjust load balancing pools when we start approaching the upper or lower edges of our efficiency goldilocks zone.</p>
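<p>The queue-plus-pointers idea can be sketched as follows: each data center has a cursor into an append-only queue, and a data center that was offline simply replays everything past its cursor when it returns. The names and structure are illustrative, not the Durable Object's actual state:</p>

```python
class PurgeQueue:
    """Toy model of a queue worker: purges plus per-data-center cursors."""

    def __init__(self, data_centers):
        self.queue = []  # ordered cache keys awaiting broadcast
        self.cursor = {dc: 0 for dc in data_centers}  # progress per DC

    def enqueue(self, cache_key):
        self.queue.append(cache_key)

    def outstanding(self, dc):
        """Purges this data center hasn't confirmed processing yet."""
        return self.queue[self.cursor[dc]:]

    def ack(self, dc, count):
        """Advance a data center's cursor once it confirms `count` purges."""
        self.cursor[dc] += count

q = PurgeQueue(["dc-healthy", "dc-offline"])
q.enqueue("key-1")
q.enqueue("key-2")
q.ack("dc-healthy", 2)              # the healthy DC processed both right away
q.enqueue("key-3")                  # the other DC was unreachable throughout
print(q.outstanding("dc-healthy"))  # ['key-3']
print(q.outstanding("dc-offline"))  # ['key-1', 'key-2', 'key-3'] to replay
```

<p>Because the queue records everything since a problem began, "catching up" is just pushing the slice past a data center's cursor rather than reconstructing lost state.</p>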
    <div>
      <h3>Step 4: Purges broadcast globally</h3>
      <a href="#step-4-purges-broadcast-globally">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wb88kH1WkS4aZjc8wWsk2/9c4ae3c695f9725f69c898bb6c8b8a2d/image1-22.png" />
            
            </figure><p>Once a purge request is queued by a <b>Purge Queue worker</b> it needs to be broadcast to the rest of Cloudflare’s data centers to be carried out by their caches. The Durable Objects will broadcast purges directly to all data centers in their region, but when broadcasting to other regions they pick a <b>Purge Fanout worker</b> per region to take care of their region’s distribution. The fanout workers manage queues of their own as well as pointers for all of their region’s data centers, and in fact they share a lot of the same logic as the <b>Purge Queue workers</b> in order to do so. One key difference is fanout workers aren’t Durable Objects; they’re normal worker scripts, and their queues are purely in memory as opposed to being backed by Durable Object state. This means not all queue worker Durable Objects are talking to the same fanout worker in each region. Fanout workers can be dropped and spun up again quickly by any metal in the data center because they aren’t canonical sources of state. They maintain queues and pointers for their region but all of that info is also sent back downstream to the Durable Objects who persist that data themselves, reliably.</p><p>But what does the fanout worker get us? Cloudflare has hundreds of <a href="https://www.cloudflare.com/learning/cdn/glossary/data-center/">data centers</a> all over the world, and as we mentioned above we benefit from keeping the number of incoming and outgoing requests for a Durable Object fairly low. Sending purges to a fanout worker per region means each Durable Object only has to make a fraction of the requests it would if it were broadcasting to every data center directly, which means it can process purges faster.</p><p>On top of that, occasionally a request will fail to get where it was going and require retransmission. 
When this happens between data centers in the same region, it’s largely unnoticeable, but when a Durable Object in Canada has to retry a request to a data center in rural South Africa, the cost of traversing that whole distance again is steep. The data centers elected to host fanout workers have the most reliable connections in their regions to the rest of our network. This minimizes the chance of inter-regional retries and limits the <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a> imposed by retries to regional timescales.</p><p>The introduction of the <b>Purge Fanout worker</b> was a massive improvement to our distribution system, reducing our end-to-end purge latency by 50% on its own and increasing our throughput threefold.</p>
    <div>
      <h2>Current status of coreless purge</h2>
      <a href="#current-status-of-coreless-purge">
        
      </a>
    </div>
    <p>We are proud to say our new purge system has been in production serving <a href="https://api.cloudflare.com/#zone-purge-files-by-url">purge by URL requests</a> since July 2022, and the results in terms of latency improvements are dramatic. In addition, <a href="https://api.cloudflare.com/#zone-purge-files-by-cache-tags,-host,-or-prefix">flexible purge requests</a> (purge by tag/prefix/host and purge everything) share and benefit from the new coreless purge system’s entrypoint workers before heading to a core data center for fulfillment.</p><p>The reason flexible purge isn’t fully coreless yet is that it’s a more complex task than “purge this object”; flexible purge requests can end up purging multiple objects–or even entire zones–from cache. They do this through an entirely different process that isn’t coreless compatible, so to make flexible purge fully coreless we would have needed to come up with an entirely new multi-purge mechanism on top of redesigning distribution. We chose instead to start with just purge by URL so we could focus purely on the most impactful improvements, revamping distribution, without reworking the logic a data center uses to actually remove an object from cache.</p><p>This is not to say that the flexible purges haven’t benefited from the coreless purge project. Our cache purge API lets users bundle single-file and flexible purges in one request, so the <b>API Gateway worker</b> and <b>Purge Ingest worker</b> handle authorization, authentication, and payload validation for flexible purges too. Those flexible purges get forwarded directly to our services in core data centers pre-authorized and validated, which reduces load on those core data center auth services. As an added benefit, because authorization and validity checks all happen at the edge for all purge types, users get much faster feedback when their requests are malformed.</p>
    <div>
      <h2>Next steps</h2>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>While coreless cache purge has come a long way since the part 1 blog post, we’re not done. We continue to work on reducing end-to-end latency even more for purge by URL because we can do better. Alongside improvements to our new distribution system, we’ve also been working on the redesign of flexible purge to make it fully coreless, and we’re really excited to share the results we’re seeing soon. Flexible cache purge is an incredibly popular API and we’re giving its refresh the care and attention it deserves.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7dDkNRw13IhqI87teGSw70</guid>
            <dc:creator>Zaidoon Abd Al Hadi</dc:creator>
        </item>
        <item>
            <title><![CDATA[CDN-Cache-Control: Precision Control for your CDN(s)]]></title>
            <link>https://blog.cloudflare.com/cdn-cache-control/</link>
            <pubDate>Fri, 21 May 2021 11:00:02 GMT</pubDate>
            <description><![CDATA[ A new set of HTTP response headers provide control over our CDN’s caching decisions. CDN-Cache-Control allows customers to control how our CDN behaves without affecting the behavior of other caches. ]]></description>
            <content:encoded><![CDATA[ <p>Today we are thrilled to announce our support of a new set of HTTP response headers that provide surgical control over our CDN’s caching decisions. <a href="https://datatracker.ietf.org/doc/html/draft-cdn-control-header-01">CDN-Cache-Control</a> allows customers to directly control how our <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a> behaves without affecting the behavior of downstream or upstream caches.</p><p>You might be thinking that this sounds a lot like the <a href="https://support.cloudflare.com/hc/en-us/articles/115003206852-Understanding-Origin-Cache-Control">Cache-Control</a> header we all know and love. And it’s very similar! CDN-Cache-Control has exactly the same directives as the Cache-Control header. The problem CDN-Cache-Control sets out to solve is that with Cache-Control, some directives are targeted at specific classes of caches (like <code>s-maxage</code> for shared caches), while other directives are not targeted at controlling any specific classes of intermediary caches (think <code>stale-while-revalidate</code>). As these non-specific directives are returned to downstream caches, they’re often not applied uniformly. This problem is amplified as the number of intermediary caches grows between an origin and the client.</p><p>For example, a website may deploy a caching layer on the origin server itself, there might be a cache on the origin’s network, the site might use one or more CDNs to cache content distributed throughout the Internet, and the visitor’s browser might cache content as well. As the response returns from the origin, each of these layers must interpret the set Cache-Control directives. These layers, however, are not guaranteed to interpret Cache-Control in the same way, which can cause unexpected behavior and confusion.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/767XtpC9Rywutdgs1h0SC6/844b566957308086e8610fc2ce2bd0ac/image2-6.png" />
            
            </figure><p>We set out to <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">solve these problems</a> and have been working with industry peers who also run large CDNs to create an <a href="https://datatracker.ietf.org/doc/html/draft-cdn-control-header-01">industry standard solution</a> through the Internet Engineering Task Force. CDN-Cache-Control is aimed at providing directives to manage how specific CDNs behave when caching objects.</p>
    <div>
      <h2>Introducing CDN-Cache-Control</h2>
      <a href="#introducing-cdn-cache-control">
        
      </a>
    </div>
    <p>CDN-Cache-Control is a response header field set on the origin to control the behavior of CDN caches separately from other intermediaries that might handle a response. This feature can be used by setting the CDN-Cache-Control and/or <b>Cloudflare-CDN-Cache-Control</b> response header. Both of these new headers support the same directives currently supported by Cache-Control and also have the same semantics and directive precedence. In other words, if you were to copy and paste a Cache-Control value and insert it into either of these new headers, the same caching behavior should be observed.</p>
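<p>For example, an origin might want browsers to cache a stylesheet briefly while Cloudflare caches it far longer. The directive values below are hypothetical:</p>

```http
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=60
Cloudflare-CDN-Cache-Control: max-age=86400, stale-while-revalidate=30
```

<p>Here Cloudflare would base its caching decision on the Cloudflare-CDN-Cache-Control header, while browsers and other downstream caches see only the Cache-Control header.</p>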
    <div>
      <h2>Header precedence; or, which header should I use when?</h2>
      <a href="#header-precedence-or-which-header-should-i-use-when">
        
      </a>
    </div>
    <p>When introducing a set of new cache response headers, a question at the forefront of the cache-conscious mind is: how will these directives work when combined with each other, or with other Cache-Control directives? There are several options depending on how these headers are used. An origin can:</p><ol><li><p>Return the CDN-Cache-Control response header, which Cloudflare will evaluate to make caching decisions. Cache-Control, if also returned by the origin, will be proxied as-is (more on this later) and will not affect caching decisions made by Cloudflare. CDN-Cache-Control will also be proxied downstream in case there are other CDNs between Cloudflare and the browser.</p></li><li><p>Return the Cloudflare-CDN-Cache-Control response header. This results in the same behavior as the origin returning CDN-Cache-Control, except we will NOT proxy Cloudflare-CDN-Cache-Control downstream because it’s a header only used to control Cloudflare. This is beneficial if you want only Cloudflare to have a different caching behavior while having all downstream servers rely on Cache-Control, or you simply don’t want Cloudflare to proxy the CDN-Cache-Control header downstream.</p></li><li><p>Return both Cloudflare-CDN-Cache-Control and CDN-Cache-Control response headers. In this case, Cloudflare will only look at Cloudflare-CDN-Cache-Control when making caching decisions, because it is the most specific version of CDN-Cache-Control, and will proxy CDN-Cache-Control downstream. Only forwarding CDN-Cache-Control in this situation is beneficial if you want Cloudflare to have a different caching behavior than other CDNs downstream.</p></li></ol><p>For example, a response leaving the origin server can hit the following caches on the way to the browser, each controlled by the corresponding response headers (assuming the other CDNs support CDN-Cache-Control):</p>
<div><table><thead>
  <tr>
    <th><span>Caches </span></th>
    <th><span>Control Headers</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Origin Server Cache</span></td>
    <td><span>Cache-Control </span></td>
  </tr>
  <tr>
    <td><span>Shared Cache on the Origin Network</span></td>
    <td><span>Cache-Control </span></td>
  </tr>
  <tr>
    <td><span>CDN #1</span></td>
    <td><span>CDN-Cache-Control</span></td>
  </tr>
  <tr>
    <td><span>Cloudflare CDN</span></td>
    <td><span>Cloudflare-CDN-Cache-Control/CDN-Cache-Control</span></td>
  </tr>
  <tr>
    <td><span>CDN #N</span></td>
    <td><span>CDN-Cache-Control</span></td>
  </tr>
  <tr>
    <td><span>Browser Cache</span></td>
    <td><span>Cache-Control </span></td>
  </tr>
</tbody></table></div><p>With these headers and directives set, intermediaries know whether it’s safe for something to be cached, how long it should be cached, and what to do once it’s no longer permitted to remain in cache.</p>
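    <p>The three options above reduce to a simple precedence rule. As an illustration (a simplified model, not Cloudflare’s actual implementation), the header Cloudflare consults and the headers it forwards downstream can be sketched as:</p>

```python
# Simplified model of the precedence rules described above: which header
# Cloudflare uses for its own caching decisions, and which headers it
# proxies downstream. Illustrative only, not Cloudflare's code.
def cloudflare_precedence(origin_headers: dict) -> dict:
    if "Cloudflare-CDN-Cache-Control" in origin_headers:
        decides = "Cloudflare-CDN-Cache-Control"   # most specific wins
    elif "CDN-Cache-Control" in origin_headers:
        decides = "CDN-Cache-Control"
    else:
        decides = "Cache-Control"
    # Cloudflare-CDN-Cache-Control is consumed at Cloudflare and never
    # forwarded; Cache-Control and CDN-Cache-Control pass through as-is.
    forwarded = [h for h in origin_headers if h != "Cloudflare-CDN-Cache-Control"]
    return {"decides": decides, "forwarded": forwarded}
```

    <p>For example, with all three headers set (option 3), this model decides based on Cloudflare-CDN-Cache-Control and forwards only Cache-Control and CDN-Cache-Control.</p>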
    <div>
      <h2>Interaction with other Cloudflare features</h2>
      <a href="#interaction-with-other-cloudflare-features">
        
      </a>
    </div>
    
    <div>
      <h3>Edge Cache TTL Page Rule</h3>
      <a href="#edge-cache-ttl-page-rule">
        
      </a>
    </div>
    <p>Edge Cache TTL is a <a href="https://support.cloudflare.com/hc/en-us/articles/218411427-Understanding-and-Configuring-Cloudflare-Page-Rules-Page-Rules-Tutorial-#h_18YTlvNlZET4Poljeih3TJ">page rule</a> that overrides the amount of time an asset is cached on the edge (Cloudflare data centers), and therefore overrides any Cloudflare-CDN-Cache-Control/CDN-Cache-Control directives that manage how long an asset is cached on the edge. This page rule can be set in the Rules section of the dashboard.</p>
    <div>
      <h3>Browser Cache TTL Page Rule</h3>
      <a href="#browser-cache-ttl-page-rule">
        
      </a>
    </div>
    <p><a href="https://support.cloudflare.com/hc/en-us/articles/200168276-Understanding-Browser-Cache-TTL">Browser Cache TTL</a> is a page rule that is meant to override the amount of time an asset is cached by browsers/servers downstream of Cloudflare. Therefore, Browser Cache TTL will only modify the <b>Cache-Control</b> response header. Cloudflare-CDN-Cache-Control/CDN-Cache-Control response headers will not be modified by this page rule.</p>
    <div>
      <h3>Interaction With Other Origin Response Headers</h3>
      <a href="#interaction-with-other-origin-response-headers">
        
      </a>
    </div>
    <p>The Expires response header returned by the origin, which generally tells a browser when an object should be considered stale, will not affect the caching decision at Cloudflare when Cloudflare-CDN-Cache-Control/CDN-Cache-Control is being used.</p>
    <div>
      <h3>Interaction with Cloudflare Default Cache Values</h3>
      <a href="#interaction-with-cloudflare-default-cache-values">
        
      </a>
    </div>
    <p>In the situation where Cloudflare does not receive Cloudflare-CDN-Cache-Control, CDN-Cache-Control, or Cache-Control values, the general <a href="https://support.cloudflare.com/hc/en-us/articles/200172516#h_51422705-42d0-450d-8eb1-5321dcadb5bc">default values</a> will be used for cacheable assets.</p>
    <div>
      <h2>When should I use CDN-Cache-Control?</h2>
      <a href="#when-should-i-use-cdn-cache-control">
        
      </a>
    </div>
    <p>Caching is one of the most powerful tools available to ensure all possible requests are served from data centers near visitors to improve a website’s performance and limit origin egress. Cache-Control directives are the rules that dictate how caches should behave: how long something should stay in cache, and what to do once that content has expired. However, when there are numerous caching layers between the origin and the client, getting the desired control over each hop a response makes on its way back to the client is complicated. This difficulty is exacerbated when intermediary caches unpredictably proxy or strip the cache control headers sent downstream.</p><p>Let’s walk through a few examples of how to use CDN-Cache-Control:</p>
    <div>
      <h3>Acme Corp</h3>
      <a href="#acme-corp">
        
      </a>
    </div>
    <p>Acme Corp is a user of Cloudflare’s CDN. They want to manage their cached assets’ TTLs separately for origin caches, CDN caches, and browser caches. Previously, Page Rules were required to specify their desired behavior. Now with CDN-Cache-Control, this common scenario can be accomplished solely through the use of origin-set response headers.</p><p><b>Before</b></p><p>Headers:</p><ul><li><p>Cache-Control: max-age=14400, s-maxage=84000</p></li><li><p>Set an Edge Cache TTL Page Rule on Cloudflare for 24400 seconds fixed to the asset’s path</p></li></ul><p>Cache Behavior:</p>
<div><table><thead>
  <tr>
    <th><span>Caches</span></th>
    <th><span>Cache TTL (seconds)</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Origin Server Cache</span></td>
    <td><span>14400</span></td>
  </tr>
  <tr>
    <td><span>Cloudflare Edge</span></td>
    <td><span>24400</span></td>
  </tr>
  <tr>
    <td><span>Other CDNs</span></td>
    <td><span>84000</span></td>
  </tr>
  <tr>
    <td><span>Browser Cache</span></td>
    <td><span>14400</span></td>
  </tr>
</tbody></table></div><p><b>Now (no need for Page Rule configuration, and can set different TTLs on different CDNs)</b></p><p>Headers:</p><ul><li><p>Cache-Control: max-age=14400, s-maxage=84000</p></li><li><p>Cloudflare-CDN-Cache-Control: max-age=24400</p></li><li><p>CDN-Cache-Control: max-age=18000</p></li></ul><p>Cache Behavior:</p>
<div><table><thead>
  <tr>
    <th><span>Caches</span><span> </span></th>
    <th><span>Cache TTL (seconds)</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Origin Server Cache</span></td>
    <td><span>14400</span></td>
  </tr>
  <tr>
    <td><span>Network Shared Cache</span></td>
    <td><span>84000</span></td>
  </tr>
  <tr>
    <td><span>Cloudflare Edge</span></td>
    <td><span>24400</span></td>
  </tr>
  <tr>
    <td><span>Other CDNs</span></td>
    <td><span>18000</span></td>
  </tr>
  <tr>
    <td><span>Browser Cache</span></td>
    <td><span>14400</span></td>
  </tr>
</tbody></table></div>
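    <p>To make the mapping concrete, here is a sketch (under the stated assumption that downstream CDNs honor CDN-Cache-Control, and with the CDN header carrying <code>max-age=18000</code>) of which directive governs the TTL at each layer in the table above:</p>

```python
# Which directive governs the cache TTL at each layer in the Acme setup.
# A sketch assuming downstream CDNs honor CDN-Cache-Control; not
# Cloudflare's implementation.
headers = {
    "Cache-Control": "max-age=14400, s-maxage=84000",
    "Cloudflare-CDN-Cache-Control": "max-age=24400",
    "CDN-Cache-Control": "max-age=18000",
}

def directive(header: str, name: str) -> int:
    """Extract an integer directive value (e.g. max-age) from a header."""
    for part in headers[header].split(","):
        key, _, value = part.strip().partition("=")
        if key == name:
            return int(value)
    raise KeyError(name)

ttls = {
    "Origin Server Cache": directive("Cache-Control", "max-age"),       # private max-age
    "Network Shared Cache": directive("Cache-Control", "s-maxage"),     # shared caches
    "Cloudflare Edge": directive("Cloudflare-CDN-Cache-Control", "max-age"),
    "Other CDNs": directive("CDN-Cache-Control", "max-age"),
    "Browser Cache": directive("Cache-Control", "max-age"),
}
```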
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/748TZ8fC9P7Fpcc7HlDQHS/53fe73aaad00de84a6cd0e7eb87e0bde/image3-7.png" />
            
            </figure>
    <div>
      <h3>ABC Industries</h3>
      <a href="#abc-industries">
        
      </a>
    </div>
    <p>ABC Industries uses multiple CDNs stacked together serially and wants cache-specific control over when to serve stale content in the case of an error or during revalidation. This can more easily be expressed by using the new CDN-Cache-Control headers in combination with Cache-Control headers.</p><p>Previously, there was no way to specify which intermediaries should serve stale content: it was up to each cache to decide whether the directive applied to it and whether it should pass the header downstream. The new headers provide CDN-specific control over when to use stale assets to fulfill requests.</p><p><b>Before</b></p><p>Headers:</p><ul><li><p>Cache-Control: stale-if-error=400</p></li></ul><p>Behavior in response to 5XX Error:</p>
<div><table><thead>
  <tr>
    <th><span>Caches</span></th>
    <th><span>Stale served (seconds) in response to error</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Origin Cache Layer</span></td>
    <td><span>400 (if it assumes the directive applies)</span></td>
  </tr>
  <tr>
    <td><span>Cloudflare Edge</span></td>
    <td><span>400 (we assume the directive applies if we get it from upstream) </span></td>
  </tr>
  <tr>
    <td><span>Unknown CDN/Network caches/Browser Cache</span></td>
    <td><span>0 (if they assume the directive doesn’t apply or they don’t get it from upstream); or 400 (if they assume it does apply)</span></td>
  </tr>
</tbody></table></div><p><b>Now (explicit indication of when directives apply to CDNs)</b></p><p>Headers:</p><ul><li><p>Cache-Control: stale-if-error=400</p></li><li><p>Cloudflare-CDN-Cache-Control: stale-if-error=60</p></li><li><p>CDN-Cache-Control: stale-if-error=200</p></li></ul><p>Behavior in response to 5XX Error:</p>
<div><table><thead>
  <tr>
    <th><span>Caches</span></th>
    <th><span>Stale served (seconds) in response to error</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Origin Cache Layer/Network Cache/Browser Cache</span></td>
    <td><span>400 (if it assumes the directive applies)</span></td>
  </tr>
  <tr>
    <td><span>Cloudflare Edge</span></td>
    <td><span>60</span></td>
  </tr>
  <tr>
    <td><span>Other CDN</span></td>
    <td><span>200</span></td>
  </tr>
</tbody></table></div>
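    <p>The per-layer stale windows in the table above follow the same resolution order as the cache TTLs. A sketch (assuming each cache honors the most specific header that targets it; not Cloudflare’s implementation):</p>

```python
# How many seconds of staleness each cache layer may serve on a 5XX error
# in the ABC Industries setup. A sketch assuming every cache honors the
# most specific header that targets it.
headers = {
    "Cache-Control": "stale-if-error=400",
    "Cloudflare-CDN-Cache-Control": "stale-if-error=60",
    "CDN-Cache-Control": "stale-if-error=200",
}

def stale_window(cache: str) -> int:
    if cache == "cloudflare":        # Cloudflare prefers its own header
        source = headers["Cloudflare-CDN-Cache-Control"]
    elif cache == "other-cdn":       # other CDNs use the generic CDN header
        source = headers["CDN-Cache-Control"]
    else:                            # origin-side caches and browsers
        source = headers["Cache-Control"]
    return int(source.partition("=")[2])
```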
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6RDiwoaXQRmDRkSNhIi8NA/def3789cd66097c3140e09ef3b51896e/image4-10.png" />
            
            </figure>
    <div>
      <h2>Try it out!</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p><b>Overall, CDN-Cache-Control allows finer-grained control over how Cloudflare manages cache lifetimes and revalidation behavior on a per-asset basis.</b></p><p>If you’re looking for more control over how your CDNs cache objects, I encourage you to try these new headers out. And if you’re another CDN reading this, I recommend looking to add support for <a href="https://datatracker.ietf.org/doc/html/draft-cdn-control-header-01">CDN-Cache-Control</a>!</p> ]]></content:encoded>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4HgNrGAchRxNQH8Zqphswv</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Zaidoon Abd Al Hadi</dc:creator>
        </item>
    </channel>
</rss>