
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 21:30:17 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Introducing Observatory and Smart Shield — see how the world sees your website, and make it faster in one click]]></title>
            <link>https://blog.cloudflare.com/introducing-observatory-and-smart-shield/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're announcing two enhancements to our Application Performance suite that'll show you how the world sees your website, and make it faster in one click, available now in the Cloudflare Dashboard! ]]></description>
            <content:encoded><![CDATA[ <p>Modern users expect instant, reliable web experiences. When your application is slow, they don’t just complain — they leave. Even delays as small as 100 ms have been <a href="https://wpostats.com/"><u>shown to have a measurable impact on revenue, conversions, bounce rate, engagement and more</u></a>. </p><p>If you’re responsible for delivering on these expectations to the users of your product, you know there are many monitoring tools that show you how visitors experience your website, and can let you know when things are slow or causing issues. This is essential, but we believe understanding the problem is only half the story. The real value comes from integrating monitoring and remedies in the same view, giving customers the ability to quickly identify and resolve issues.</p><p>That's why today, we're excited to launch the new and improved <b>Observatory</b>, now in open beta. This monitoring and <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> tool goes beyond charts and graphs by telling you exactly how to improve your application's performance and resilience, and immediately showing you the impact of those changes. And we’re releasing it to all subscription tiers (including Free!), available today. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/T6HhZL51aLEhzD3lQPxCq/f9b03f05cf4db0b2e61c8e861df4ecdf/1.png" />
          </figure><p>But wait, there’s more! To make your users’ experience even faster, we’re launching Smart Shield, available today for all subscription tiers. Using Observatory, you can pinpoint performance bottlenecks, and for many of the most common issues, you can now apply the fix in just a few clicks with our <b>Smart Shield</b> product. Double the fun!</p>
    <div>
      <h2>Our unique perspective: leveraging data from 20% of the web</h2>
      <a href="#our-unique-perspective-leveraging-data-from-20-of-the-web">
        
      </a>
    </div>
    <p>Every day, Cloudflare handles traffic for over 20% of the web, giving us a unique vantage point into what makes websites faster and more resilient. We built Observatory to take advantage of this position, uniting data that is normally scattered across different tools — including real-user data, synthetic testing, error rates, and backend telemetry — into a single platform. This gives you a complete, cohesive picture of your application's health end-to-end, in one spot, and enables you to easily identify and resolve performance issues.</p><p>For this launch, we're bringing together:</p><ul><li><p><b>Real-user data:</b> See how your application performs for real people, in the real world.</p></li><li><p><b>Back-end telemetry:</b> Break down the lifecycle of a request to pinpoint areas for improvement.</p></li><li><p><b>Error rates:</b> Understand the stability of your application at both the edge and origin.</p></li><li><p><b>Cache hit ratios:</b> Ensure you're maximizing the performance of your configuration.</p></li><li><p><b>Synthetic testing:</b> Proactively test and monitor key endpoints with powerful, accurate simulations.</p></li></ul><p>Let's take a quick look at each data set to see how we use them in Observatory.</p>
    <div>
      <h2>Real-user data</h2>
      <a href="#real-user-data">
        
      </a>
    </div>
    <p>There are two primary forms of data collection: real-user data and synthetic data. Real-user data are performance metrics collected from real traffic, from real visitors, to your application. It’s how users are <i>actually</i> seeing your application perform in the real world. It’s unpredictable, and covers every scenario.</p><p>Synthetic data is data collected using some sort of simulated test (loading a site in a headless browser, making network requests from a testing system to an endpoint, etc.). Tests are run under a predefined set of characteristics — location, network speed, etc. — to provide a consistent baseline.</p><p>Both forms of data have their uses, and companies with a strongly established culture of operational excellence tend to use both.</p><p>The first data you’ll see when you visit Observatory is real-user data collected with <a href="https://www.cloudflare.com/web-analytics/"><u>Real User Monitoring (RUM)</u></a>, with a particular focus on the <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/"><u>Core Web Vital</u></a> metrics.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/400NHp7OBcSXNmLi5AxXb8/641f6436574e040bfbc14b56c7bfcd70/1.5.png" />
          </figure><p>This is very intentional.</p><p>Real-user data should be the source of truth when it comes to measuring performance and resiliency of your application. Even the best synthetic data sources are always going to be an approximation. They cannot cover every possible scenario, and because they are run from a lab environment, they will not always reveal issues that may be more sporadic and unpredictable.</p><p>Real-user metrics are also the best representation of what your users are experiencing when they access your site and, at the end of the day, that’s why we focus on improving performance, resiliency, and security for our users.</p><p>We believe so strongly in the importance of every company having access to accurate, detailed RUM data that we are providing it for free, to all accounts. In fact, we’re about to make our <a href="https://www.cloudflare.com/web-analytics/#:~:text=Privacy%20First"><u>privacy-first analytics</u></a> — which doesn’t track individual users — <a href="https://blog.cloudflare.com/the-rum-diaries-enabling-web-analytics-by-default/"><u>available by default for all free zones</u></a> (<b>excluding data from EU or UK visitors</b>), no setup necessary. We believe the right thing is arming everyone with detailed, actionable, real-user data, and we want to make it easy.</p>
    <div>
      <h2>Backend telemetry</h2>
      <a href="#backend-telemetry">
        
      </a>
    </div>
    <p>Front-end performance metrics are our best proxy for understanding the actual user experience of an application, and as a result, they work great as key performance indicators (KPIs).</p><p>But they’re not enough. Every primary metric should have some level of supporting diagnostic metrics that help us understand <i>why</i> our user metrics are performing like they are — so that we can quickly identify issues, bottlenecks, and areas of improvement.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Un8yQdUf9DZw05gfS5WVs/187901b7e636cec35655ff954b1f38c4/2.png" />
          </figure><p>While the industry has largely, and rightfully, moved on from Time to First Byte (TTFB) as a primary metric of focus, it still has value as a diagnostic metric. In fact, we analyzed our RUM data and found a very strong connection between <a href="https://developers.cloudflare.com/speed/observatory/test-results/#synthetic-tests-and-real-user-monitoring-metrics"><u>Time to First Byte and Largest Contentful Paint</u></a>.</p><p>Google’s recommended thresholds for Time to First Byte are:</p><ul><li><p>Good: &lt;= 800ms</p></li><li><p>Needs Improvement: &gt; 800ms and &lt;= 1800ms</p></li><li><p>Poor: &gt; 1800ms</p></li></ul><p>Similarly, their official thresholds for Largest Contentful Paint are:</p><ul><li><p>Good: &lt;= 2500ms</p></li><li><p>Needs Improvement: &gt; 2500ms and &lt;= 4000ms</p></li><li><p>Poor: &gt; 4000ms</p></li></ul><p>Looking across over 9 billion events, we found that when compared to the average site, sites with a “poor” (&gt;1800ms) TTFB are:</p><ul><li><p>70.1 percentage points less likely to have a “good” LCP</p></li><li><p>21.9 percentage points more likely to have a “needs improvement” LCP</p></li><li><p>48.2 percentage points more likely to have a “poor” LCP</p></li></ul><p>TTFB is an ill-defined black box, so we’re making a point to break it down into its various subparts so you can quickly pinpoint whether the issue is with connection establishment, server response time, the network itself, or something else. We’ll be working to break this down even further in the coming months as we expose the complete lifecycle of a request so you’re able to pinpoint <i>exactly</i> where the bottlenecks lie.</p>
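To make the thresholds above concrete, here is a minimal Python sketch (illustrative only, not Cloudflare's code) that classifies TTFB and LCP samples into Google's three buckets:

```python
# Illustrative sketch only: classify TTFB and LCP measurements against the
# Google-recommended thresholds quoted above.
def classify_ttfb(ms: float) -> str:
    """Good <= 800 ms; needs improvement <= 1800 ms; poor above that."""
    if ms <= 800:
        return "good"
    if ms <= 1800:
        return "needs improvement"
    return "poor"

def classify_lcp(ms: float) -> str:
    """Good <= 2500 ms; needs improvement <= 4000 ms; poor above that."""
    if ms <= 2500:
        return "good"
    if ms <= 4000:
        return "needs improvement"
    return "poor"

print(classify_ttfb(750))   # good
print(classify_lcp(3000))   # needs improvement
```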
    <div>
      <h2>Errors &amp; cache ratios</h2>
      <a href="#errors-cache-ratios">
        
      </a>
    </div>
    <p>Degradation in stability and performance are frequently directly connected to configuration changes or an increase in errors. Clear visibility into these characteristics can often cut right to the heart of the issue at hand, as well as point to opportunities for improvement of the overall efficiency and effectiveness of your application.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6j89m6ONeXh9v6XL35YJjn/1d65ac83476971fc42fccc2980bc79ff/3.png" />
          </figure><p>Observatory prominently surfaces cache hit ratio and error rates for <i>both</i> the edge and origin. This complements the backend telemetry nicely, and helps to further break down the backend metrics you are seeing to help pinpoint areas of improvement.</p><p>Take cache hit ratio for example. Intuitively, we know that when content is served from cache on an edge server, it should be faster than when the request has to go all the way back to the origin server. Based on our data, again, that’s exactly what we see.</p><p>If we consider our Time to First Byte thresholds again (good is &lt;= 800ms; needs improvement is &gt; 800ms and &lt;= 1800ms; poor is anything over 1800ms), when looking across 9 billion data points as collected by our RUM solution, we see that a whopping <b>91.7% of all pages served from Cloudflare’s cache have a “good” TTFB compared to 79.7% when the request has to be served from the origin server</b>.</p><p>In other words, optimizing origin performance (more on that in a bit) and moving more content to the edge are sure-fire ways to give you a much stronger performance baseline.</p>
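As a back-of-the-envelope illustration of why the cache hit ratio matters, this hypothetical Python sketch blends the observed good-TTFB rates quoted above (91.7% for cache-served pages, 79.7% for origin-served) by a zone's cache hit ratio; the weighted-average model is our assumption for illustration, not Cloudflare's methodology:

```python
# Hypothetical back-of-the-envelope model (not Cloudflare's methodology):
# blend the observed good-TTFB rates by a zone's cache hit ratio.
GOOD_TTFB_CACHE = 0.917   # share of cache-served pages with a "good" TTFB
GOOD_TTFB_ORIGIN = 0.797  # share of origin-served pages with a "good" TTFB

def estimated_good_ttfb(cache_hit_ratio: float) -> float:
    """Weighted average of the two observed rates."""
    return cache_hit_ratio * GOOD_TTFB_CACHE + (1 - cache_hit_ratio) * GOOD_TTFB_ORIGIN

# Raising the cache hit ratio from 50% to 90% lifts the estimated good-TTFB share:
print(round(estimated_good_ttfb(0.5), 3))  # 0.857
print(round(estimated_good_ttfb(0.9), 3))  # 0.905
```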
    <div>
      <h2>Accurate and detailed synthetic testing</h2>
      <a href="#accurate-and-detailed-synthetic-testing">
        
      </a>
    </div>
    <p>While real-user data is our source of truth, synthetic testing and monitoring is important as well. Because tests are run in a more controlled environment (test from this location, at this time, with this criteria, etc.), the resulting data is a lot less noisy and variable. In addition, because there is not a user involved and we don’t have to worry about any observer effect, synthetic tests are able to grab a lot more information about the request and page lifecycle.</p><p>As a result, synthetic data tends to work very well for arming engineers with debugging information, as well as providing a cleaner set of data for comparing and contrasting results across different platforms, releases, and other situations.</p><p>Observatory provides two different types of synthetic tests.</p><p>The first synthetic test is a browser test. A browser test will load the requested page in a headless browser, run <a href="https://developer.chrome.com/docs/lighthouse"><u>Google’s Lighthouse</u></a> on it to report on key performance metrics, and provide some light suggestions for improvement. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cvDSWqtBTMibgYysDgEoI/43cd0c684d3705fe021f588674a91cf6/4.png" />
          </figure><p>The second type of synthetic test Observatory provides is a network test. This is a brand new test type in Cloudflare, and is focused on giving you a better breakdown of the network and back-end performance of an endpoint.</p><p>Each network test will hit the provided endpoint for the test and record the wait time, server response time, connect time, SSL negotiation time, and total load time for the endpoint response. Because these tests are much more targeted, a single test in itself is not as valuable and can be prone to variation. That variation isn’t necessarily a bad thing—in fact, variability in these results can actually give you a better understanding of the breadth of results when real users hit that same endpoint.</p><p>For that reason, network tests trigger a series of individual runs against the provided endpoint spread out over a short period of time. The data for each response is recorded, and then presented as a histogram on the test results page, letting you see not just a single datapoint, but the long and short-tail of each metric. This gives you a much more accurate representation of reality than what a single test run can provide.</p>
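The aggregation described above, where many short runs are summarized as a distribution rather than a single number, can be sketched in Python as follows; the bucket width and sample values are invented for illustration and are not Observatory's actual implementation:

```python
# Illustrative aggregation (assumed shape, not Observatory's implementation):
# summarize repeated network-test samples as histogram buckets plus a median
# and long-tail percentile, rather than trusting a single run.
from statistics import quantiles

def bucketize(samples_ms, bucket_width=50):
    """Count samples per fixed-width latency bucket (key = bucket lower bound)."""
    hist = {}
    for s in samples_ms:
        lo = (int(s) // bucket_width) * bucket_width
        hist[lo] = hist.get(lo, 0) + 1
    return dict(sorted(hist.items()))

samples = [110, 115, 130, 145, 150, 160, 210, 480]  # made-up TTFB samples (ms)
print(bucketize(samples))          # most runs cluster low; one long-tail outlier
deciles = quantiles(samples, n=10)
p50, p90 = deciles[4], deciles[8]  # median and 90th percentile
print(p50, p90)
```

The histogram makes the short tail (the cluster of fast runs) and the long tail (the one slow outlier) visible at a glance, which a single averaged number would hide.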
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gCWSp0HCTd4iJ0rTKpEpk/a610e47596eedd6b8cedf73dfcde09ca/5.png" />
          </figure><p>You are also able to compare network tests in Observatory, by selecting two network tests that have been completed. Again, all the data points for each test will be provided in a histogram, where you can easily compare the results of the two.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mG2bRanAGzltvkucJImue/11f56a4d3c3af4cd2a65dab834a0f0af/6.png" />
          </figure><p>We are working on improving both synthetic test types in Q4 2025, focusing on making them more powerful and diagnostic.</p><p>As we mentioned before, even at its best, synthetic data is an approximation of what is actually happening. Accuracy is critical. Inaccurate data can distract teams with variability and faulty measurements.</p><p>It’s important that these tools are as accurate and true to the real world as possible. It’s also important to us that we give back to the community, both because it’s the right thing to do, and because we believe the best way to have the highest level of confidence in the measurement tools and frameworks we’re using is the rigor and scrutiny that open-source provides.</p><p>For those reasons, we’ll be working on open-sourcing many of the testing agents we’re using to power Observatory. We’ll share more on that soon, as well as more details about how we’ve built each different testing tool, and why.</p>
    <div>
      <h2>Doing something about it: Smart Suggestions</h2>
      <a href="#doing-something-about-it-smart-suggestions">
        
      </a>
    </div>
    <p>People don’t measure for the sake of having data and pretty charts. They measure because they want to be able to stay on top of the health of their application and find ways to improve it. Data is easy. Understanding what to do about the data you’re presented with is both the hardest and most important part.</p><p>Monitoring without action is useless.</p><p>We’re building Observatory to have a <i>relentless</i> focus on actionability. Before any new metric is presented, we take some time to explore why that metric matters, when it’s something worth addressing, and what actions you should take if those metrics need improvement.</p><p>All of that leads us to our new Smart Suggestions. Wherever possible, we want to pair each metric with a set of opinionated, data-driven suggestions for how to make things better. We want to avoid vague hand-wavy advice and instead be prescriptive, specific, and precise.</p><p>For example, let’s look at one particular recommendation we provide around improving Largest Contentful Paint.</p><p>Largest Contentful Paint is a core web vital metric that measures when the largest piece of content is displayed on the screen. That piece of content could be an image, video, or text.</p><p>Much like TTFB, Largest Contentful Paint is a bit of a black box by itself. While it tells us how long it takes for that content to get on screen, there are a large number of potential bottlenecks that could be causing the delay. Perhaps the server response time was very slow. Or maybe there was something blocking the content from being displayed on the page. If the object was an image or video, perhaps the filesize was large and the resulting download was slow. LCP by itself doesn’t give us that level of granularity, so it’s hard to give more than hand-wavy guidance on how to address it.</p><p>Thankfully, just like we can break TTFB into subparts, we can break LCP into its subparts as well. 
Specifically, we can look at:</p><ul><li><p>Time to First Byte: how quickly the server responds to the request for HTML</p></li><li><p>Resource Load Delay: how long it takes after TTFB for the browser to discover the LCP resource</p></li><li><p>Resource Load Duration: how long it takes for the browser to download the LCP resource</p></li><li><p>Render Delay: how long it takes the browser to render the content once it has the resource in hand</p></li></ul><p>Breaking it down into these subparts, we can be much more diagnostic about what to do.</p>
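The subpart breakdown is simple arithmetic over timestamps. In this illustrative Python sketch the field names are hypothetical (real values would come from RUM timings), but by construction the four subparts sum back to the total LCP value:

```python
# Illustrative sketch: the four LCP subparts are differences between
# timestamps. Field names here are hypothetical; real values would come
# from RUM timings (all in milliseconds since navigation start).
def lcp_subparts(ttfb, resource_start, resource_end, lcp_render):
    return {
        "ttfb": ttfb,
        "resource_load_delay": resource_start - ttfb,
        "resource_load_duration": resource_end - resource_start,
        "render_delay": lcp_render - resource_end,
    }

parts = lcp_subparts(ttfb=600, resource_start=900, resource_end=1800, lcp_render=2100)
print(parts)
# By construction, the subparts sum back to the total LCP time:
print(sum(parts.values()))  # 2100
```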
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qfKPLaTGTjJjhawTVoWAi/10ce739e376cabd7c468adfa280246dd/7.png" />
          </figure><p>In the example above, our recommendation engine analyzes the site's real-user data and notices that Resource Load Delay accounts for over 10% of total LCP time. As a result, there’s a high likelihood that the resource triggering LCP is large and could potentially be compressed to reduce file size. So we make a recommendation to enable compression using <a href="https://developers.cloudflare.com/images/polish/"><u>Polish</u></a>.</p><p>We’re very excited about the impact these suggestions will have on helping everyone quickly zero in on meaningful solutions for improving performance and resiliency, without having to wade through mountains of data to get there. As we analyze data, we’ll find more and more patterns of problems and the solutions they can map to. Expanding on our Smart Suggestions will be a constant and ongoing focus as we move forward, and we are working on adding much more content about those patterns and what we find in Q4.</p>
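A recommendation rule of the kind described above could be sketched like this; the 10% threshold mirrors the example in the text, while the function shape and the suggestion wording are our own illustrative assumptions, not Cloudflare's recommendation engine:

```python
# Hypothetical rule sketch (not Cloudflare's recommendation engine): flag a
# suggestion when Resource Load Delay exceeds 10% of total LCP, mirroring the
# example above. The suggestion text is illustrative.
def suggest_for_lcp(subparts_ms: dict) -> list:
    total = sum(subparts_ms.values())
    suggestions = []
    if subparts_ms["resource_load_delay"] > 0.10 * total:
        suggestions.append("Resource Load Delay is high: consider enabling "
                           "Polish to compress the LCP resource.")
    return suggestions

suggestions = suggest_for_lcp({"ttfb": 600, "resource_load_delay": 400,
                               "resource_load_duration": 900, "render_delay": 300})
print(suggestions)  # the rule fires: 400 ms is ~18% of the 2200 ms total
```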
    <div>
      <h2>Fixing the biggest pain point: Smart Shield</h2>
      <a href="#fixing-the-biggest-pain-point-smart-shield">
        
      </a>
    </div>
    <p>Observatory gives you unprecedented insight into your application's health, but insights are only half the battle. The next challenge is acting on them, which brings us to another layer of complexity: protecting your origin. For many of our customers, proper management of origin routes and connections is one of the largest drivers of aggregate overall performance. As we mentioned before, we see a clear negative impact on user-facing performance metrics when we have to go back to the origin, and we want to make it as easy as possible for our customers to improve those experiences. Achieving this requires protecting against unnecessary load while ensuring only trusted traffic reaches your servers.</p><p>Today's customers have powerful tools to protect their origins, but achieving basic use cases remains frustratingly complex:</p><ul><li><p>Making applications faster</p></li><li><p>Reducing origin load</p></li><li><p>Understanding origin health issues</p></li><li><p>Restricting IP address access to origin servers</p></li></ul><p>These fundamental needs currently require navigating multiple APIs and dashboard settings. You shouldn't need to become an expert in each feature — we should analyze your traffic patterns and provide clear, actionable solutions.</p>
    <div>
      <h2>Smart Shield: the future of origin shielding</h2>
      <a href="#smart-shield-the-future-of-origin-shielding">
        
      </a>
    </div>
    <p>Smart Shield transforms origin protection from a complex, multi-tool challenge into a streamlined, intelligent solution that works on your behalf. Our unified API and UI combines all origin protection essentials — dynamic traffic acceleration, intelligent caching, health monitoring, and dedicated egress IPs — into one place that enables single-click configuration.</p><p>But we didn't stop at simplification. Smart Shield integrates with <b>Observatory</b> to provide both the <b>“what” </b>— identifying performance bottlenecks and health issues — and the <b>“how” </b>— delivering capabilities that increase performance, availability, and security.</p><p>This creates a continuous feedback loop: Observatory identifies problems, Smart Shield provides solutions, and real-time analytics verify the impact. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OI8AZzHo5kW4mesYsqM7Z/e08a5961deda6246a8d4fb906f2f5483/8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6blpvetS2fS0CNAvu1lnp2/c16e1a330c2c260df4920f85b1650917/9.png" />
          </figure><p>But what does this mean for you? </p><ul><li><p>Reduce total cost of ownership (TCO)</p></li><li><p>Reduce the time-to-value (TTV) for performance, availability, and security issues pertaining to customer origins</p></li><li><p>Enable new features without guesswork and validate effectiveness in the data</p></li></ul><p>Your time stays focused on building incredible user experiences, not becoming a configuration expert. We are excited to give you back time for your customers and your engineers, while paving the way for how you make sure your origin infrastructure is easily optimized to delight your customers. </p>
    <div>
      <h2>Protecting and accelerating origins with smart Connection Reuse</h2>
      <a href="#protecting-and-accelerating-origins-with-smart-connection-reuse">
        
      </a>
    </div>
    <p>Keeping your origins fast and stable is a big part of what we do at Cloudflare. When you experience a traffic surge, the last thing you want is for a flood of <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshakes</u></a> to knock your origin down, or for those new connections to stall your requests, leaving your users to wait for slow pages to load.</p><p>This is why we’ve made significant changes to how Cloudflare’s network talks to your origins to dramatically improve the performance of our origin connections. </p><p>When Cloudflare makes requests to your origins, we make them from a subset of the available machines in every Cloudflare data center so that we can improve your connection reuse. Until now, this pool would be sized the same by default for every application within a data center, and changes to the sizing of the pool for a particular customer would need to be made manually. This often led to suboptimal connection reuse for our customers, as we might be making requests from far more machines than were actually needed, resulting in fewer warm connection pools than we otherwise could have had. This also caused issues at our data centers from time to time, as larger applications might have more traffic than the default pool size was capable of serving, resulting in production incidents where engineers were paged and had to manually increase the fanout factor for specific customers.</p><p>Now, these pool sizes are determined automatically and dynamically. By tracking domain-level traffic volume within a data center, we can automatically scale up and scale down the number of machines that serve traffic destined for customer origin servers for any particular customer, improving both the performance of customer websites and the reliability of our network. 
A massive, high-volume website with a considerable amount of API traffic will no longer be processed by the same number of machines as a smaller and more typical website. Our systems can respond to changes in customer traffic patterns within seconds, allowing us to quickly ramp up and respond to surges in origin traffic.</p><p>Thanks to these improvements, Cloudflare now uses over 30% fewer connections across the board to talk to origins. To put this into a more understandable perspective, this translates to saving approximately 402 years of handshake time every day across our global traffic, or 12,060 years of handshake time saved per month! This means that just by proxying your traffic through Cloudflare, you’ll see an average 30% reduction in the number of connections to your origin, keeping it more available while serving the same traffic volume and in turn lowering your egress fees. But, in many cases, the results observed can be far greater than 30%. For example, in one data center which is particularly heavy in API traffic, we saw a reduction in origin connections of ~60%! </p><p>Many don’t realize that making more connections to an origin requires more compute and time for systems to perform TCP and SSL handshakes. This takes time away from serving content requested by your end-users and can act as a hidden tax on your performance and on your application overall.<b> We are proud to reduce the Internet's hidden tax </b>by finding intelligent, innovative ways to reduce the number of connections needed while supporting the same traffic volume.</p><p>Watch out for more updates to Smart Shield at the start of 2026 — we’re working on adding self-serve support for dedicated CDN egress IP addresses, along with significant performance, reliability, and resilience improvements!</p>
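As a rough illustration of the idea of traffic-driven pool sizing (our assumption of the general shape, not Cloudflare's actual algorithm), a per-domain fanout could be derived from the observed request rate, clamped between a small warm minimum and a ceiling; the capacity and bounds below are invented for the example:

```python
# Rough illustration only, not Cloudflare's actual algorithm: derive the
# per-domain connection-pool fanout from observed request rate, clamped
# between a small warm minimum and a ceiling. All constants are invented.
import math

def pool_size(requests_per_sec: float, per_machine_capacity: float = 500.0,
              min_machines: int = 2, max_machines: int = 128) -> int:
    needed = math.ceil(requests_per_sec / per_machine_capacity)
    return max(min_machines, min(max_machines, needed))

print(pool_size(100))         # small site stays on a few warm connections -> 2
print(pool_size(50_000))      # high-volume site fans out -> 100
print(pool_size(10_000_000))  # capped at the ceiling -> 128
```

Keeping small domains on a few machines concentrates their traffic onto already-warm connections, while letting large domains fan out prevents any one pool from being overwhelmed.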
    <div>
      <h2>Charting the course: next steps for Observatory &amp; Smart Shield</h2>
      <a href="#charting-the-course-next-steps-for-observatory-smart-shield">
        
      </a>
    </div>
    <p>We’re really excited to share these two products with everyone today. Smart Shield and Observatory combine to provide a powerful one-two punch of insight and easy remediation.</p><p>As we navigate the beta launch of Observatory, we know this is just the start.</p><p>Our vision for Observatory is to be the single source of truth for your application’s health. We know that making the right decisions requires robust, accurate data, and we want to arm our customers with the most comprehensive picture available.</p><p>In the coming months, we plan to continue driving forward with our goal of providing comprehensive data, backed by a clear path to action.</p><ul><li><p><b>Deeper, more diagnostic data. </b>We’ll continue to break down data silos, bringing in more metrics to make sure you have a truly comprehensive view of your application’s health. We’ll be focused on going deeper and being more diagnostic, breaking down every aspect of both the request and page lifecycle to give you more granular data.</p></li><li><p><b>More paths to solutions. </b>People don’t measure for the sake of looking at data, they measure to solve problems. We’re going to continue to expand our suggestions, arming you with more precise, data-driven solutions to a wider range of issues, letting you fix problems with a single click through Smart Shield and bringing a tighter feedback loop to validate the impact of your configuration updates.</p></li><li><p><b>Benchmarking against other products.</b> Some of our customers split traffic between different CDNs due to regulatory or compliance requirements. Naturally, this brings up a whole series of questions about comparing the performance of the split traffic. 
In Observatory, you can compare these today, but we have a lot of things planned to make this even easier.</p></li></ul><p>Try out <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/overview"><u>Observatory</u></a> and <a href="https://www.cloudflare.com/application-services/products/smart-shield/"><u>Smart Shield</u></a> yourself today. And if you have ideas or suggestions for making Observatory and Smart Shield better, <a href="https://docs.google.com/forms/d/e/1FAIpQLScRMJVR7SmkiloMjPciaTdLzvHzKE9v3L0c418l02a1sMRj_g/viewform?usp=sharing&amp;ouid=115763007691250405767"><u>we’re all ears and would love to talk</u></a>!</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Aegis]]></category>
            <guid isPermaLink="false">tfg3NnmVPl0IoCJgQYuao</guid>
            <dc:creator>Tim Kadlec</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Noah Maxwell Kennedy</dc:creator>
        </item>
        <item>
            <title><![CDATA[MadeYouReset: An HTTP/2 vulnerability thwarted by Rapid Reset mitigations]]></title>
            <link>https://blog.cloudflare.com/madeyoureset-an-http-2-vulnerability-thwarted-by-rapid-reset-mitigations/</link>
            <pubDate>Thu, 14 Aug 2025 22:03:00 GMT</pubDate>
            <description><![CDATA[ A new HTTP/2 denial-of-service (DoS) vulnerability called MadeYouReset was recently disclosed by security researchers. Cloudflare's HTTP DDoS mitigation already protects against MadeYouReset. ]]></description>
            <content:encoded><![CDATA[ <p><i><sub>(Correction on August 19, 2025: This post was updated to correct and clarify details about the vulnerability and the HTTP/2 protocol.)</sub></i></p><p>On August 13, security researchers at Tel Aviv University disclosed a new HTTP/2 denial-of-service (DoS) vulnerability that they are calling MadeYouReset (<a href="https://www.kb.cert.org/vuls/id/767506"><u>CVE-2025-8671</u></a>). This vulnerability exists in a limited number of unpatched HTTP/2 server implementations that do not accurately track use of server-sent stream resets, which can lead to resource consumption. <b>If you’re using Cloudflare for HTTP DDoS mitigation, you’re already protected from MadeYouReset</b>.</p><p>Cloudflare was informed of this vulnerability in May through a coordinated disclosure process, and we were able to confirm that our systems were not susceptible. We foresaw this sort of attack while mitigating the "<a href="https://blog.cloudflare.com/on-the-recent-http-2-dos-attacks/"><u>Netflix vulnerabilities</u></a>" in 2019, and added even stronger defenses in response to <a href="https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/"><u>Rapid Reset</u></a> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-44487"><u>CVE-2023-44487</u></a>) in 2023. MadeYouReset and Rapid Reset are two conceptually similar attacks that exploit a fundamental feature within <a href="https://datatracker.ietf.org/doc/html/rfc9113#RST_STREAM"><u>the HTTP/2 specification</u></a> (RFC 9113): stream resets. In the HTTP/2 protocol, a client initiates a bidirectional stream that carries an HTTP request/response exchange, represented as frames sent between the client and server. Typically, HEADERS and DATA frames are used for a complete exchange.  Endpoints can use the RST_STREAM frame to prematurely terminate a stream, essentially cancelling operations and signalling that it won’t process any more request or response data. 
Furthermore, HTTP/2 requires that RST_STREAM is sent when there are protocol errors related to the stream. For example, <a href="http://datatracker.ietf.org/doc/html/rfc9113#section-6.1-10"><u>section 6.1 of RFC 9113</u></a> requires that when a DATA frame is received under the wrong circumstances, "...<i>the recipient MUST respond with a stream error (</i><a href="https://datatracker.ietf.org/doc/html/rfc9113#section-5.4.2"><i><u>Section 5.4.2</u></i></a><i>) of type STREAM_CLOSED</i>". </p><p>The vulnerability exploited by both MadeYouReset and Rapid Reset lies in the potential for malicious actors to abuse this stream reset mechanism. By repeatedly causing stream resets, attackers can overwhelm a server's resources. While the server is attempting to process and respond to a multitude of requests, the rapid succession of resets forces it to expend computational effort on starting and then immediately discarding these operations. This can lead to resource exhaustion and impact the availability of the targeted server for legitimate users; <a href="https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/#impact-on-customers"><u>as described previously</u></a>, the main difference between the two attacks is that Rapid Reset exploits client-sent resets, while MadeYouReset exploits server-sent resets. MadeYouReset works by having a client intentionally send frames that cause protocol violations; the resulting stream errors compel the server to send the resets itself.</p><p>RFC 9113 details a number of <a href="https://datatracker.ietf.org/doc/html/rfc9113#section-10.5"><u>denial-of-service considerations</u></a>. Fundamentally, the protocol provides many features with legitimate uses that can be exploited by attackers with nefarious intent. Implementations are advised to harden themselves: "An endpoint that doesn't monitor use of these features exposes itself to a risk of denial of service. 
Implementations SHOULD track the use of these features and set limits on their use."</p><p>Fortunately, the MadeYouReset vulnerability only impacts a relatively small number of HTTP/2 implementations. In most major HTTP/2 implementations already in widespread use today, the proactive measures taken to implement RFC 9113 guidance and counter Rapid Reset in 2023 have also provided substantial protection against MadeYouReset, limiting its potential impact and preventing a similarly disruptive event.</p><blockquote><p><b>A note about Cloudflare’s Pingora and its users:
</b>Our <a href="https://blog.cloudflare.com/pingora-open-source/"><u>open-sourced Pingora framework</u></a> uses the popular Rust-language h2 library for its HTTP/2 support. Versions of h2 prior to 0.4.11 were <a href="https://seanmonstar.com/blog/hyper-http2-didnt-madeyoureset/"><u>potentially susceptible to MadeYouReset</u></a>. Users of Pingora can patch their applications by updating their h2 crate version using the <code>cargo update</code> command. Pingora does not itself terminate inbound HTTP connections to Cloudflare’s network, meaning this vulnerability could not be exploited against Cloudflare’s infrastructure.</p></blockquote><p>We would like to credit researchers <a href="https://galbarnahum.com/posts/made-you-reset-intro"><u>Gal Bar Nahum</u></a>, Anat Bremler-Barr, and Yaniv Harel of Tel Aviv University for discovering this vulnerability and thank them for their leadership in the coordinated disclosure process. Cloudflare always encourages security researchers to submit vulnerabilities like this to our <a href="https://hackerone.com/cloudflare?type=team"><u>HackerOne Bug Bounty program</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[DDoS]]></category>
            <guid isPermaLink="false">707qJXBfSyXWBe0ziAnp8G</guid>
            <dc:creator>Alex Forster</dc:creator>
            <dc:creator>Noah Maxwell Kennedy</dc:creator>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>Evan Rittenhouse</dc:creator>
        </item>
        <item>
            <title><![CDATA[An exposed apt signing key and how to improve apt security]]></title>
            <link>https://blog.cloudflare.com/dont-use-apt-key/</link>
            <pubDate>Wed, 15 Dec 2021 13:56:03 GMT</pubDate>
            <description><![CDATA[ Recently, we received a bug bounty report regarding the GPG signing key used for pkg.cloudflareclient.com, the Linux package repository for our Cloudflare WARP products. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2oaXxSl3ccKgOfLLGNL1n3/2df3207c60020356052f2ade105dccb1/image1-79.png" />
            
            </figure><p>Recently, we received a bug bounty report regarding the GPG signing key used for pkg.cloudflareclient.com, the Linux package repository for our Cloudflare WARP products. The report stated that this private key had been exposed. We’ve since rotated this key and we are taking steps to ensure a similar problem can’t happen again. Before you read on, if you are a Linux user of Cloudflare WARP, please <a href="https://pkg.cloudflareclient.com/install#package-rotation">follow these instructions</a> to rotate the Cloudflare GPG Public Key trusted by your package manager. This only affects WARP users who have installed WARP on Linux. It does not affect Cloudflare customers of any of our other products or WARP users on mobile devices.</p><p>But we also realized that an improperly secured private key can have consequences that extend beyond the scope of one third-party repository. The remainder of this blog shows how to improve the security of apt with third-party repositories.</p>
    <div>
      <h3>The unexpected impact</h3>
      <a href="#the-unexpected-impact">
        
      </a>
    </div>
    <p>At first, we thought that the exposed signing key could only be used by an attacker to forge packages distributed through our package repository. However, when reviewing impact for Debian and Ubuntu platforms we found that our instructions were outdated and insecure. In fact, we found the majority of Debian package repositories on the Internet were providing the same poor guidance: download the GPG key from a website and then either pipe it directly into apt-key or copy it into <code>/etc/apt/trusted.gpg.d/</code>. This method adds the key as a trusted root for software installation from <i>any source</i>. To see why this is a problem, we have to understand how apt downloads and verifies software packages.</p>
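    <p>To make the insecure pattern concrete, here is roughly what those widespread instructions tell users to run (the repository URL below is a placeholder, not a real vendor’s — this is the pattern to avoid, so don’t run it):</p>
            <pre><code># The common but insecure approach: this key becomes a trusted root
# for EVERY configured repository, not just example.com
curl -fsSL https://example.com/pubkey.gpg | sudo apt-key add -

# Equally problematic: copying the key into the global trusted set
curl -fsSL https://example.com/pubkey.gpg | gpg --dearmor \
  | sudo tee /etc/apt/trusted.gpg.d/example.gpg &gt; /dev/null</code></pre>
            <p>Either way, the key ends up trusted for every source in the sources list, which is exactly the over-broad trust described above.</p>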
    <div>
      <h3>How apt verifies packages</h3>
      <a href="#how-apt-verifies-packages">
        
      </a>
    </div>
    <p>In the early days of Linux, package maintainers wanted to make sure users could trust that the software being installed on their machines came from a trusted source.</p><p>Apt has a list of places to pull packages from (sources) and a method to validate those sources (trusted public keys). Historically, the keys were stored in a single keyring file: <code>/etc/apt/trusted.gpg</code>. Later, as third party repositories became more common, apt could also look inside <code>/etc/apt/trusted.gpg.d/</code> for individual key files.</p><p>What happens when you run apt update? First, apt fetches a signed file called InRelease from each source. Some servers supply separate Release and signature files instead, but they serve the same purpose. InRelease is a file containing metadata that can be used to cryptographically validate every package in the repository. Critically, it is also signed by the repository owner’s private key. As part of the update process, apt verifies that the InRelease file has a valid signature, and that the signature was generated by a trusted root. If everything checks out, a local package cache is updated with the repository’s contents. This cache is directly used when installing packages. The chain of signed InRelease files and cryptographic hashes ensures that each downloaded package hasn’t been corrupted or tampered with along the way.</p>
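    <p>You can observe this verification step by hand. The following sketch assumes a Debian system with the debian-archive-keyring package installed (which provides the keyring path used below); it fetches a repository’s InRelease file and checks its signature with gpgv:</p>
            <pre><code># Fetch the signed repository metadata for bullseye
curl -fsSLO http://deb.debian.org/debian/dists/bullseye/InRelease

# Verify the signature against Debian's archive keys;
# a "Good signature" line means the metadata is authentic
gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg InRelease</code></pre>
            <p>apt performs the equivalent of this check automatically on every <code>apt update</code> before it trusts any package hashes from the repository.</p>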
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1D8XXtbwU25pQ7eViP4FFz/98d53164d30968a85a412e592ce7f391/BLOG-895.png" />
            
            </figure>
    <div>
      <h3>A typical third-party repository today</h3>
      <a href="#a-typical-third-party-repository-today">
        
      </a>
    </div>
    <p>For most Ubuntu/Debian users today, this is what adding a third-party repository looks like in practice:</p><ol><li><p>Add a file in <code>/etc/apt/sources.list.d/</code> telling apt where to look for packages.</p></li><li><p>Add the gpg public key to <code>/etc/apt/trusted.gpg.d/</code>, probably via apt-key.</p></li></ol><p>If apt-key is used in the second step, the command typically pops up a deprecation warning, telling you not to use apt-key. There’s a good reason: adding a key like this trusts it for any repository, not just the source from step one. This means if the private key associated with this new source is compromised, attackers can use it to bypass apt’s signature verification and install their own packages.</p><p>What would this type of attack look like? Assume you’ve got a stock Debian setup with a default sources list<sup>1</sup>:</p>
            <pre><code>deb http://deb.debian.org/debian/ bullseye main non-free contrib
deb http://security.debian.org/debian-security bullseye-security main contrib non-free</code></pre>
            <p>At some point you installed a trusted key that was later exposed, and the attacker has the private key. This key was added alongside a source pointing at https, assuming that even if the key is broken an attacker would have to break TLS encryption as well to install software via that route.</p><p>You’re enjoying a hot drink at your local cafe, where someone nefarious has managed to hack the router without your knowledge. They’re able to intercept and modify http traffic. An auto-update script on your laptop runs <code>apt update</code>. The attacker pretends to be deb.debian.org, and because at least one source is configured to use http, the attacker doesn’t need to break https. They return a modified InRelease file signed with the compromised key, indicating that a newer update of the bash package is available. apt pulls the new package (again from the attacker) and installs it, as root. Now you’ve got a big problem<sup>2</sup>.</p>
    <div>
      <h3>A better way</h3>
      <a href="#a-better-way">
        
      </a>
    </div>
    <p>It seems the way most folks are told to set up third-party Debian repositories is wrong. What if you could tell apt to <a href="https://wiki.debian.org/DebianRepository/UseThirdParty">only trust that GPG key for a specific source</a>? That, combined with the use of https, would significantly reduce the impact of a key compromise. As it turns out, there’s a way to do that! You’ll need to do two things:</p><ol><li><p>Make sure the key isn’t in <code>/etc/apt/trusted.gpg</code> or <code>/etc/apt/trusted.gpg.d/</code> anymore. If the key is its own file, the easiest way to do this is to move it to <code>/usr/share/keyrings/</code>. Make sure the file is owned by root, and only root can write to it. This step is important, because it prevents apt from using this key to check all repositories in the sources list.</p></li><li><p>Modify the sources file in <code>/etc/apt/sources.list.d/</code> telling apt that this particular repository can be “signed by” a specific key. When you’re done, the line should look like this:</p></li></ol>
            <pre><code>deb [signed-by=/usr/share/keyrings/cloudflare-client.gpg] https://pkg.cloudflareclient.com/ bullseye main</code></pre>
            <p>Some source lists contain other metadata indicating that the source is only valid for certain architectures. If that’s the case, just add a space in the middle, like so:</p>
            <pre><code>deb [amd64 signed-by=/usr/share/keyrings/cloudflare-client.gpg] https://pkg.cloudflareclient.com/ bullseye main</code></pre>
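            <p>Putting both steps together, fetching and installing the key might look like the following sketch. The public key URL here is illustrative — check the repository’s own install page for the canonical location:</p>
            <pre><code># Download the repository's public key, convert it to binary
# (dearmored) form, and store it where the signed-by option expects it
curl -fsSL https://pkg.cloudflareclient.com/pubkey.gpg | gpg --dearmor \
  | sudo tee /usr/share/keyrings/cloudflare-client.gpg &gt; /dev/null

# Ensure the keyring is owned by root and writable only by root
sudo chown root:root /usr/share/keyrings/cloudflare-client.gpg
sudo chmod 644 /usr/share/keyrings/cloudflare-client.gpg</code></pre>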
            <p>We’ve updated the instructions on our own repositories for the <a href="https://pkg.cloudflareclient.com/">WARP Client</a> and <a href="https://pkg.cloudflare.com/">Cloudflare</a> with this information, and we hope others will follow suit.</p><p>If you run <code>apt-key list</code> on your own machine, you’ll probably find several keys that are trusted far more than they should be. Now you know how to fix them!</p><p>For those running your own repository, now is a great time to review your installation instructions. If your instructions tell users to curl a public key file and pipe it straight into sudo apt-key, maybe there’s a safer way. While you’re in there, ensuring the package repository supports https is a great way to add an extra layer of security (and if you host your traffic via Cloudflare, <a href="https://www.cloudflare.com/ssl/">it’s easy to set up, and free</a>. You can follow <a href="/cloudflare-repositories-ftw/">this blog post</a> to learn how to properly configure Cloudflare to cache Debian packages).</p><hr /><p><sup>1</sup>RPM-based distros like Fedora, CentOS, and RHEL also use a common trusted GPG store to validate packages, but since they generally use https by default to fetch updates they aren’t vulnerable to this particular attack.</p><p><sup>2</sup>The attack described above requires an active on-path network attacker. If you are using the WARP client or Cloudflare for Teams to tunnel your traffic to Cloudflare, your network traffic cannot be tampered with on local networks.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <guid isPermaLink="false">3Cmown4J1B4wuqzgTWkNzA</guid>
            <dc:creator>Jeff Hiner</dc:creator>
            <dc:creator>Matt Schulte</dc:creator>
            <dc:creator>Thomas Calderon</dc:creator>
            <dc:creator>Noah Maxwell Kennedy</dc:creator>
        </item>
    </channel>
</rss>