
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 19:44:42 GMT</lastBuildDate>
        <item>
            <title><![CDATA[The most-seen UI on the Internet? Redesigning Turnstile and Challenge Pages]]></title>
            <link>https://blog.cloudflare.com/the-most-seen-ui-on-the-internet-redesigning-turnstile-and-challenge-pages/</link>
            <pubDate>Fri, 27 Feb 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ We serve 7.6 billion challenges daily. Here’s how we used research, AAA accessibility standards, and a unified architecture to redesign the Internet’s most-seen user interface. ]]></description>
            <content:encoded><![CDATA[ <p>You've seen it. Maybe you didn't register it consciously, but you've seen it. That little widget asking you to verify you're human. That full-page security check before accessing a website. If you've spent any time on the Internet, you've encountered Cloudflare's Turnstile widget or Challenge Pages — likely more times than you can count.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5YaxxmA9nz7AufmcJmhagL/0db6b65ec7456bc8091affc6beaf3ec2/Image_1_-_Turnstile.png" />
          </figure><p><sup><i>The Turnstile widget – a familiar sight across millions of websites</i></sup></p><p>When we say that a large portion of the Internet sits behind Cloudflare, we mean it. Our Turnstile widget and Challenge Pages are served 7.67 billion times every single day. That's not a typo. Billions. This might just be the most-seen user interface on the Internet.</p><p>And that comes with enormous responsibility.</p><p>Designing a product with billions of eyeballs on it isn't just challenging — it requires a fundamentally different approach. Every pixel, every word, every interaction has to work for someone's grandmother in rural Japan, a teenager in São Paulo, a visually impaired developer in Berlin, and a busy executive in Lagos. All at the same time. In moments of frustration.</p><p>Today we’re sharing the story of how we redesigned Turnstile and Challenge Pages. It's a story told in three parts, by three of us: the design process and research that shaped our decisions (Leo), the engineering challenge of deploying changes at unprecedented scale (Ana), and the measurable impact on billions of users (Marina).</p><p>Let's start with how we approached the problem from a design perspective.</p>
    <div>
      <h2>Part 1: The design process</h2>
      <a href="#part-1-the-design-process">
        
      </a>
    </div>
    
    <div>
      <h3>The problem</h3>
      <a href="#the-problem">
        
      </a>
    </div>
    <p>Let's be honest: nobody likes being asked to prove they're human. You know you're human. I know I'm human. The only one who doesn't seem convinced is that little widget standing between you and the website you're trying to access. At best, it's a minor inconvenience. At worst? You've probably wanted to throw your computer out the window in a fit of rage. We've all been there. And no one would blame you.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640zjNaqDcNdJy4mYN6H14/ce184df68c9612d77f0767726bf27822/2.png" />
          </figure><p><sup><i>Turnstile integrated into a login flow</i></sup></p><p>As the world warms up to what appears to be an inevitable AI revolution, the need for security verification is only increasing. At Cloudflare, we've seen a significant rise in bot attacks — and in response, organizations are investing more heavily in security measures. That means more challenges being issued to more end users, more often.</p><p>The numbers tell the story:</p><ul><li><p>2023: 2.14B daily</p></li><li><p>2024: 3B daily</p></li><li><p>2025: 5.35B daily</p></li></ul><p>That's a 58.1% average increase in security checks, year over year. More security checks mean more opportunities for end user frustration. The more companies integrate these verification systems to protect themselves and their customers, the higher the chance that someone, somewhere, is going to have a bad experience.</p><p>We knew it was time to take a hard look at our flagship products and ask ourselves: Are we doing right by the billions of people who encounter these experiences? Are we fulfilling our mission to build a better Internet — not just a more secure one, but a more human one?</p><p>The answer, we discovered, was: we could do better.</p>
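<p>For the curious, the quoted 58.1% corresponds to the geometric mean of the two year-over-year growth rates, i.e. the constant annual rate that compounds from the 2023 volume to the 2025 volume. A quick sketch of the arithmetic, using the figures from the list above:</p>

```typescript
// Daily challenge volumes from the post, in billions.
const daily: Record<number, number> = { 2023: 2.14, 2024: 3.0, 2025: 5.35 };

// Year-over-year growth rates.
const growth2024 = daily[2024] / daily[2023] - 1; // ~40.2%
const growth2025 = daily[2025] / daily[2024] - 1; // ~78.3%

// The "average increase" is the geometric mean: the constant annual rate
// that compounds the 2023 figure into the 2025 figure over two years.
const avgGrowth = Math.sqrt(daily[2025] / daily[2023]) - 1;

console.log((avgGrowth * 100).toFixed(1)); // "58.1"
```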
    <div>
      <h3>The design audit</h3>
      <a href="#the-design-audit">
        
      </a>
    </div>
    <p>Before redesigning anything, we needed to understand what we were working with. We started by conducting a comprehensive audit of every state, every error message, and every interaction across both Turnstile and Challenge Pages.</p><p>What we found wasn't the best.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1g1exDgeRH9QlApBXItcfL/fb0051d1dabaa6c91cf976ef64793502/3.png" />
          </figure><p><sup><i>The state of inconsistency in the Turnstile widget. Multiple states with no unified approach</i></sup></p><p>The inconsistencies were glaring. We had no unified approach across the multitude of different error scenarios. Some messages were overly verbose and technical ("Your device clock is set to a wrong time or this challenge page was accidentally cached by an intermediary and is no longer available"). Others were too vague to be helpful ("Timed out"). The visual language varied wildly — different layouts, different hierarchies, different tones of voice.</p><p>We also examined the feedback we'd received online. Social media, support tickets, community forums — we read it all. The frustration was palpable, and much of it was avoidable.</p><p>Take our feedback mechanism, for example. We offered users feedback options like "The widget sometimes fails" versus "The widget fails all the time." But what's the difference, really? And how were they supposed to know how often it failed? We were asking users to interpret ambiguous options during their most frustrated moments. The more we left open to interpretation, the less useful the feedback became — and the more frustration we saw across social channels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xKRSM0FfDZikEECwgHoof/ad55208973698cb237444c21d384aff8/4.png" />
          </figure><p><sup><i>The previous feedback screen: "The widget sometimes fails" vs "The widget fails all the time" — what's the difference?</i></sup></p><p>Our Challenge Pages — the full-page security blocks that appear when we detect suspicious activity or when site owners have heightened security settings — had similar issues. Some states were confusing. Others used too much technical jargon. Many failed to provide actionable guidance when users needed it most.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5JUxHjJ4VG13F7QfLONJEQ/fa443e5dd24f10d0c256864cd3f42734/5.png" />
          </figure><p><sup><i>The state of inconsistency on the Challenge pages. Multiple states with no unified approach</i></sup></p><p>The audit was humbling. But it gave us a clear picture of where we needed to focus.</p>
    <div>
      <h2>Mapping the user journey</h2>
      <a href="#mapping-the-user-journey">
        
      </a>
    </div>
    <p>To design better experiences, we first needed to understand every possible path a user could take. What was the happy path? Was there even one? And what were the unhappy paths that led to escalating frustration?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oTbFZoRu7guIxzoe64qcm/4f579fe2e70d6225a51504b3de10030f/6.png" />
          </figure><p><sup><i>Mapping the complete user journey — from initial encounter through error scenarios, with sentiment tracking</i></sup></p><p>This was a true cross-functional effort. We worked closely with engineers like Ana who knew the technical ins and outs of every edge case, and with Marina on the product side who understood not just how the product worked, but how users felt about it — the love and the hate we'd see online.</p><p>We have some of the smartest people working on bot protection at Cloudflare. But intelligence and clarity aren't the same thing. There's a delicate balance between technical complexity and user simplicity. Only when these two dance together successfully can we communicate information in a way that actually makes sense to people.</p><p>And here's the thing: the messaging has to work for everyone. A person of any age. Any mental or physical capability. Any cultural background. Any level of technical sophistication. That's what designing at scale really means — you can’t ignore edge cases, since, at such scale, they are no longer edge cases.</p>
    <div>
      <h2>Establishing a unified information architecture</h2>
      <a href="#establishing-a-unified-information-architecture">
        
      </a>
    </div>
    <p>One of the most influential books in UX design is Steve Krug's <a href="https://sensible.com/dont-make-me-think/"><u>Don't Make Me Think</u></a>. The core principle is simple: every moment a user spends trying to interpret, understand, or decode your interface is a moment of friction. And friction, especially in moments of frustration, leads to abandonment.</p><p>Our audit revealed that we were asking users to think far too much. Different pieces of information occupied the same space in the UI across different states. There was no consistent visual hierarchy. Users encountering an error state in Turnstile would find information in a completely different place than they would on a Challenge Page.</p><p>We made a fundamental decision: <b>one information architecture to rule them all</b>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3runU0ihKhNpgdw3LxNZUv/aa4bd76efb5847fde0659bccdae7242d/7.png" />
          </figure><p><sup><i>Visual diagram displaying a unified information architecture with a consistent structure across Turnstile widget and Challenge pages</i></sup></p><p>Both Turnstile and Challenge Pages would now follow the same structural pattern. The same visual hierarchy. The same placement for actions, for explanatory text, for links to documentation.</p><p>Did this constrain our design options? Absolutely. We had to say no to a lot of creative ideas that didn't fit the framework. But constraints aren't the enemy of good design — they're often its best friend. By limiting our options, we could go deeper on the details that actually mattered.</p><p>For users, the benefit is profound: they don't need to re-learn what each piece of the UI means. Error states look consistent. Help links are always in the same place. Once you understand one state, you understand them all. That's cognitive load reduced to a minimum — exactly where it should be during a security verification.</p>
    <div>
      <h2>What user research taught us</h2>
      <a href="#what-user-research-taught-us">
        
      </a>
    </div>
    <p>How do you keep yourself accountable when redesigning something that billions of people see? You test. A lot.</p><p>We recruited 8 participants across 8 different countries, deliberately seeking diversity in age, digital savviness, and cultural background. We weren't looking for tech-savvy early adopters — we wanted to understand how the redesign would work for everyone.</p><p>Our approach was rigorous: participants saw both the current experience and proposed changes, without knowing which was "old" or "new." We counterbalanced positioning to eliminate bias. We didn't just test our new ideas; we also challenged our assumptions about what needed changing in the first place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59mmLHihbM9TewXmlYQwbO/e5db88efca948de1b31e9dc499195eb8/8.png" />
          </figure><p><sup><i>Two different versions of a Turnstile being tested in an A/B test</i></sup></p>
    <div>
      <h3>Some things didn’t need fixing</h3>
      <a href="#some-things-didnt-need-fixing">
        
      </a>
    </div>
    <p>One hypothesis: should we align with competitors? Most CAPTCHA providers show "I am human" across all states. We use distinct content — "Verify you are human," then "Verifying...," then "Success!"</p><p>Were we overcomplicating things? We tested it head-to-head.</p><p>Our approach won decisively. For the interactivity state, "Verify you are human" scored 5 out of 8 points versus just 3 for "I am human." For the verifying state, it was even more dramatic — 7.5 versus 0.5. Users wanted to know what was happening, not just be told what they were.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ke1kO0i7EZxZm6voQBpyn/f489bef9b66d1221aa89adb5746559b7/9.png" />
          </figure><p><sup><i>User testing results: users strongly favored our approach over the competitor-style design</i></sup></p><p>This experiment didn't ship as a feature, but it was invaluable. It gave us confidence we weren't just being different for the sake of it. Some things were already right.</p>
    <div>
      <h3>But these needed to change</h3>
      <a href="#but-these-needed-to-change">
        
      </a>
    </div>
    <p>The research surfaced four areas where we were failing users:</p><p><b>Help, not bureaucracy</b>. When users encountered errors, we offered "Send Feedback." In testing, they were baffled. "Who am I sending this to? The website? Cloudflare? My ISP?" More importantly, we discovered something fundamental: at the moment of maximum frustration, people don't want to file a report — they want to fix the problem. We replaced "Send Feedback" with "Troubleshoot" — a single word that promises action rather than bureaucracy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jN2reUR55qCbssCDFTZfB/fb5396ec853ee549ebfec5d0d94b901f/10.png" />
          </figure><p><sup><i>The problematic "Send Feedback" prompt: users didn't know who they were sending feedback to</i></sup></p><p><b>Attention, not alarm</b>. We'd used red backgrounds liberally for errors. The reaction in testing was visceral — participants felt they had failed, felt powerless. Even for simple issues that would resolve with a retry, users assumed the worst and gave up. Red at full saturation wasn't communicating "Here's something to address." It was communicating "You have failed, and there's nothing you can do." The fix: red only for icons, never for text or backgrounds.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5seE6Xcrj9lvSpBYDEkk6N/7f0c1c17fd86b05397d35b685b0addfb/11.png" />
          </figure><p><sup><i>The evolution: from unclear error descriptions in red to clearer, more concise error communication in neutral-color text.</i></sup></p><p><b>Scannable, not verbose</b>. We'd tried to be thorough, explaining errors in technical detail. It backfired. Non-technical users found it alienating. Technical users didn't need it. Everyone was trying to read it in the tiny real estate of a widget. The lesson: less is more, especially in constrained spaces during stressful moments.</p><p><b>Accessible to everyone</b>. Our audit revealed 10px fonts in some states. Grey text that technically met AA contrast requirements (at least 4.5:1 for normal text and 3:1 for large text) but was difficult to read in practice. "Technically compliant" isn't good enough when you're serving the entire Internet.</p><p>We set a clear goal: to meet the <a href="https://www.w3.org/TR/WCAG22/"><u>WCAG 2.2 AAA</u></a> standard — the highest and most stringent level of web accessibility compliance, designed to make content accessible to the broadest range of users, including those with severe disabilities. Throughout the redesign, when visual consistency conflicted with readability, readability won. Every time.</p><p>This extended beyond vision. We designed for screen reader users, keyboard-only navigators, and people with color vision variations — going beyond what automated compliance tools can catch.</p><p>And accessibility isn't just about impairments — it's about language. What fits in English overflows in German. What's concise in Spanish is ambiguous in Japanese. Supporting over 40 languages forced us to radically simplify. The same "Unable to connect to website / Troubleshoot" pattern now works across English, Bulgarian, Danish, German, Greek, Japanese, Indonesian, Russian, Slovak, Slovenian, Serbian, Filipino, and many more.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6e4pvgMUS4BUXsPqi1qV6l/b6ffdc0d5f1e8e90394169db7162d10c/12.png" />
          </figure><p><sup><i>The redesigned error state across 12 languages — consistent layout despite varying text lengths </i></sup></p>
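<p>As an aside on the contrast figures cited earlier: the AA and AAA thresholds are defined by a specific formula in WCAG 2.x, and checking a palette against them takes only a few lines. A minimal sketch of the arithmetic (the grey sample below is illustrative, not one of our actual colors):</p>

```typescript
// WCAG 2.x contrast arithmetic. Relative luminance linearizes each sRGB
// channel and weights it; contrast is (lighter + 0.05) / (darker + 0.05).
type RGB = [number, number, number];

function relativeLuminance([r, g, b]: RGB): number {
  const linearize = (channel: number): number => {
    const s = channel / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(a: RGB, b: RGB): number {
  const la = relativeLuminance(a);
  const lb = relativeLuminance(b);
  const [lighter, darker] = la >= lb ? [la, lb] : [lb, la];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white is the maximum possible contrast, 21:1.
const maxRatio = contrastRatio([0, 0, 0], [255, 255, 255]);

// A #767676 grey on white sits right at the AA floor (~4.54:1) for normal
// text, yet falls well short of the AAA requirement of 7:1.
const greyOnWhite = contrastRatio([118, 118, 118], [255, 255, 255]);
```

A color can be "technically compliant" at AA and still miss AAA's 7:1 bar for normal text; that gap is exactly what the redesign set out to close.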
    <div>
      <h2>Final redesign</h2>
      <a href="#final-redesign">
        
      </a>
    </div>
    <p>So what did we actually ship?</p><p>First, let's talk about what we didn't change. The happy path — "Verify you are human" → "Verifying..." → "Success!" — tested exceptionally well. Users understood what was happening at each stage. The distinct content for each state, which we'd worried might be overcomplicating things, was actually our competitive advantage.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2R4QJ04uz9r1TVZjuqsHG9/61c1023eaa105b4841456258f3370220/13.png" />
          </figure><p><i><sup> The happy path: Verify you are human → Verifying → Success! These states tested well and remained largely unchanged</sup></i></p><p>But for the states that needed work, we made significant changes guided by everything we learned.</p>
    <div>
      <h3>Simplified, scannable content</h3>
      <a href="#simplified-scannable-content">
        
      </a>
    </div>
    <p>We radically reduced the amount of text in error states. Instead of verbose explanations like "Your device clock is set to a wrong time or this challenge page was accidentally cached by an intermediary and is no longer available," we now show:</p><ol><li><p>A clear, simple state name (e.g., "Incorrect device time")</p></li><li><p>A prominent "Troubleshoot" link</p></li></ol><p>That's it. The detailed guidance now lives in a dedicated modal screen that opens when users need it — giving them room to actually read and follow troubleshooting steps.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZYjlJgw6DOiTuJBFXpewn/5d714c3a19723dfe9fa9802d0d5926b8/14.png" />
          </figure><p><sup><i>The troubleshooting modal: detailed guidance when users need it, without cluttering the widget</i></sup></p><p>The troubleshooting modal provides context ("This error occurs when your device's clock or calendar is inaccurate. To complete this website’s security verification process, your device must be set to the correct date and time in your time zone."), numbered steps to try, links to documentation, and — only after the user has tried to resolve the issue — an option to submit feedback to Cloudflare. Help first, feedback second.</p>
    <div>
      <h3>AAA accessibility compliance</h3>
      <a href="#aaa-accessibility-compliance">
        
      </a>
    </div>
    <p>Every state now meets WCAG 2.2 AAA standards for contrast and readability. Font sizes have established minimums. Interactive elements are clearly focusable and properly announced by screen readers.</p>
    <div>
      <h3>Unified experience across Turnstile and Challenge pages</h3>
      <a href="#unified-experience-across-turnstile-and-challenge-pages">
        
      </a>
    </div>
    <p>Whether users encounter the compact Turnstile widget or a full Challenge Page, the information architecture is now consistent. Same hierarchy. Same placement. Same mental model.</p><p>Challenge Pages now follow a clean structure: the website name and favicon at the top, a clear status message (like "Verification successful" or "Your browser is out of date"), and actionable guidance below. No more walls of orange or red text. No more technical jargon without context.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PuWePTOaLihpfqm2iimJW/e34c4a009c36524a6d72c15ae0f78d00/15.png" />
          </figure><p><sup><i>Re-designed Challenge page states with clear troubleshooting instructions.</i></sup></p>
    <div>
      <h3>Validated across languages</h3>
      <a href="#validated-across-languages">
        
      </a>
    </div>
    <p>Every piece of content was tested in over 40 supported languages. Our process involved three layers of validation:</p><ol><li><p>Initial design review by the design team</p></li><li><p>Professional translation by our qualified vendor</p></li><li><p>Final review by native-speaking Cloudflare employees</p></li></ol><p>This wasn't just about translation accuracy — it was about ensuring the visual design held up when content length varied dramatically between languages.</p>
    <div>
      <h3>The complete picture</h3>
      <a href="#the-complete-picture">
        
      </a>
    </div>
    <p>The result is a security verification experience that's clearer, more accessible, less frustrating, and — crucially — just as secure. We didn't compromise on protection to improve the experience. We proved that good design and strong security aren't in conflict.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5t6FRRzLamGaTbEiZqVpnf/92b688679d1c8265ba3c6fd4159061bf/16.png" />
          </figure><p><sup><i>Re-designed Turnstile widgets on the left and a re-designed Challenge page on the right</i></sup></p><p>But designing the experience was only half the battle. Shipping it to billions of users? That's where Ana comes in.</p>
    <div>
      <h2>Part 2: Shipping to billions</h2>
      <a href="#part-2-shipping-to-billions">
        
      </a>
    </div>
    
    <div>
      <h4><b>Beyond centering a div</b></h4>
      <a href="#beyond-centering-a-div">
        
      </a>
    </div>
    <p>Some may say the hardest part of being a Frontend Engineer is centering a div. In reality, the challenge often lies much deeper, especially when working close to the platform primitives. Building a critical piece of Internet infrastructure using native APIs forces you to think differently about UI development, tradeoffs, and long-term maintainability.</p><p>In our case, we use Rust to handle the UI for both the Turnstile widget and the Challenge page. This decision brought clear benefits in terms of safety and consistency across platforms, but it also increased frontend complexity. Many of us are used to the ergonomics of modern frameworks like React, where common UI interactions come almost for free. Working with Rust meant reimplementing even simple interactions using lower-level constructs like <i>document.getElementById</i>, <i>createElement</i>, and <i>appendChild</i>.</p><p>On top of that, compile times and strict checks naturally slowed down rapid UI iteration compared to JavaScript-based frameworks. Debugging was also more involved, as the tooling ecosystem is still evolving. These constraints pushed us to be more deliberate, more thoughtful, and ultimately more disciplined in how we approached UI development.</p>
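<p>To make the difference concrete, here is an illustrative sketch, written in TypeScript for readability, of the kind of element-by-element construction our Rust code performs through its DOM bindings. The in-memory node type, class name, and link target below are stand-ins, not the real widget code:</p>

```typescript
// A tiny in-memory stand-in for DOM nodes, so the pattern runs anywhere.
interface DomNode {
  tag: string;
  attrs: Record<string, string>;
  children: (DomNode | string)[];
}

function createElement(tag: string): DomNode {
  return { tag, attrs: {}, children: [] };
}

function appendChild(parent: DomNode, child: DomNode | string): void {
  parent.children.push(child);
}

// What a framework writes declaratively as a single anchor element becomes
// several explicit calls when every node is built by hand:
function troubleshootLink(): DomNode {
  const a = createElement("a");
  a.attrs["class"] = "cf-link";      // hypothetical class name
  a.attrs["href"] = "#troubleshoot"; // hypothetical target
  appendChild(a, "Troubleshoot");
  return a;
}
```

Every widget state is assembled from calls like these, which is precisely why the strictness and discipline described above mattered so much.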
    <div>
      <h4><b>Small visual changes, big global impact</b></h4>
      <a href="#small-visual-changes-big-global-impact">
        
      </a>
    </div>
    <p>What initially looked like small visual tweaks such as padding adjustments or alignment changes quickly revealed a much bigger challenge: internationalization.</p><p>Once translations were available, we had to ensure that content remained readable and usable across 38 languages and 16 different UI states. Text length variability alone required careful design decisions. Some translations can be 30 to 300 percent longer than English. A short English string like “Stuck?” becomes “Tidak bisa melanjutkan?” in Indonesian or “Es geht nicht weiter?” in German, dramatically changing layout requirements.</p><p>Right-to-left language support added another layer of complexity. Supporting Arabic, Persian (Farsi), and Hebrew meant more than flipping text direction. Entire layouts had to be mirrored, including alignment, navigation patterns, directional icons, and animation flows. Many of these elements are implicitly designed with left-to-right assumptions, so we had to revisit those decisions and make them truly bidirectional.</p><p>Ordered lists also required special care. Not every culture uses the Western 1, 2, 3 numbering system, and hardcoding numeric sequences can make interfaces feel foreign or incorrect. We leaned on locale-aware numbering and fully translatable list formats to ensure ordering felt natural and culturally appropriate in every language.</p>
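<p>A hedged sketch of the locale-aware numbering and direction handling described above, using the standard <i>Intl</i> API. The step strings and the language list are illustrative, not the widget's actual data:</p>

```typescript
// Locale-aware step numbering: Intl.NumberFormat renders each index in the
// numbering system conventional for the locale, so nothing hardcodes "1.".
// On runtimes with full ICU data, "ar-EG" yields Arabic-Indic digits.
function numberSteps(steps: string[], locale: string): string[] {
  const nf = new Intl.NumberFormat(locale);
  return steps.map((step, i) => `${nf.format(i + 1)}. ${step}`);
}

numberSteps(["Check your device clock", "Retry the challenge"], "en");
numberSteps(["تحقق من ساعة جهازك", "أعد المحاولة"], "ar-EG");

// Direction is decided per language, and then the whole layout is mirrored.
// This lookup list is illustrative, not the widget's actual logic.
const RTL_LANGUAGES = new Set(["ar", "fa", "he"]);
function textDirection(locale: string): "rtl" | "ltr" {
  return RTL_LANGUAGES.has(locale.split("-")[0]) ? "rtl" : "ltr";
}
```

The mirroring itself (alignment, icons, animation flows) still has to be designed per component; the direction lookup is only where it starts.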
    <div>
      <h4><b>Building confidence through testing</b></h4>
      <a href="#building-confidence-through-testing">
        
      </a>
    </div>
    <p>As we started listing action points in feedback reports, correctness became even more critical. Every action needed to render properly, trigger the right flow, and behave consistently across states, languages, and edge cases.</p><p>To get there, we invested heavily in testing. Unit tests helped us validate logic in isolation, while end-to-end tests ensured that new states and languages worked as expected in real scenarios. This testing foundation gave us confidence to iterate safely, prevented regressions, and ensured that feedback reports remained reliable and actionable for users.</p>
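<p>As one hedged example of what such a check can look like: a simple exhaustiveness test that every UI state has a string for every supported locale. The states, locales, and strings here are made up for illustration:</p>

```typescript
// Illustrative exhaustiveness check: every UI state must have a translated
// label for every supported locale before a release can ship.
type Translations = Record<string, Record<string, string>>;

const translations: Translations = {
  verifying: { en: "Verifying...", de: "Wird überprüft..." },
  success: { en: "Success!", de: "Erfolg!" },
};

function missingTranslations(t: Translations, locales: string[]): string[] {
  const missing: string[] = [];
  for (const [state, byLocale] of Object.entries(t)) {
    for (const locale of locales) {
      if (!byLocale[locale]) missing.push(`${state}:${locale}`);
    }
  }
  return missing;
}
```

A unit suite asserts the returned list is empty for the full locale set; end-to-end tests then render each state in each locale to catch layout regressions the string check cannot see.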
    <div>
      <h4><b>The outcome</b></h4>
      <a href="#the-outcome">
        
      </a>
    </div>
    <p>What began as a set of technical constraints turned into an opportunity to build a more robust, inclusive, and well-tested UI system. Working with fewer abstractions and closer to the browser primitives forced us to rethink assumptions, improve our internationalization strategy, and raise the overall quality bar.</p><p>The result is not just a solution that works, but one we trust. And that trust is what allows us to keep improving, even when centering a div turns out to be the easy part.</p>
    <div>
      <h2>Part 3: The impact</h2>
      <a href="#part-3-the-impact">
        
      </a>
    </div>
    <p>Designing for billions of people is a responsibility we take seriously. At this scale, it is essential to leverage measurable data to tell us the real impact of our design choices. As we prepare to roll out these changes, we are focusing on <b>five key metrics</b> that will tell us if we’ve truly succeeded in making the Internet’s most-seen UI more human.</p>
    <div>
      <h4><b>1. Challenge Completion Rate</b></h4>
      <a href="#1-challenge-completion-rate">
        
      </a>
    </div>
    <p>Our primary north star is the <b>Challenge Solve Rate (CSR)</b>: the percentage of issued challenges that are successfully completed. By moving away from technical jargon like "intermediary caching" and toward simple, actionable labels like "Incorrect device time," we expect a significant uptick in CSR. A higher CSR doesn't mean we're being easier on bots; it means we’re removing the hurdles that were accidentally tripping up legitimate human users.</p>
    <div>
      <h4><b>2. Time to Complete</b></h4>
      <a href="#2-time-to-complete">
        
      </a>
    </div>
    <p>Every second a user spends on a challenge page is a second they aren't getting the information that they need. Our research showed that users were often paralyzed by choice when seeing a wall of red text. With our new scannable, neutral-color design, we are tracking <b>Time to Complete</b> to ensure users can identify and resolve issues in seconds rather than minutes.</p>
    <div>
      <h4><b>3. Abandonment Rate Changes</b></h4>
      <a href="#3-abandonment-rate-changes">
        
      </a>
    </div>
    <p>In the past, our liberal use of "saturated red" caused a visceral reaction: users felt they had failed and simply gave up. By reserving red only for icons and using a unified architecture, we aim to reduce Abandonment Rates. We want users to feel empowered to click Troubleshoot rather than feeling powerless and clicking away.</p>
    <div>
      <h4><b>4. Support Ticket Volume</b></h4>
      <a href="#4-support-ticket-volume">
        
      </a>
    </div>
    <p>One of the bigger shifts from a product perspective is our new Troubleshooting Modal. By providing clear, numbered steps directly within the widget, we are building self-service support into the UI. We expect this to result in a measurable decrease in support ticket volume for both our customers and our own internal teams.</p>
    <div>
      <h4><b>5. Social Sentiment</b></h4>
      <a href="#5-social-sentiment">
        
      </a>
    </div>
    <p>We know that security challenges are rarely loved, but they shouldn't be hated because they are confusing. We are monitoring <b>Social Sentiment</b> across community forums, feedback reports, and social channels to see if the conversation shifts from "this widget is broken" to "I had an issue, but I fixed it".</p><p>As a Product Manager, my goal is often invisible security — the best challenge is the one the user never sees. But when a challenge <i>must</i> be seen, it should be an assistant, not a bouncer. This redesign proves that <b>AAA accessibility</b> and <b>high-security standards</b> aren't in competition; they are two sides of the same coin. By unifying the architecture of Turnstile and Challenge Pages, we’ve built a foundation that allows us to iterate faster and protect the Internet more humanely than ever before.</p>
    <div>
      <h2>Looking ahead</h2>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>This redesign is a foundation, not a finish line.</p><p>We're continuing to monitor how users interact with the new experience, and we're committed to iterating based on what we learn. The feedback mechanisms we've built into the new design — the ones that actually help users troubleshoot, rather than just asking them to report problems — will give us richer insights than we've ever had before.</p><p>We're also watching how the security landscape evolves. As bot attacks grow more sophisticated, and as AI continues to blur the line between human and automated behavior, the challenge of verification will only get harder. Our job is to stay ahead — to keep improving security without making the human experience worse.</p><p>If you encounter the new Turnstile or Challenge Pages and have feedback, we want to hear it. Reach out through our <a href="https://community.cloudflare.com/"><u>community forums</u></a> or use the feedback mechanisms built into the experience itself.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Turnstile]]></category>
            <category><![CDATA[Challenge Page]]></category>
            <category><![CDATA[Design]]></category>
            <category><![CDATA[Product Design]]></category>
            <category><![CDATA[User Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Accessibility]]></category>
            <guid isPermaLink="false">19fiiQAG0XsaS9p0daOBus</guid>
            <dc:creator>Leo Bacevicius</dc:creator>
            <dc:creator>Ana Foppa</dc:creator>
            <dc:creator>Marina Elmore</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare outage on November 18, 2025]]></title>
            <link>https://blog.cloudflare.com/18-november-2025-outage/</link>
            <pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare suffered a service outage on November 18, 2025. The outage was triggered by a bug in the generation logic for a Bot Management feature file, which caused many Cloudflare services to be affected. 
 ]]></description>
            <content:encoded><![CDATA[ <p>On 18 November 2025 at 11:20 UTC (all times in this blog are UTC), Cloudflare's network began experiencing significant failures to deliver core network traffic. To Internet users trying to access our customers' sites, this showed up as an error page indicating a failure within Cloudflare's network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ony9XsTIteX8DNEFJDddJ/7da2edd5abca755e9088002a0f5d1758/BLOG-3079_2.png" />
          </figure><p><b>The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind.</b> Instead, it was triggered by a change to one of our database systems' permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.</p><p>The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats. The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.</p><p>After we initially wrongly suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file. Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06 all systems at Cloudflare were functioning as normal.</p><p>We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare's importance in the Internet ecosystem, any outage of any of our systems is unacceptable. That there was a period of time where our network was not able to route traffic is deeply painful to every member of our team. We know we let you down today.</p><p>This post is an in-depth account of exactly what happened and what systems and processes failed. It is also the beginning, though not the end, of what we plan to do in order to make sure an outage like this will not happen again.</p>
    <div>
      <h2>The outage</h2>
      <a href="#the-outage">
        
      </a>
    </div>
    <p>The chart below shows the volume of 5xx error HTTP status codes served by the Cloudflare network. Normally this should be very low, and it was right up until the start of the outage. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GdZcWhEqNjwOmLcsKOXT0/fca7e6970d422d04c81b2baafb988cbe/BLOG-3079_3.png" />
          </figure><p>The volume prior to 11:20 is the expected baseline of 5xx errors observed across our network. The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file. What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.</p><p>The explanation was that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management. Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.</p><p>This fluctuation made it unclear what was happening as the entire system would recover and then fail again as sometimes good, sometimes bad configuration files were distributed to our network. Initially, this led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state.</p><p>Errors continued until the underlying issue was identified and resolved starting at 14:30. We solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue. And then forcing a restart of our core proxy.</p><p>The remaining long tail in the chart above is our team restarting remaining services that had entered a bad state, with 5xx error code volume returning to normal at 17:06.</p><p>The following services were impacted:</p><table><tr><th><p><b>Service / Product</b></p></th><th><p><b>Impact description</b></p></th></tr><tr><td><p>Core CDN and security services</p></td><td><p>HTTP 5xx status codes. 
The screenshot at the top of this post shows a typical error page delivered to end users.</p></td></tr><tr><td><p>Turnstile</p></td><td><p>Turnstile failed to load.</p></td></tr><tr><td><p>Workers KV</p></td><td><p>Workers KV returned a significantly elevated level of HTTP 5xx errors as requests to KV’s “front end” gateway failed due to the core proxy failing.</p></td></tr><tr><td><p>Dashboard</p></td><td><p>While the dashboard was mostly operational, most users were unable to log in due to Turnstile being unavailable on the login page.</p></td></tr><tr><td><p>Email Security</p></td><td><p>While email processing and delivery were unaffected, we observed a temporary loss of access to an IP reputation source which reduced spam-detection accuracy and prevented some new-domain-age detections from triggering, with no critical customer impact observed. We also saw failures in some Auto Move actions; all affected messages have been reviewed and remediated.</p></td></tr><tr><td><p>Access</p></td><td><p>Authentication failures were widespread for most users, beginning at the start of the incident and continuing until the rollback was initiated at 13:05. Any existing Access sessions were unaffected.</p><p>
</p><p>All failed authentication attempts resulted in an error page, meaning none of these users ever reached the target application while authentication was failing. Successful logins during this period were correctly logged. </p><p>
</p><p>Any Access configuration updates attempted at that time would have either failed outright or propagated very slowly. All configuration updates are now recovered.</p></td></tr></table><p>As well as returning HTTP 5xx errors, we observed significant increases in latency of responses from our CDN during the impact period. This was due to large amounts of CPU being consumed by our debugging and observability systems, which automatically enhance uncaught errors with additional debugging information.</p>
    <div>
      <h2>How Cloudflare processes requests, and how this went wrong today</h2>
      <a href="#how-cloudflare-processes-requests-and-how-this-went-wrong-today">
        
      </a>
    </div>
    <p>Every request to Cloudflare takes a well-defined path through our network. It could be from a browser loading a webpage, a mobile app calling an API, or automated traffic from another service. These requests first terminate at our HTTP and TLS layer, then flow into our core proxy system (which we call FL for “Frontline”), and finally through Pingora, which performs cache lookups or fetches data from the origin if needed.</p><p>We previously shared more detail about how the core proxy works <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>here</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qlWXM3gh4SaYYvsGc7mFV/99294b22963bb414435044323aed7706/BLOG-3079_4.png" />
          </figure><p>As a request transits the core proxy, we run the various security and performance products available in our network. The proxy applies each customer’s unique configuration and settings, from enforcing WAF rules and DDoS protection to routing traffic to the Developer Platform and R2. It accomplishes this through a set of domain-specific modules that apply the configuration and policy rules to traffic transiting our proxy.</p><p>One of those modules, Bot Management, was the source of today’s outage. </p><p>Cloudflare’s <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> includes, among other systems, a machine learning model that we use to generate bot scores for every request traversing our network. Our customers use bot scores to control which bots are allowed to access their sites — or not.</p><p>The model takes as input a “feature” configuration file. A feature, in this context, is an individual trait used by the machine learning model to make a prediction about whether the request was automated or not. The feature configuration file is a collection of individual features.</p><p>This feature file is refreshed every few minutes and published to our entire network and allows us to react to variations in traffic flows across the Internet. It allows us to react to new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly as bad actors change their tactics quickly.</p><p>A change in our underlying ClickHouse query behaviour (explained below) that generates this file caused it to have a large number of duplicate “feature” rows. This changed the size of the previously fixed-size feature configuration file, causing the bots module to trigger an error.</p><p>As a result, HTTP 5xx error codes were returned by the core proxy system that handles traffic processing for our customers, for any traffic that depended on the bots module. 
This also affected Workers KV and Access, which rely on the core proxy.</p><p>Unrelated to this incident, we were and are currently migrating our customer traffic to a new version of our proxy service, internally known as <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>FL2</u></a>. Both versions were affected by the issue, although the impact observed was different.</p><p>Customers deployed on the new FL2 proxy engine observed HTTP 5xx errors. Customers on our old proxy engine, known as FL, did not see errors, but bot scores were not generated correctly, resulting in all traffic receiving a bot score of zero. Customers that had rules deployed to block bots would have seen large numbers of false positives. Customers who were not using our bot score in their rules did not see any impact.</p><p>Another apparent symptom threw us off and made us believe this might have been an attack: Cloudflare’s status page went down. The status page is hosted completely off Cloudflare’s infrastructure with no dependencies on Cloudflare. While it turned out to be a coincidence, it led some of the team diagnosing the issue to believe that an attacker may be targeting both our systems as well as our status page. Visitors to the status page at that time were greeted by an error message:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LwbB5fv7vdoNRWWDGN7ia/dad8cef76eee1305e0216d74a813612b/BLOG-3079_5.png" />
          </figure><p>In the internal incident chat room, we were concerned that this might be the continuation of the recent spate of high volume <a href="https://techcommunity.microsoft.com/blog/azureinfrastructureblog/defending-the-cloud-azure-neutralized-a-record-breaking-15-tbps-ddos-attack/4470422"><u>Aisuru</u></a> <a href="https://blog.cloudflare.com/defending-the-internet-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos/"><u>DDoS attacks</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ph13HSsOGC0KYRfoeZmSy/46522e46ed0132d2ea551aef4c71a5d6/BLOG-3079_6.png" />
          </figure>
    <div>
      <h3>The query behaviour change</h3>
      <a href="#the-query-behaviour-change">
        
      </a>
    </div>
    <p>I mentioned above that a change in the underlying query behaviour resulted in the feature file containing a large number of duplicate rows. The database system in question runs on ClickHouse.</p><p>For context, it’s helpful to know how ClickHouse distributed queries work. A ClickHouse cluster consists of many shards. To query data from all shards, we have so-called distributed tables (powered by the table engine <code>Distributed</code>) in a database called <code>default</code>. The Distributed engine queries underlying tables in a database <code>r0</code>. The underlying tables are where data is stored on each shard of a ClickHouse cluster.</p><p>Queries to the distributed tables run through a shared system account. As part of efforts to improve the security and reliability of our distributed queries, there’s work being done to make them run under the initial user accounts instead.</p><p>Before today, ClickHouse users would only see the tables in the <code>default</code> database when querying table metadata from ClickHouse system tables such as <code>system.tables</code> or <code>system.columns</code>.</p><p>Since users already have implicit access to underlying tables in <code>r0</code>, we made a change at 11:05 to make this access explicit, so that users can see the metadata of these tables as well. By making sure that all distributed subqueries can run under the initial user, query limits and access grants can be evaluated in a more fine-grained manner, preventing one bad subquery from a user from affecting others.</p><p>The change explained above resulted in all users accessing accurate metadata about tables they have access to. Unfortunately, there were assumptions made in the past that the list of columns returned by a query like this would only include the “<code>default</code>” database:</p><p><code>SELECT
  name,
  type
FROM system.columns
WHERE
  table = 'http_requests_features'
ORDER BY name;</code></p><p>Note how the query does not filter for the database name. Because we were gradually rolling out the explicit grants to users of a given ClickHouse cluster, after the change at 11:05 the query above started returning “duplicates” of columns: the extra rows were for the underlying tables stored in the <code>r0</code> database.</p><p>This, unfortunately, was the type of query performed by the Bot Management feature file generation logic to construct each input “feature” for the file mentioned at the beginning of this section. </p><p>The query above would return a table of columns like the one displayed (simplified example):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZIC5X8vMM7ifbJc0vxgLD/49dd33e7267bdb03b265ee0acccf381d/Screenshot_2025-11-18_at_2.51.24%C3%A2__PM.png" />
          </figure><p>However, as part of the additional permissions that were granted to the user, the response now contained all the metadata of the <code>r0</code> schema, effectively more than doubling the rows in the response and ultimately the number of rows (i.e. features) in the final file output. </p>
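<p>As an illustration only (hypothetical column names, not our actual schema), the effect of the missing database filter can be sketched in a few lines of Python; one plausible SQL fix would be an additional <code>database = 'default'</code> condition in the <code>WHERE</code> clause:</p>

```python
# Simplified view of system.columns rows as (database, column_name) pairs.
# After the 11:05 permissions change, metadata for the underlying r0
# tables became visible alongside the "default" database tables.
system_columns = [
    ("default", "feature_a"),
    ("default", "feature_b"),
    ("r0", "feature_a"),  # newly visible duplicate
    ("r0", "feature_b"),  # newly visible duplicate
]

# The original query filtered only on the table name, so every column
# now comes back twice, doubling the generated feature file.
unfiltered = [name for _db, name in system_columns]
print(len(unfiltered))  # 4 rows instead of the expected 2

# Filtering on the database name as well restores the expected set.
filtered = [name for db, name in system_columns if db == "default"]
print(filtered)  # ['feature_a', 'feature_b']
```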
    <div>
      <h3>Memory preallocation</h3>
      <a href="#memory-preallocation">
        
      </a>
    </div>
    <p>Each module running on our proxy service has a number of limits in place to avoid unbounded memory consumption and to preallocate memory as a performance optimization. In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features. Again, the limit exists because for performance reasons we preallocate memory for the features.</p><p>When the bad file with more than 200 features was propagated to our servers, this limit was hit — resulting in the system panicking. The FL2 Rust code that makes the check and was the source of the unhandled error is shown below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640fjk9dawDk7f0wJ8Jm5S/668bcf1f574ae9e896671d9eee50da1b/BLOG-3079_7.png" />
          </figure><p>This resulted in the following panic, which in turn resulted in a 5xx error:</p><p><code>thread fl2_worker_thread panicked: called Result::unwrap() on an Err value</code></p>
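<p>A rough Python sketch of the failure mode (illustrative only; the real FL2 check is the Rust code shown above): a hard capacity check that raises on an oversized file behaves like the unhandled <code>unwrap()</code>, while a recoverable alternative would keep serving the last known-good feature set:</p>

```python
FEATURE_LIMIT = 200  # preallocated capacity, as described above

def load_features_strict(features):
    # Analogue of the unwrap(): an oversized file becomes a fatal error
    # that takes request processing down with it.
    if len(features) > FEATURE_LIMIT:
        raise RuntimeError(
            f"feature count {len(features)} exceeds limit {FEATURE_LIMIT}"
        )
    return list(features)

def load_features_graceful(features, last_known_good):
    # One possible hardening: treat the bad file like untrusted input
    # and fall back to the previous valid feature set.
    if len(features) > FEATURE_LIMIT:
        return last_known_good
    return list(features)

good = [f"feature_{i}" for i in range(60)]   # normal file: ~60 features
bad = [f"feature_{i}" for i in range(400)]   # duplicated file: over the limit

try:
    load_features_strict(bad)
except RuntimeError as err:
    print("panic-equivalent:", err)

print(len(load_features_graceful(bad, good)))  # 60: last good file served
```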
    <div>
      <h3>Other impact during the incident</h3>
      <a href="#other-impact-during-the-incident">
        
      </a>
    </div>
    <p>Other systems that rely on our core proxy were impacted during the incident. This included Workers KV and Cloudflare Access. The team was able to reduce the impact to these systems at 13:04, when a patch was made to Workers KV to bypass the core proxy. Subsequently, all downstream systems that rely on Workers KV (such as Access itself) observed a reduced error rate. </p><p>The Cloudflare Dashboard was also impacted due to both Workers KV being used internally and Cloudflare Turnstile being deployed as part of our login flow.</p><p>Turnstile was impacted by this outage, resulting in customers who did not have an active dashboard session being unable to log in. This showed up as reduced availability during two time periods: from 11:30 to 13:10, and between 14:40 and 15:30, as seen in the graph below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nB2ZlYyXiGTNngsVotyjN/479a0f9273c160c63925be87592be023/BLOG-3079_8.png" />
          </figure><p>The first period, from 11:30 to 13:10, was due to the impact to Workers KV, which some control plane and dashboard functions rely upon. This was restored at 13:10, when Workers KV bypassed the core proxy system.</p><p>The second period of impact to the dashboard occurred after restoring the feature configuration data. A backlog of login attempts began to overwhelm the dashboard. This backlog, in combination with retry attempts, resulted in elevated latency, reducing dashboard availability. Scaling control plane concurrency restored availability at approximately 15:30.</p>
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>Now that our systems are back online and functioning normally, work has already begun on how we will harden them against failures like this in the future. In particular we are:</p><ul><li><p>Hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input</p></li><li><p>Enabling more global kill switches for features</p></li><li><p>Eliminating the ability for core dumps or other error reports to overwhelm system resources</p></li><li><p>Reviewing failure modes for error conditions across all core proxy modules</p></li></ul><p>Today was Cloudflare's worst outage <a href="https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/"><u>since 2019</u></a>. We've had outages that have made our <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>dashboard unavailable</u></a>. Some that have caused <a href="https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/"><u>newer features</u></a> to not be available for a period of time. But in the last 6+ years we've not had another outage that has caused the majority of core traffic to stop flowing through our network.</p><p>An outage like today is unacceptable. We've architected our systems to be highly resilient to failure to ensure traffic will always continue to flow. When we've had outages in the past it's always led to us building new, more resilient systems.</p><p>On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today. 
</p><table><tr><th><p>Time (UTC)</p></th><th><p>Status</p></th><th><p>Description</p></th></tr><tr><td><p>11:05</p></td><td><p>Normal.</p></td><td><p>Database access control change deployed.</p></td></tr><tr><td><p>11:28</p></td><td><p>Impact starts.</p></td><td><p>Deployment reaches customer environments, first errors observed on customer HTTP traffic.</p></td></tr><tr><td><p>11:32-13:05</p></td><td><p>The team investigated elevated traffic levels and errors to Workers KV service.</p><p>

</p></td><td><p>The initial symptom appeared to be degraded Workers KV response rate causing downstream impact on other Cloudflare services.</p><p>
</p><p>Mitigations such as traffic manipulation and account limiting were attempted to bring the Workers KV service back to normal operating levels.</p><p>
</p><p>The first automated test detected the issue at 11:31 and manual investigation started at 11:32. The incident call was created at 11:35.</p></td></tr><tr><td><p>13:05</p></td><td><p>Workers KV and Cloudflare Access bypass implemented — impact reduced.</p></td><td><p>During investigation, we used internal system bypasses for Workers KV and Cloudflare Access so they fell back to a prior version of our core proxy. Although the issue was also present in prior versions of our proxy, the impact was smaller as described below.</p></td></tr><tr><td><p>13:37</p></td><td><p>Work focused on rollback of the Bot Management configuration file to a last-known-good version.</p></td><td><p>We were confident that the Bot Management configuration file was the trigger for the incident. Teams worked on ways to repair the service in multiple workstreams, with the fastest workstream a restore of a previous version of the file.</p></td></tr><tr><td><p>14:24</p></td><td><p>Stopped creation and propagation of new Bot Management configuration files.</p></td><td><p>We identified that the Bot Management module was the source of the 500 errors and that this was caused by a bad configuration file. We stopped automatic deployment of new Bot Management configuration files.</p></td></tr><tr><td><p>14:24</p></td><td><p>Test of new file complete.</p></td><td><p>We observed successful recovery using the old version of the configuration file and then focused on accelerating the fix globally.</p></td></tr><tr><td><p>14:30</p></td><td><p>Main impact resolved. Downstream impacted services started observing reduced errors.</p></td><td><p>A correct Bot Management configuration file was deployed globally and most services started operating correctly.</p></td></tr><tr><td><p>17:06</p></td><td><p>All services resolved. Impact ends.</p></td><td><p>All downstream services restarted and all operations fully restored.</p></td></tr></table><p></p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">oVEUcpjyyDA8DSSXiE7E6</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Giving users choice with Cloudflare’s new Content Signals Policy]]></title>
            <link>https://blog.cloudflare.com/content-signals-policy/</link>
            <pubDate>Wed, 24 Sep 2025 13:10:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Content Signals Policy gives creators a new tool to control use of their content. 
 ]]></description>
            <content:encoded><![CDATA[ <p>If we want to keep the web open and thriving, we need more tools to express how content creators want their data to be used while allowing open access. Today the tradeoff is too limited. Either website operators keep their content open to the web and risk people using it for unwanted purposes, or they move their content behind logins and limit their audience.</p><p>To address the concerns our customers have today about how their content is being used by crawlers and data scrapers, we are launching the Content Signals Policy. This policy is a new addition to robots.txt that allows you to express your preferences for how your content can be used after it has been accessed. </p>
    <div>
      <h2>What <code>robots.txt</code> does, and does not, do today</h2>
      <a href="#what-robots-txt-does-and-does-not-do-today">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>Robots.txt</u></a> is a plain text file hosted on your domain that implements the <a href="https://www.rfc-editor.org/rfc/rfc9309.html"><u>Robots Exclusion Protocol</u></a>. It allows you to specify which crawlers and bots can access which parts of your site. Many crawlers and some bots obey robots.txt files, but not all do.</p><p>For example, if you wanted to allow all crawlers to access every part of your site, you could host a robots.txt file that has the following: </p>
            <pre><code>User-agent: * 
Allow: /
</code></pre>
            <p>A user-agent is how your browser, or a bot, identifies itself to the resource it is accessing. In this case, the asterisk tells visitors that any user agent, on any device or browser, can access the content. The / in the <code>Allow</code> field tells the visitor that they can access any part of the site as well.</p><p>The <code>robots.txt</code> file can also include commentary by adding characters after the # symbol. Bots and machines will ignore these comments, but it is one way to leave more human-readable notes for someone reviewing the file. Here is <a href="https://www.cloudflare.com/robots.txt"><u>one example</u></a>:</p>
            <pre><code>#    .__________________________.
#    | .___________________. |==|
#    | | ................. | |  |
#    | | ::[ Dear robot ]: | |  |
#    | | ::::[ be nice ]:: | |  |
#    | | ::::::::::::::::: | |  |
#    | | ::::::::::::::::: | |  |
#    | | ::::::::::::::::: | |  |
#    | | ::::::::::::::::: | | ,|
#    | !___________________! |(c|
#    !_______________________!__!
#   /                            \
#  /  [][][][][][][][][][][][][]  \
# /  [][][][][][][][][][][][][][]  \
#(  [][][][][____________][][][][]  )
# \ ------------------------------ /
#  \______________________________/
</code></pre>
            <p>Website owners can make <code>robots.txt</code> more specific by listing certain user-agents (for example, permitting only certain bot or browser user-agents) and by stating which parts of a site they are or are not allowed to crawl. The example below tells bots to skip crawling the archives path.</p>
            <pre><code>User-agent: * 
Disallow: /archives/
</code></pre>
            <p>And the example here gets more specific, telling Google’s bot to skip crawling the archives path.</p>
            <pre><code>User-agent: Googlebot 
Disallow: /archives/
</code></pre>
            <p>This allows you to specify which crawlers are allowed and what parts of your site they can access. It does not, however, tell crawlers what they may do with your content after accessing it. As many have <a href="https://datatracker.ietf.org/wg/aipref/about/"><u>realized</u></a>, there needs to be a standard, machine-readable way to signal the rules of your road for how your data can be used even after it has been accessed. </p><p>That is what the Content Signals Policy allows you to express: your preferences for what a crawler can, and cannot, do with your content. </p>
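<p>You can check how rules like the Googlebot example above are interpreted using Python's standard-library <code>urllib.robotparser</code>, which implements the Robots Exclusion Protocol (a small sketch; example.com is a placeholder domain):</p>

```python
import urllib.robotparser

# The Googlebot example from above: skip the archives path.
robots_txt = """\
User-agent: Googlebot
Disallow: /archives/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot is asked not to crawl the archives, but may crawl elsewhere.
print(rp.can_fetch("Googlebot", "https://example.com/archives/2024/"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))       # True

# Other user agents have no matching group here, so nothing is disallowed.
print(rp.can_fetch("OtherBot", "https://example.com/archives/2024/"))   # True
```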
    <div>
      <h2>Why are we launching the Content Signals Policy now? </h2>
      <a href="#why-are-we-launching-the-content-signals-policy-now">
        
      </a>
    </div>
    <p>There are companies that scrape vast troves of data from the Internet every day. There is a real cost to website operators to serve these data scrapers, in particular when they receive no compensation in return; we are experiencing a classic <a href="https://en.wikipedia.org/wiki/Free-rider_problem"><u>free-rider problem</u></a>. This is only going to get worse: we expect bot traffic to exceed human traffic on the Internet by the end of 2029, and by 2031, we anticipate that bot activity alone will surpass the sum of current Internet traffic. </p><p>The de facto defaults of the Internet permitted this. The norm had been that your data would be ingested, but then you, the creator of that content, would get something in return: either referral traffic that you could monetize, or at a minimum some sort of attribution that cited you as the author. Think of the <a href="https://en.wikipedia.org/wiki/Linkback"><u>linkback</u></a> in the early days of blogging, which was a way to give credit to the original creator of the work. No money changed hands, but that attribution drove future discovery and had intrinsic value. This norm has been embedded in many permissive licenses such as <a href="https://en.wikipedia.org/wiki/MIT_License"><u>MIT</u></a> and <a href="https://creativecommons.org/share-your-work/cclicenses/"><u>Creative Commons</u></a>, each of which require attribution back to the original creator. </p><p>That world has changed; that scraped content is now sometimes used to economically compete against the original creator. It’s left many with an <a href="https://blog.cloudflare.com/introducing-ai-crawl-control/"><u>impossible choice</u></a>: do you lock down access to your content and data, or accept the reality of fewer referrals and minimal attribution? If the only recourse is the former, the open transmission of ideas on the web is harmed and newer entrants to the AI ecosystem are put at an unfair disadvantage for their efforts to train new models. </p>
    <div>
      <h2>The Cloudflare Content Signals Policy</h2>
      <a href="#the-cloudflare-content-signals-policy">
        
      </a>
    </div>
    <p>The Content Signals Policy integrates into website operators’ robots.txt files. It is human-readable text following the # symbol to designate it as a comment. This policy defines three content signals (search, ai-input, and ai-train) and explains their relevance to crawlers.</p><p>A website operator can then optionally express their preferences via machine-readable content signals. </p>
            <pre><code># As a condition of accessing this website, you agree to abide by the following content signals:

# (a)  If a content-signal = yes, you may collect content for the corresponding use.
# (b)  If a content-signal = no, you may not collect content for the corresponding use.
# (c)  If the website operator does not include a content signal for a corresponding use, the website operator neither grants nor restricts permission via content signal with respect to the corresponding use.

# The content signals and their meanings are: 

# search: building a search index and providing search results (e.g., returning hyperlinks and short excerpts from your website's contents).  Search does not include providing AI-generated search summaries.
# ai-input: inputting content into one or more AI models (e.g., retrieval augmented generation, grounding, or other real-time taking of content for generative AI search answers). 
# ai-train: training or fine-tuning AI models.

# ANY RESTRICTIONS EXPRESSED VIA CONTENT SIGNALS ARE EXPRESS RESERVATIONS OF RIGHTS UNDER ARTICLE 4 OF THE EUROPEAN UNION DIRECTIVE 2019/790 ON COPYRIGHT AND RELATED RIGHTS IN THE DIGITAL SINGLE MARKET. </code></pre>
            <p>There are three parts to this text: </p><ul><li><p>The first paragraph explains to companies how to interpret any given content signal.  “Yes” means go, “no” means stop, and the absence of a signal conveys no meaning. That final, neutral option is important: it lets website operators express a preference with respect to one content signal without requiring them to do so for another.    </p></li><li><p>The second paragraph defines the content signals vocabulary. We kept the signals simple to make it easy for anyone accessing content to abide by them.  </p></li><li><p>The final paragraph reminds those automating access to data that these content signals may carry legal weight in various jurisdictions. </p></li></ul><p>A website operator can then announce their specific preferences in machine-readable text using comma-delimited, ‘yes’ or ‘no’ syntax. If a website operator wants to allow search, disallow training, and express no preference regarding ai-input, they could include the following in their robots.txt:</p>
            <pre><code>User-Agent: *
Content-Signal: search=yes, ai-train=no 
Allow: / 
</code></pre>
            <p>If a website operator leaves the content signal for ai-input blank like in the above example, it does not mean they have no preference regarding that use; it just means they have not used this part of their robots.txt file to express it.</p>
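To make the machine-readable syntax concrete, here is a minimal Python sketch of how a crawler might parse these directives. The function name and parsing rules are our own illustration, not part of the policy or any official parser:

```python
def parse_content_signals(robots_txt: str) -> dict:
    """Parse Content-Signal lines from robots.txt text.

    Returns a dict mapping each signal name (e.g. "search", "ai-train")
    to True for yes and False for no. A signal that is absent conveys
    no preference, so it simply does not appear in the result.
    """
    signals = {}
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line.lower().startswith("content-signal:"):
            continue
        _, _, value = line.partition(":")
        for pair in value.split(","):
            if "=" in pair:
                name, _, setting = pair.partition("=")
                signals[name.strip().lower()] = setting.strip().lower() == "yes"
    return signals

robots = """User-Agent: *
Content-Signal: search=yes, ai-train=no
Allow: /
"""
print(parse_content_signals(robots))  # {'search': True, 'ai-train': False}
```

Note that ai-input is simply missing from the result, mirroring the neutral "no preference expressed" case described above.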
    <div>
      <h2>How to add content signals to your website</h2>
      <a href="#how-to-add-content-signals-to-your-website">
        
      </a>
    </div>
    <p>If you already know how to configure your robots.txt file, deploying content signals is as simple as adding the Content Signals Policy above and then defining your preferences via a content signal.  </p><p>We want to make adopting content signals simple. Cloudflare customers have already turned on our managed robots.txt feature for over 3.8 million domains. By doing so, they have chosen to instruct companies that they <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">do not want the content on those domains to be used for AI training</a>. For these customers, we will update the robots.txt file that we already serve on their behalf to include the Content Signals Policy and the following signals:</p>
            <pre><code>Content-Signal: search=yes, ai-train=no</code></pre>
            <p>We will not serve an “ai-input” signal for our managed robots.txt customers. We don’t know their preference with respect to that signal, and we don’t want to guess.  </p><p>Starting today, we will also serve the commented, human-readable Content Signals Policy for any free customer zone that does not have an existing robots.txt file. In practice, that means a request to robots.txt on that domain would return the comments that define what content signals are. These comments are ignored by crawlers. Importantly, it will not include any Allow or Disallow directives, nor will it serve any actual content signals. Users choose and express their actual preferences if and when they are ready to do so. Customers with an existing robots.txt file will see no change.</p><p>Zones on a free plan can turn off the Content Signals Policy in the Security Settings section of the Cloudflare dashboard, as well as via the Overview section. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69VPgMTwoI1KqUTP4cNqG5/9576a3ca6eeee93b58688aea7f7ff0ae/BLOG-2956_2.png" />
          </figure><p>To create your own content signals, just copy and paste the text that we help you generate at <a href="http://contentsignals.org"><u>ContentSignals.org</u></a> into your <code>robots.txt</code> file, or immediately deploy via the Deploy to Cloudflare button. You can alternatively turn on our <a href="https://developers.cloudflare.com/bots/additional-configurations/managed-robots-txt/"><u>managed robots.txt feature</u></a> if you would like to express your preference to disallow training. </p><p>It’s important to remember that content signals express preferences; they are not <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">technical countermeasures against scraping</a>. Some companies might simply ignore them. If you are a website publisher seeking to control what others do with your content, we think it is best to combine your content signals with <a href="https://developers.cloudflare.com/waf/"><u>WAF</u></a> rules and <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a>.</p><p>While these Cloudflare features aim to make content signals easier to use, we want to encourage adoption by anyone, anywhere. In order to promote this practice, we are releasing this policy under a <a href="https://creativecommons.org/publicdomain/zero/1.0/"><u>CC0 License</u></a>, which allows anyone to implement and use it freely. </p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our customers are fully in the driver’s seat for what crawlers they want to allow and what they’d like to block. Some want to write for the superintelligence, others want more control: we think they should be the ones to decide.</p><p>Content signals allow anyone to express how they want their content to be used after it has been accessed. Enabling the ability to express preferences was overdue. </p><p>We know there’s more work to do. Signaling the rules of the road only works if others recognize those rules. That’s why we’ll continue to work in standards bodies to develop and standardize solutions that meet the needs of our customers and are accepted by the broader Internet community.</p><p>We hope you’ll join us in these efforts: the open web is worth fighting for.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">1wk9EDViBe0NsG2Hs8dURz</guid>
            <dc:creator>Will Allen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building unique, per-customer defenses against advanced bot threats in the AI era]]></title>
            <link>https://blog.cloudflare.com/per-customer-bot-defenses/</link>
            <pubDate>Tue, 23 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we are announcing a new approach to catching bots: using models to provide behavioral anomaly detection unique to each bot management customer and stop sophisticated bot attacks.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are announcing a new approach to catching bots: using models to provide <b>behavioral anomaly detection </b><b><i>unique to each bot management customer</i></b> and stop sophisticated bot attacks. </p><p>With this per-customer approach, we’re giving every bot management customer hyper-personalized security capabilities to stop even the sneakiest bots. We’re doing this by not only making a first-request judgement call, but also by tracking behavior of bots who play the long-game and continuously execute unwanted behavior on our customers’ websites. We want to share how this service works, and where we’re focused. Our new platform has the power to fuel hundreds of thousands of unique detection suites, and we’ve heard our first target loud and clear from site owners: <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/regain-control-ai-crawlers/"><u>protect websites</u></a> from the explosion of sophisticated, AI-driven web scraping.</p>
    <div>
      <h2>The new arms race: the rise of AI-driven scraping</h2>
      <a href="#the-new-arms-race-the-rise-of-ai-driven-scraping">
        
      </a>
    </div>
    <p>The battle against malicious bots used to be a simpler affair. Attackers used scripts that were fairly easy to identify through static, predictable signals: a request with a missing User-Agent header, a malformed method name, or traffic from a non-standard port was a clear indicator of malicious intent. However, the Internet is always evolving. As websites became more dynamic to create rich user experiences, attackers evolved their tools in response. The simple scripts of yesterday were replaced by headless browsers and automation frameworks, capable of rendering pages and mimicking human interaction with far greater fidelity.</p><p>AI has made this even trickier. The rise of <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>Generative AI</u></a> has fundamentally changed the capabilities and the motivations of attackers. The web scraping of today isn’t limited to competitive price intelligence or content aggregation, but is driven by the voracious appetite of <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>Large Language Models (LLMs)</u></a> for training data.</p><p>Cloudflare’s data shows this shift in stark terms. In mid-2025, <a href="https://radar.cloudflare.com/ai-insights?dateStart=2025-07-01&amp;dateEnd=2025-07-07#crawl-purpose"><b><u>crawling for the purpose of AI model training accounted for nearly 80% of all AI bot activity</u></b></a> on our network, a significant increase from the year prior. Modern scraping tools are now AI-powered themselves. They leverage LLMs for semantic understanding of page content, use computer vision to solve visual challenges, and employ reinforcement learning to navigate complex websites they’ve never seen before. The evolution of these bots exposes a critical vulnerability in the traditional, one-size-fits-all approach to security. 
While global threat intelligence is immensely powerful for stopping widespread attacks, these new <b>AI-powered scrapers are designed to blend in</b>. They can rotate IP addresses through residential proxies, generate human-like user agents, and mimic plausible browsing patterns. A request from one of these bots might not look anomalous when compared to the trillions of requests we see across the Cloudflare network, but would appear anomalous when compared to the established patterns of legitimate users on a specific website. This means we need to build defenses against these bots from every angle we have — from the global view to specific behavior on a single application. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3muiMDClrUwUrh5yoDbqlv/9df48cc59dcefed98b16b7df7f72fbd6/image3.png" />
          </figure>
    <div>
      <h2>Globally scalable bot fingerprinting</h2>
      <a href="#globally-scalable-bot-fingerprinting">
        
      </a>
    </div>
    <p>To target specific well-known bots or bot actors, we leverage the Cloudflare network to fingerprint bots that we see behave similarly across millions of websites. Since June, Cloudflare’s bot detection security analysts have written <b>50 heuristics</b> to catch bots using a variety of signals, including but not limited to <b>HTTP/2 fingerprints</b> and <b>Client Hello extensions. </b>By observing traffic on millions of websites, we establish a baseline of legitimate fingerprints of common browsers and benign devices. When a new, unique fingerprint suddenly appears across many different sites, it's a tell-tale sign of a distributed botnet or a new automation tool, allowing our analysts to block the bot's signature itself and neutralize the entire campaign, regardless of the thousands of different IP addresses it might use.</p><p>Recently, we also introduced <a href="https://developers.cloudflare.com/bots/additional-configurations/detection-ids/#additional-detections"><b><u>detection improvements to tackle residential proxy networks</u></b></a> and similar commercial proxies, which are used by attackers to make their bots appear as thousands of distinct real visitors, allowing them to bypass traditional security measures. The superpower of this detection improvement? Combining the vast amount of network data we see with particular client-side fingerprints obtained through the millions of challenge solves that happen across the Internet daily. 
<a href="https://developers.cloudflare.com/cloudflare-challenges/"><u>Challenges</u></a> have always served as an ideal mitigation action for customers who want to protect their applications without compromising real-user experience, but now they also serve as a gift that keeps on giving: in this case, <b><i>feeding the Cloudflare threat detection teams a constant stream of client-side information</i></b> that allows us to pattern match to determine IP addresses that are used by residential proxy networks.</p><p>This detection improvement is already ingesting data from the entire Cloudflare network, automatically catching more malicious traffic for all customers using <a href="https://developers.cloudflare.com/bots/get-started/super-bot-fight-mode/"><u>Super Bot Fight Mode</u></a> (bot protection included for Pro, Business, and all Enterprise customers) and <a href="https://developers.cloudflare.com/bots/get-started/bot-management/"><u>Enterprise Bot Management</u></a>. Examining 7 days of data from the time of authoring this post, we’ve observed <b>11 billion requests</b> from millions of unique IP addresses that we’ve identified as connected to residential or commercial proxy networks. This is just one piece of the global detection puzzle; the existing <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>residential proxy detection features in our ML</u></a><b> </b>already catch <i>tens of millions of requests every hour</i>. </p>
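    As a rough illustration of the cross-site prevalence idea, here is a toy Python sketch (our own, not Cloudflare's detection pipeline) that flags a fingerprint suddenly appearing across many distinct sites; a real system would also compare against a baseline of known-good browser fingerprints:

```python
from collections import defaultdict

def find_suspicious_fingerprints(observations, min_sites=3):
    """Flag fingerprints observed on at least min_sites distinct sites.

    A crude stand-in for the cross-network prevalence signal: a brand-new
    fingerprint appearing on many unrelated sites at once is more likely
    a distributed botnet or automation tool than a common browser.
    """
    sites_per_fp = defaultdict(set)
    for fingerprint, site in observations:
        sites_per_fp[fingerprint].add(site)
    return {fp for fp, sites in sites_per_fp.items() if len(sites) >= min_sites}

obs = [
    ("ja4-aaa", "shop.example"), ("ja4-aaa", "news.example"),
    ("ja4-aaa", "blog.example"),   # same fingerprint across three sites
    ("ja4-bbb", "shop.example"),   # seen on only one site
]
print(find_suspicious_fingerprints(obs))  # {'ja4-aaa'}
```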
    <div>
      <h2>Hyper-personalized security: learning what's normal for <i>you</i></h2>
      <a href="#hyper-personalized-security-learning-whats-normal-for-you">
        
      </a>
    </div>
    <p>The new arms race against AI-powered bots necessitates a closer look — something more precise. For instance, a script that systematically scrapes every user profile on a social media site, or every product listing on an e-commerce platform, is exhibiting behavior that is fundamentally abnormal for <i>that application</i>, even if a standalone request appears benign. This realization is at the heart of our new strategy: to win this new arms race, defenses must become as bespoke and adaptive as the attacks they face.</p><p>To meet this challenge, we built a new, foundational platform engineered to deploy custom <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/"><u>machine learning models</u></a> for every bot management customer. We’re creating a unique defense for every application. Because each website has different traffic, the traffic that we flag as anomalous will, of course, be different for each zone — for this system, we want to be clear that data from one customer’s zone won’t be used to train the model for another customer’s use.</p><p>Announcing this as a new platform capability, rather than a single feature, is a deliberate choice. It aligns with how we’ve approached our most significant innovations, from <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a> changing how developers build applications, to <a href="https://www.cloudflare.com/developer-platform/products/ai-gateway/"><u>AI Gateway</u></a> creating a single control plane for AI observability and security. 
By focusing on the platform, we tackle the <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scraping problems</a> our customers are seeing today <i>and</i> power future detections as bot attacks become increasingly sophisticated.</p><p>Our new generation of per-customer anomaly detection is a three-step process, designed to identify malicious behavior by first understanding what constitutes legitimate traffic for each individual website and API.</p>
    <div>
      <h3>Step 1: Establishing a dynamic baseline</h3>
      <a href="#step-1-establishing-a-dynamic-baseline">
        
      </a>
    </div>
    <p>For each customer zone, our behavioral detections ingest traffic data to build a baseline of normal activity. Rather than taking a static snapshot, our new platform ingests data to make living, continuously updated calculations of what “normal” looks like on a specific website. This approach understands seasonality, recognizes traffic spikes from legitimate marketing campaigns, and maps the typical pathways users take through a site. This approach evolves the concept of Anomaly Detection already present in our Enterprise Bot Management suite, but applies it at a far more granular and dynamic per-customer level.</p>
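    A toy sketch of such a continuously updated baseline, using an exponentially weighted moving average (the class name and parameters are illustrative assumptions, not our production model):

```python
class DynamicBaseline:
    """Continuously updated estimate of "normal" request volume.

    Uses an exponentially weighted moving average so the baseline keeps
    adapting to legitimate shifts (seasonality, marketing spikes) instead
    of freezing a static snapshot.
    """

    def __init__(self, alpha=0.1):
        self.alpha = alpha  # how quickly the baseline adapts to new data
        self.mean = None
        self.var = 0.0

    def update(self, value):
        if self.mean is None:
            self.mean = float(value)
            return
        diff = value - self.mean
        self.mean += self.alpha * diff
        # Exponentially weighted variance, useful for anomaly scoring later
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)

baseline = DynamicBaseline()
for rpm in [100, 105, 98, 110, 102]:  # requests-per-minute samples
    baseline.update(rpm)
print(round(baseline.mean, 1))  # 101.3 -- tracks the recent traffic level
```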
    <div>
      <h3>Step 2: Identifying the anomalies</h3>
      <a href="#step-2-identifying-the-anomalies">
        
      </a>
    </div>
    <p>Once the baseline of "normal" is established, we begin the true work — identifying deviations. Because the baseline is specific to each website, the anomalies detected are highly contextual, perhaps even invisible to a global system. We can examine a few different types of websites to unpack this:</p><ul><li><p><b>For a gaming company:</b> A normal traffic baseline might show millions of users making frequent, rapid API calls to a matchmaking service or an in-game inventory system. A behavioral detection model trained on this baseline would immediately flag a single user making slow, methodical, sequential API calls to scrape the entire player leaderboard. This behavior, while low in volume, is a clear anomaly against the backdrop of normal gameplay patterns.</p></li><li><p><b>For a retail website:</b> The normal baseline is a complex funnel of users browsing categories, viewing products, adding items to a cart, and proceeding to checkout. These detections would identify an actor that systematically visits every single product page in alphabetical order at a machine-like pace, without ever interacting with the cart or session cookies, as a significant anomaly indicative of <a href="https://www.cloudflare.com/learning/bots/what-is-content-scraping/"><u>content scraping</u></a>.</p></li><li><p><b>For a media publisher:</b> Normal user behavior involves reading a few articles, following internal links, and spending a measurable amount of time on each page. An anomaly would be a script that hits thousands of article URLs per minute, spending less than a second on each, purely to extract the text content for AI model training.</p></li></ul><p>In each case, the malicious activity is defined not by a universal signature, but <b><i>by its deviation from the application's unique, established norm</i></b>.</p>
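    One simple way to express "deviation from the application's unique norm" is a z-score against the zone's own history; this sketch (our illustration, using only a single feature) shows the core idea:

```python
import statistics

def anomaly_score(observed, history):
    """Deviation of an observation from a zone's history, in standard
    deviations (a plain z-score). Production detections use far richer
    features (paths, timing, fingerprints); this only illustrates the
    idea of measuring against the application's own norm.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero spread
    return abs(observed - mean) / stdev

# Pages viewed per session on a media site: typical readers vs. a script
# that hits thousands of article URLs to extract text.
normal_sessions = [3, 5, 4, 6, 2, 4, 5, 3]
print(anomaly_score(4, normal_sessions))     # 0.0 -- ordinary visitor
print(anomaly_score(2000, normal_sessions))  # huge -- clear anomaly
```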
    <div>
      <h3>Step 3: Generating actionable findings</h3>
      <a href="#step-3-generating-actionable-findings">
        
      </a>
    </div>
    <p>Detecting an anomaly is only half the battle. The power of bot management comes from its seamless integration into the Cloudflare security ecosystem you already use, turning detection into immediate, actionable findings. Customers can benefit from these behavioral detection improvements in two ways:</p><ol><li><p><b>New Bot Detection IDs: </b>For our Enterprise customers, we’re introducing a new set of <a href="https://developers.cloudflare.com/bots/additional-configurations/detection-ids/"><u>Bot Detection IDs</u></a>. Website owners and security teams can write WAF security rules to challenge, rate-limit, or block traffic based on the specific anomalies flagged by these detections. Since each detection type is tied to a unique ID, customers can see exactly what kind of behavior caused a request to be flagged as anomalous, offering a detailed, per-request view into stealthy malicious traffic. And for a wider view, customers can filter by Detection ID from their Security Analytics, to see the bigger picture of all traffic captured by that detection type.</p></li><li><p><b>Improving Bot Score:</b> Another key output from these new, per-customer models will be to directly influence the Bot Score of a request. A request flagged as anomalous will have its score lowered, moving it into the "Likely Automated" (scores 2-29) or "Automated" (score 1) categories. This means that existing WAF custom rules based on Bot Score will automatically see impact and become more effective against bespoke attacks, with no changes required. 
This functionality update is available today for our latest <a href="https://developers.cloudflare.com/bots/additional-configurations/detection-ids/#account-takeover-detections"><u>account takeover detection</u></a>, <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>residential proxy detections</u></a> and our recent <a href="https://developers.cloudflare.com/bots/additional-configurations/detection-ids/#additional-detections"><u>enhancements</u></a>, and will be implemented in the future for our behavioral scraping detection. </p></li></ol><p>This three-step process is already in action with our behavioral detections to catch <a href="https://developers.cloudflare.com/bots/additional-configurations/detection-ids/#account-takeover-detections"><u>account takeover</u></a> attacks. Taking bot detection ID 201326598 as an example: it (1) establishes a zone-level baseline that understands what normal traffic patterns look like for a specific website, (2) examines anomalous login failures to identify brute force and credential stuffing attacks, then (3) allows customers to mitigate these attacks by automatically influencing bot score <i>and</i> offering more visibility with the detection ID’s analytics. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5w8HUyr51JD8K4EYT7teeL/ed825aa96c3ae1809199d32734f0e60d/image4.png" />
          </figure><p>This integration strategy creates a flywheel effect: the new intelligence from these improved detections immediately enhances the value of existing products like Super Bot Fight Mode, Bot Management, and the WAF, making the entire Cloudflare platform stronger for you.</p>
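    As an illustration of how anomaly flags could move a request into the "Likely Automated" (2-29) or "Automated" (1) bands, here is a hypothetical sketch; the penalty values and detection names are invented for illustration, not Cloudflare's actual scoring logic:

```python
# Hypothetical penalty per detection type -- the post only states that
# flagged requests are lowered into "Likely Automated" (2-29) or
# "Automated" (1); these exact numbers are invented for illustration.
PENALTIES = {
    "scraping_behavior": 40,
    "residential_proxy": 30,
    "login_anomaly": 50,
}

def adjust_bot_score(base_score, anomaly_flags):
    """Lower a request's bot score for each anomaly flagged by the
    per-zone detections, clamped to the 1-99 scoring range."""
    score = base_score
    for flag in anomaly_flags:
        score -= PENALTIES.get(flag, 0)
    return max(1, min(99, score))

print(adjust_bot_score(60, []))                     # 60: unchanged
print(adjust_bot_score(60, ["scraping_behavior"]))  # 20: "Likely Automated"
print(adjust_bot_score(30, ["login_anomaly"]))      # 1: clamped to "Automated"
```

Because existing WAF custom rules already key off Bot Score thresholds, lowering the score this way is what lets those rules catch the flagged traffic with no changes required.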
    <div>
      <h2>Taking on sophisticated scrapers</h2>
      <a href="#taking-on-sophisticated-scrapers">
        
      </a>
    </div>
    <p>The first challenge we’re tackling is sophisticated scraping. AI-driven scraping is one of the most pressing and rapidly evolving threats facing website owners today, and its adaptive nature makes it an ideal adversary for a system designed to fight an enemy that constantly changes its tactics.</p><p>The first generation of our improved behavioral detections are tuned specifically to detect scraping by analyzing signals that go beyond simple request headers. These include:</p><ul><li><p><b>Behavioral Analysis:</b> Looking at session traversal paths, the sequence of requests, and interaction (or lack thereof) with dynamic page elements.</p></li><li><p><b>Client Fingerprinting:</b> Analyzing subtle signals from the client to identify signs of automation such as JA4 fingerprints in the context of the customer's specific traffic baseline.</p></li><li><p><b>Content-Agnostic Detection:</b> These models do not need to understand the content of a page, only the patterns of how it is being accessed. This makes them highly scalable and efficient, without actually using the unique content on a website to make judgement calls.</p></li></ul><p>How do these scraping detections look, in practice? We validated our logic for detecting scraping with early adopters in a closed beta, in order to receive ground-truth feedback and tune our detections. As with any ideal detection, our goal is to capture as much malicious traffic as possible, without compromising the experience of legitimate website visitors. Looking at just a 24-hour period, our new scraping detections have caught hundreds of millions of requests, flagging <b>138 million scraping requests on just 5 of our early beta zones</b>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dmVkAJR9ELqrGMFR4tbcI/732bbb2477c350ec97d8fcd70d57b782/image2.png" />
          </figure><p>Naturally, we see an overlap with our existing system of bot scoring, but the numbers here show us concretely that our new method of behavioral detection has a completely new value add: <b>34% of the requests flagged by our new scraping detections would not have been detected by our existing bot score system</b>, making us all the more eager to use these novel detections to inform the way we score automation.</p>
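    One of the behavioral signals described above, machine-like sequential traversal, can be sketched with a toy heuristic (the function name and threshold are our own illustrative assumptions):

```python
def looks_like_sequential_scrape(pages, threshold=0.9):
    """Flag a session whose page visits are overwhelmingly in sorted
    (e.g. alphabetical) order -- one crude behavioral signal akin to the
    "every product page in alphabetical order" pattern described above.
    """
    if len(pages) < 2:
        return False
    in_order = sum(a <= b for a, b in zip(pages, pages[1:]))
    return in_order / (len(pages) - 1) >= threshold

human = ["/p/shoes", "/p/hats", "/cart", "/p/socks", "/checkout"]
bot = [f"/p/item-{i:04d}" for i in range(200)]  # strictly ascending paths
print(looks_like_sequential_scrape(human))  # False
print(looks_like_sequential_scrape(bot))    # True
```

A production detection would combine many such signals (timing, cart interaction, fingerprints) rather than relying on any single one.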
    <div>
      <h2>A birthday gift for the Internet</h2>
      <a href="#a-birthday-gift-for-the-internet">
        
      </a>
    </div>
    <p>Our mission to help build a better Internet means that when we develop powerful new defenses, we believe in democratizing access to them. Protecting the entire Internet from new and evolving threats requires raising the baseline of security for everyone.</p><p>In that spirit, we’re excited to announce that our enhanced behavioral detections will not only roll out to bot management customers, but will also benefit Cloudflare customers using our global Super Bot Fight Mode system. For our Enterprise Bot Management customers, we automatically tune our detections based on the exact traffic for each zone. Because these advanced models are trained on your zone’s specific traffic, they detect even the most evasive attacks: from account takeovers to web scraping to other attacks executed through residential proxy networks — and we consider this only the tip of the iceberg of behavioral bot profiling. </p>
    <div>
      <h2>The road ahead</h2>
      <a href="#the-road-ahead">
        
      </a>
    </div>
    <p>Our initial focus on scraping is just the beginning of a new wave of behavioral bot detections. The infrastructure we’ve built is a flexible, powerful foundation for tackling a wide range of malicious behavior on your websites; the same principles of establishing a per-customer baseline and detecting anomalies can be applied to other critical threats that are unique to an application's logic, such as credential stuffing, inventory hoarding, carding attacks, and API abuse.</p><p>We are moving into an era where generic defenses are no longer enough. As threats become more personal, so must the defenses against them, and paving this path of behavioral detections is our latest gift to the Internet. Our first offering of scraping behavioral detections is just around the corner: customers will be able to turn on this new detection from the <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/overview"><u>Security Overview</u></a> page in their dashboard. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9EW8B0vJ43k28c5USM5Ho/6a180ca73844c7432749ca36a12684aa/image5.png" />
          </figure><p>(We’re always looking for enthusiastic humans to help us in our mission against bots! If you’re interested in helping us build a better Internet, check out our <a href="https://www.cloudflare.com/careers/jobs/"><u>open positions.</u></a>)</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">1l4pM7l0pDUGAgKypKgs15</guid>
            <dc:creator>Jin-Hee Lee</dc:creator>
            <dc:creator>Oliver Payne</dc:creator>
            <dc:creator>Bob AminAzad</dc:creator>
            <dc:creator>Viktor Chynarov</dc:creator>
            <dc:creator>Aleksandar Pavlov Hrusanov</dc:creator>
            <dc:creator>Prajjwal Gupta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Helping protect journalists and local news from AI crawlers with Project Galileo]]></title>
            <link>https://blog.cloudflare.com/ai-crawl-control-for-project-galileo/</link>
            <pubDate>Tue, 23 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are excited to announce that Project Galileo will now include access to Cloudflare's Bot Management and AI Crawl Control services. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We are excited to announce that <a href="https://www.cloudflare.com/galileo/"><u>Project Galileo</u></a> will now include access to Cloudflare's <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> and <a href="https://developers.cloudflare.com/ai-crawl-control/"><u>AI Crawl Control</u></a> services. Participants in the program, which include roughly 750 journalists, independent news organizations, and other non-profits supporting news-gathering around the world, will now have the ability to <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/regain-control-ai-crawlers/"><u>protect their websites from AI crawlers</u></a>—for free. </p><p>Project Galileo is Cloudflare's free program to help protect important civic voices online. Launched in 2014, it now includes more than 3,000 organizations in 125 countries, and it has served as the foundation for other free Cloudflare programs that help protect <a href="https://www.cloudflare.com/athenian/"><u>democratic elections</u></a>, <a href="https://blog.cloudflare.com/project-cybersafe-schools/"><u>public schools</u></a>, <a href="https://blog.cloudflare.com/heeding-the-call-to-support-australias-most-at-risk-entities/"><u>public health clinics</u></a>, and other <a href="https://www.cloudflare.com/press-releases/2022/project-safekeeping-zero-trust-for-critical-infra/"><u>critical infrastructure</u></a>.  </p><p>Although we think all Project Galileo participants will benefit from these additional free services, we believe they are essential for news organizations. </p><p>News organizations, particularly local news, are facing significant challenges in transitioning to the <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>AI-driven web</u></a>. 
As people increasingly turn to AI models for information, less of their web traffic is making it to the actual website where that information originated. Industries, like news organizations, that rely on user traffic to generate revenue are increasingly at risk. </p><p>Allowing news organizations to monitor and control how AI crawlers are interacting with their websites will help them better protect their content and make more informed decisions about engaging with AI companies. Ultimately, our goal is to provide the tools news organizations need to negotiate fair compensation for their work.  </p>
    <div>
      <h3>Traffic and the news</h3>
      <a href="#traffic-and-the-news">
        
      </a>
    </div>
    <p>AI is fundamentally changing how traffic flows on the Internet. Cloudflare recently <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#how-does-this-measurement-work"><u>published data</u></a> that <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>shows</u></a> that, with OpenAI, it is 750 times more difficult for website owners to get the same volume of traffic than it was with traditional Google search. With Anthropic, it's 30,000 times more difficult. </p><p>News organizations rely on traffic to not only connect with their readers, but also generate revenue from subscriptions, advertising, e-commerce, and licensing. The CEO of the Financial Times recently <a href="https://www.theguardian.com/media/2025/sep/06/existential-crisis-google-use-ai-search-upended-web-publishers-models"><u>stated</u></a> that AI had caused a “pretty sudden and sustained” decline of 25% to 30% in traffic to its articles arriving via search engines. </p><p>Potential losses of user traffic and revenue come at an already precarious time for the news industry. It is well-documented that small, independent newspapers and news radio stations continue to face significant financial pressure, particularly in the United States. According to recent US Congressional <a href="https://www.judiciary.senate.gov/imo/media/doc/2024-01-10_-_testimony_-_coffey.pdf"><u>testimony</u></a>, more than two newspapers closed per week in 2024, with one third of the country's newspapers set to close before the beginning of 2025. A <a href="https://localnewsinitiative.northwestern.edu/projects/state-of-local-news/2024/report/#executive-summary"><u>2024 report</u></a> by the Northwestern Local News Initiative found that 206 US counties were without any local news source, and 1,561 had only one.  
</p><p>Recent funding <a href="https://www.nytimes.com/2025/08/26/us/politics/public-broadcast-cuts.html"><u>cuts</u></a> to the <a href="https://www.nytimes.com/2025/09/13/us/politics/public-broadcasting-cuts.html"><u>Corporation for Public Broadcasting and National Public Radio</u></a>, which provided grants, programing, and other support to public news stations around the US, have put further strain on these organizations with <a href="https://radio.wpsu.org/2025-09-11/penn-state-plans-close-wpsu-board-committee-rejects-transfer-whyy"><u>more closures expected</u></a>. </p>
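    <p>The OpenAI and Anthropic figures above come down to a simple ratio: pages crawled for every visitor referred back to the originating site. A minimal sketch of the arithmetic (the numbers in the usage example are illustrative, not Cloudflare's measurements):</p>

```python
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """Pages a platform crawls for every visit it refers back.

    A higher ratio means a site gives up more content for each
    visitor it receives in return.
    """
    if referrals == 0:
        # Content is consumed but no traffic ever comes back.
        return float("inf")
    return crawls / referrals

# Illustrative only: 1,500 pages crawled for every 2 referred visits.
ratio = crawl_to_refer_ratio(1500, 2)  # 750.0
```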
    <div>
      <h3>Giving control back to journalists</h3>
      <a href="#giving-control-back-to-journalists">
        
      </a>
    </div>
    <p>An important first step in helping journalists and news organizations adapt to the AI-driven web is providing tools to help them monitor and control AI models' access to their content. </p><blockquote><p>“In an era defined by AI and digital disruption, providing robust tools to independent media isn’t just support - it’s a lifeline” - Meera, CEO <a href="https://internews.org/">Internews</a> Europe</p></blockquote><blockquote><p>"Independent publishers need tools that are easy to use and affordable, so they can focus on growing their business. LION appreciates the security and protection Cloudflare has provided our members through Project Galileo for years, and we're excited to see more resources now available to help members manage the rapidly evolving landscape of digital security." - Sarah Gustavus Lim, <a href="https://lionpublishers.com/">LION</a> Membership Director </p></blockquote><p>Cloudflare <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> and <a href="https://developers.cloudflare.com/ai-crawl-control/"><u>AI Crawl Control</u></a> were designed for exactly these purposes. Bot Management is a security tool that uses machine learning to analyze web traffic and distinguish between good bots, like search engine crawlers, and bad bots that attack websites or steal credentials. It allows website owners to block bad bots from reaching their websites, while making sure helpful bots can continue to do their work.</p><p>AI Crawl Control provides similar tools to identify and manage AI crawlers. Cloudflare uses a variety of techniques to identify and categorize crawlers (HTTP headers, heuristics, and other behavioral signals), giving website owners the ability to analyze their activity by type (e.g. AI search, AI scraper), where they are coming from (Google, OpenAI, Anthropic, etc.), and what content they are accessing. 
Here’s the kind of data that Cloudflare’s AI Crawl Control tool can provide (using the <a href="http://radar.cloudflare.com"><u>radar.cloudflare.com</u></a> domain) as an example:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/r4I2STKojUo1fBuXWWokG/b0f01faa2f48f6047b7ceb00e6bb84e6/image1.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YxdJKNg3NbJeYELrRZ2cg/8ada51524091a526bafabcb2ad306492/image2.png" />
          </figure><p>Cloudflare combines these insights with easy-to-use controls that allow website owners to make informed decisions about whether to make their data available, including to only certain types of bots or to individual AI companies. This would, for example, allow a local newspaper to decide to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI crawlers</a> and maintain a direct connection to their readers via their own website, <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">block only AI scrapers</a> while allowing AI search crawlers that refer traffic, or negotiate and sell exclusive access to their content to a single AI company. The following image shows how AI Crawl Control lets users allow or block access on a crawler-by-crawler basis:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/11AY83EbOO6wV8102Hy6wm/62e9d5a14626b080d7ee51bff011597a/image4.png" />
          </figure><p>We think the ability to control and monitor AI crawler activity will provide immediate help to news organizations looking to protect their content and understand how models are using their data. </p><p>We also think it will provide longer-term insights that will allow news organizations to negotiate mutually beneficial relationships with AI companies over time. </p><blockquote><p>"Independent media's ability to fulfill its democratic function by gathering news and distributing trusted information depends on generating revenues free from political or business influence. By monitoring and monetizing the crawling of publisher's sites, media can protect their intellectual property while developing new revenue streams to support their quality journalism." - Ryan Powell, Head of Innovation and Media Business at <a href="https://ipi.media/">International Press Institute</a></p></blockquote>
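<p>Outside Cloudflare's dashboard, the baseline mechanism for expressing these preferences is robots.txt. A minimal sketch of the "block scrapers, allow search crawlers that refer traffic" policy described above (a site should confirm the current user-agent tokens in each vendor's own crawler documentation):</p>

```
# Disallow AI training scrapers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Allow an AI search crawler that refers traffic
User-agent: OAI-SearchBot
Allow: /
```

<p>robots.txt is advisory only: well-behaved crawlers honor it, but nothing enforces it, which is why pairing it with tools like AI Crawl Control matters.</p>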
    <div>
      <h3>A free press, if we can keep it</h3>
      <a href="#a-free-press-if-we-can-keep-it">
        
      </a>
    </div>
    <p>Journalism is part of the foundation of free society and democratic governance. It helps hold power accountable and provides a voice to the marginalized and underrepresented. It also protects the free and open markets that allow startups to challenge powerful incumbents. </p><p>Local news in particular helps create a shared identity, not only by covering community events, high school sports, farmers markets, and new businesses, but also by providing essential transparency and oversight over local officials, school boards, public safety events, and elections. </p><p>Helping protect journalists and news organizations online has always been part of Cloudflare's mission. We see it as essential to our business and the future of the Internet. </p><p>If you are interested in learning more about <a href="https://www.cloudflare.com/galileo/"><u>Project Galileo</u></a>, sign up today. If you are interested in helping build a better Internet, <a href="https://www.cloudflare.com/careers/"><u>come join us</u></a>.
</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[Impact]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">1aO7Ty9ZIj6nSXApr9xgmu</guid>
            <dc:creator>Patrick Day</dc:creator>
            <dc:creator>Jocelyn Woolbright</dc:creator>
        </item>
        <item>
            <title><![CDATA[The age of agents: cryptographically recognizing agent traffic]]></title>
            <link>https://blog.cloudflare.com/signed-agents/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare now lets websites and bot creators use Web Bot Auth to segment agents from verified bots, making it easier for customers to allow or disallow the many types of user- and partner-directed agents. ]]></description>
            <content:encoded><![CDATA[ <p>On the surface, the goal of handling bot traffic is clear: keep malicious bots away, while letting through the helpful ones. Some bots are evidently malicious — such as mass price scrapers or those testing stolen credit cards. Others are helpful, like the bots that index your website. Cloudflare has segmented this second category of helpful bot traffic through our <a href="https://developers.cloudflare.com/bots/concepts/bot/#verified-bots"><u>verified bots</u></a> program, <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>vetting</u></a> and validating bots that are transparent about who they are and what they do.</p><p>Today, the rise of <a href="https://agents.cloudflare.com/"><u>agents</u></a> has transformed how we interact with the Internet, often blurring the distinctions between benign and malicious bot actors. Bots are no longer directed only by the bot owners, but also by individual end users to act on their behalf. These bots directed by end users are often working in ways that website owners want to allow, such as planning a trip, ordering food, or making a purchase.</p><p>Our customers have asked us for easier, more granular ways to ensure specific <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/"><u>bots</u></a>, <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>crawlers</u></a>, and <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>agents</u></a> can reach their websites, while continuing to block bad actors. That’s why we’re excited to introduce <b>signed agents</b>, an extension of our verified bots program that gives a new bot classification in our security rules and in Radar. Cloudflare has long recognized agents — but we’re now endowing them with their own classification to make it even easier for our customers to set the traffic lanes they want for their website. </p>
    <div>
      <h2>The age of agents</h2>
      <a href="#the-age-of-agents">
        
      </a>
    </div>
    <p>Cloudflare has continuously expanded our verified bot categorization to include different functions as the market has evolved. For instance, we first announced our grouping of <a href="https://blog.cloudflare.com/ai-bots/"><u>AI crawler traffic as an official bot category</u></a> in 2023. And in 2024, when OpenAI announced a <a href="https://openai.com/index/searchgpt-prototype/"><u>new AI search prototype</u></a> and introduced <a href="https://platform.openai.com/docs/bots"><u>three different bots</u></a> with distinct purposes, we <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>added three new categories</u></a> to account for this innovation: AI Search, AI Assistant, and Archiver.</p><p>But the bot landscape is constantly evolving. Let's unpack a common type of verified AI bot — an AI crawler such as <a href="https://radar.cloudflare.com/bots/directory/gptbot"><u>GPTBot</u></a>. Even though the bot performs an array of tasks, the bot’s ultimate purpose is a singular, repetitive task on behalf of the operator of that bot: fetch and index information. Its intelligence is applied to performing that singular job on behalf of that bot owner. </p><p>Agents, though, are different. Think about an AI agent tasked by a user to "Book the best deal for a round-trip flight to New York City next month." These agents sometimes use remote browsing products like Cloudflare's <a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Rendering</u></a> and similar products from companies like Browserbase and Anchor Browser. And here is the key distinction: this particular type of bot isn’t operating on behalf of a single company, like OpenAI in the prior example, but rather the end users themselves. </p>
    <div>
      <h2>Introducing signed agents</h2>
      <a href="#introducing-signed-agents">
        
      </a>
    </div>
    <p>In May, we announced Web Bot Auth, a new method of <a href="https://blog.cloudflare.com/web-bot-auth/"><u>using cryptography to verify bot and agent traffic</u></a>. HTTP message signatures allow bots to authenticate themselves and allow customer origins to identify them. This is one of the authentication methods we use today for our verified bots program. </p><p>What, exactly, is a <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/"><u>signed agent</u></a>? First, signed agents are generally directed by an end user instead of a single company or entity. Second, the infrastructure or remote browsing platform the agents use signs their HTTP requests via Web Bot Auth, with Cloudflare validating these message signatures. And last, they comply with our <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/policy/"><u>signed agent policy</u></a>.</p><p>The signed agents classification improves on our existing frameworks in a couple of ways:</p><ol><li><p><b>Increased precision and visibility:</b> we’ve updated the <i>Cloudflare bots and agents directory to include signed agents</i> in addition to verified bots. This allows us to verify the cryptographic signatures of a much wider set of automated traffic, and our customers to granularly apply their security preferences more easily. Bot operators can now <i>submit signed agent applications from the Cloudflare dashboard</i>, allowing bot owners to specify to us how they think we should segment their automated traffic. </p></li><li><p><b>Easier controls from security rules</b>: similar to how they can take action on verified bots as a group, our Enterprise customers will be able to take action on <i>signed agents as a group when configuring their security rules</i>. 
This new field will be available in the Cloudflare dashboard under security rules soon.</p></li></ol><p>To apply to have an agent added to Cloudflare’s directory of bots and agents, customers should complete the <a href="https://dash.cloudflare.com?to=/:account/configurations/bot-submission-form"><u>Bot Submission Form</u></a> in the Cloudflare dashboard. Here, they can specify whether the submission should be considered for the signed agents list or the verified bots list. All signed agents will be recognized by their cryptographic signatures through <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>Web Bot Auth validation</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5caeGdhlmI3dO3GNZKeEUg/0dac239a94732404861b3876f6bdb8b6/BLOG-2930_2.png" />
          </figure><p><sub>The Bot Submission Form, available in the Cloudflare dashboard for bot owners to submit both verified bot and signed agent applications.</sub></p><p>We want to be clear: our verified bots program isn’t going anywhere. In fact, well-behaved and transparent applications that make use of signed agents can further qualify to be a verified bot, if their specific service adheres to our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>policy</u></a>. For instance,<a href="https://radar.cloudflare.com/scan"> <u>Cloudflare Radar's URL Scanner</u></a>, which relies on Browser Rendering as a service to scan URLs, is a <a href="https://radar.cloudflare.com/bots/directory/cloudflare-radar-url-scanner"><u>verified bot</u></a>. While Browser Rendering itself does not qualify to be a verified bot, URL Scanner does, since the bot owner (in this case, Cloudflare Radar) directs the traffic sent by the bot and always identifies itself with a unique Web Bot Auth signature — distinct from <a href="https://developers.cloudflare.com/browser-rendering/reference/automatic-request-headers/"><u>Browser Rendering’s signature</u></a>. </p>
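<p>On the wire, Web Bot Auth builds on HTTP Message Signatures (RFC 9421). A signed request carries headers along these lines (all values below are placeholders, not a real key or signature):</p>

```
Signature-Agent: "https://signer.example.com"
Signature-Input: sig1=("@authority" "signature-agent");created=1700000000;expires=1700000300;keyid="<thumbprint of signing key>";tag="web-bot-auth"
Signature: sig1=:<base64-encoded signature over the covered components>:
```

<p>The verifier fetches the signer's published public key and checks the signature over the covered components, so an agent's identity cannot be spoofed by simply copying a user-agent string.</p>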
    <div>
      <h2>From an agent’s perspective… </h2>
      <a href="#from-an-agents-perspective">
        
      </a>
    </div>
    <p>Since the launch of Web Bot Auth, our own Browser Rendering product has been sending signed Web Bot Auth HTTP headers, and is always given a bot score of 1 for our Bot Management customers. As of today, Browser Rendering will now show up in this new signed agent category. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1F8Z0E6WqJTxLf9G3PLB3a/84e80539be402066fe02ab60c431100a/BLOG-2930_3.png" />
          </figure><p>We’re also excited to announce the first cohort of agents that we’re partnering with and will be classifying as signed agents: <a href="https://openai.com/index/introducing-chatgpt-agent/"><u>ChatGPT agent</u></a>, <a href="https://block.xyz/inside/block-open-source-introduces-codename-goose"><u>Goose</u></a> from Block, <a href="https://docs.browserbase.com/introduction/what-is-browserbase"><u>Browserbase</u></a>, and <a href="https://anchorbrowser.io/"><u>Anchor Browser</u></a>. They are perfect examples of this new classification because their remote browsers are used by their end customers, not necessarily the companies themselves. We’re thrilled to partner with these teams to take this critical step for the AI ecosystem:</p><blockquote><p>“<i>When we built Goose as an open source tool, we designed it to run locally with an extensible architecture that lets developers automate complex workflows. As Goose has evolved to interact with external services and third-party sites on users' behalf, Web Bot Auth enables those sites to trust Goose while preserving what makes it unique. </i><b><i>This authentication breakthrough unlocks entirely new possibilities for autonomous agents</i></b>." – <b>Douwe Osinga</b>, Staff Software Engineer, Block</p></blockquote><blockquote><p><i>"At Browserbase, we provide web browsing capabilities for some of the largest AI applications. We're excited to partner with Cloudflare to support the adoption of Web Bot Auth, a critical layer of identity for agents. </i><b><i>For AI to thrive, agents need reliable, responsible web access.</i></b><i>"</i>  – <b>Paul Klein</b>, CEO, Browserbase</p></blockquote><blockquote><p><i>“Anchor Browser has partnered with Cloudflare to let developers ship verified browser agents. This way </i><b><i>trustworthy bots get reliable access while sites stay protected</i></b><i>.”</i> – <b>Idan Raman</b>, CEO, Anchor Browser</p></blockquote>
    <div>
      <h2>Updated visibility on Radar</h2>
      <a href="#updated-visibility-on-radar">
        
      </a>
    </div>
    <p>We want everyone to be in the know about our bot classifications. Cloudflare began publishing verified bots on our Radar page <a href="https://radar.cloudflare.com/bots#verified-bots"><u>back in 2022</u></a>, meaning anyone on the Internet — Cloudflare customer or not — can see all of our <a href="https://radar.cloudflare.com/bots#verified-bots"><u>verified bots on Radar</u></a>. We dynamically update the list of bots, but show more than just a list: we announced on <a href="https://www.cloudflare.com/en-gb/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/"><u>Content Independence Day</u></a> that <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#one-more-thing"><u>every verified bot would get its own page</u></a> in our public-facing directory on Radar, which includes the traffic patterns that we see for each bot.</p><p>Our directory has been updated to include <a href="https://radar.cloudflare.com/bots/directory"><b><u>both signed agents and verified bots</u></b></a> — we share exactly how Cloudflare classifies the bots that it recognizes, plus we surface all of the traffic that Cloudflare observes from these many recognized agents and bots. Through this updated directory, we’re not only giving better visibility to our customers, but also striving to set a higher standard for transparency of bot traffic on the Internet. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/65QPFjmbBde3EzHTOwElSL/cccc8f23c37716c251e0c21850855265/BLOG-2930_4.png" />
          </figure><p><sub>Cloudflare Radar’s Bots Directory, which lists verified bots and signed agents. This view is filtered to view only agent entries.</sub></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2wBz7UwrQQzT7rJJnXiF8C/16eed3f1afd95cac32c4bcb647c6e5e6/BLOG-2930_5.png" />
          </figure><p><sub>Cloudflare Radar’s signed agent page for ChatGPT agent, which includes its traffic patterns for the last 7 days, from August 21, 2025 to August 27, 2025. </sub></p>
    <div>
      <h2>What’s now, what’s next</h2>
      <a href="#whats-now-whats-next">
        
      </a>
    </div>
    <p>As of today, the Cloudflare bot directory supports both bots and agents in a more clear-cut way, and customers or agent creators can submit agents to be signed and recognized <a href="https://dash.cloudflare.com/?to=/:account/configurations/bot-submission-form"><u>through their account dashboard</u></a>. In addition, anyone can see our signed agents and their traffic patterns on Radar. Soon, customers will be able to take action on signed agents as a group within their firewall rules, the same way they can take action on our verified bots. </p><p>Agents are changing the way that humans interact with the Internet. Websites need to know what tools are interacting with them, and the builders of those tools need to be able to scale easily. Message signatures help achieve both of these goals, but this is only step one. Cloudflare will continue to make it easier for agents and websites to interact (or not!) at scale, in a seamless way. </p><p>
</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1LQFWI1jzZnWAqR4iFMLLi</guid>
            <dc:creator>Jin-Hee Lee</dc:creator>
        </item>
        <item>
            <title><![CDATA[The next step for content creators in working with AI bots: Introducing AI Crawl Control]]></title>
            <link>https://blog.cloudflare.com/introducing-ai-crawl-control/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launches AI Crawl Control (formerly AI Audit) and introduces easily customizable 402 HTTP responses. ]]></description>
            <content:encoded><![CDATA[ <p><i>Empowering content creators in the age of AI with smarter crawling controls and direct communication channels</i></p><p>Imagine you run a regional news site. Last month an AI bot scraped 3 years of archives in minutes — with no payment and little to no referral traffic. As a small company, you may struggle to get the AI company's attention for a licensing deal. Do you block all crawler traffic, or do you let them in and settle for the few referrals they send? </p><p>It’s picking between two bad options.</p><p>Cloudflare wants to help break that stalemate. On July 1st of this year, we declared <a href="https://www.cloudflare.com/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/"><u>Content Independence Day</u></a> based on a simple premise: creators deserve control of how their content is accessed and used. Today, we're taking the next step in that journey by releasing AI Crawl Control to general availability — giving content creators and AI crawlers an important new way to communicate.</p>
    <div>
      <h2>AI Crawl Control goes GA</h2>
      <a href="#ai-crawl-control-goes-ga">
        
      </a>
    </div>
    <p>Today, we're rebranding our AI Audit tool as <b>AI Crawl Control</b> and moving it from beta to <b>general availability</b>. This reflects the tool's evolution from simple monitoring to detailed insights and <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">control over how AI systems can access your content</a>. </p><p>The market response has been overwhelming: content creators across industries needed real agency, not just visibility. AI Crawl Control delivers that control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pIAbmCR0tTK71umann3w0/e570c5f898e3d399babf6d1f82c2f3d8/image3.png" />
          </figure>
    <div>
      <h2>Using HTTP 402 to help publishers license content to AI crawlers</h2>
      <a href="#using-http-402-to-help-publishers-license-content-to-ai-crawlers">
        
      </a>
    </div>
    <p>Many content creators have faced a binary choice: either block all AI crawlers and miss potential licensing opportunities and referral traffic, or allow them through without any compensation. There was no practical way to say "we're open for business, but let's talk terms first."</p><p>Our customers are telling us:</p><ul><li><p>We want to license our content, but crawlers don't know how to reach us. </p></li><li><p>Blanket blocking feels like we're closing doors on potential revenue and referral traffic. </p></li><li><p>We need a way to communicate our terms before crawling begins. </p></li></ul><p>To address these needs, we are making it easier than ever to send customizable <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/402">402 HTTP status codes</a>. </p><p>Our <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/#what-if-i-could-charge-a-crawler"><u>private beta launch of Pay Per Crawl</u></a> put the HTTP 402 (“Payment Required”) response code to use, working in tandem with Web Bot Auth to enable direct payments between agents and content creators. Today, we’re making customizable 402 response codes available to every paid Cloudflare customer — not just Pay Per Crawl users.</p><p>Here's how it works: in AI Crawl Control, paying Cloudflare customers will be able to select individual bots to block with a configurable message parameter and send 402 Payment Required responses. Think: "To access this content, email partnerships@yoursite.com or call 1-800-LICENSE" or "Premium content available via API at api.yoursite.com/pricing."</p><p>On an average day, Cloudflare customers are already sending over one billion 402 response codes. This shows a deep desire to move beyond blocking to open communication channels and new monetization models. 
With the 402 HTTP status code, content creators can tell crawlers exactly how to properly license their content, creating a direct path from crawling to a commercial agreement. We are excited to make this easier than ever in the AI Crawl Control dashboard. </p>
    <div>
      <h2>How to customize your 402 status code with AI Crawl Control</h2>
      <a href="#how-to-customize-your-402-status-code-with-ai-crawl-control">
        
      </a>
    </div>
    <p><b>For Paid Plan Users:</b></p><ul><li><p>When you block individual crawlers from the AI Crawl Control dashboard, you can now choose to send 402 Payment Required status codes and customize your message. For example: <b>To access this content, email partnerships@yoursite.com or call 1-800-LICENSE</b>.</p></li></ul><p>The response will look like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5v5x41azcAK14DBhXjXPEX/8c0960b4bb556d62e88d19c9dd544f12/image4.png" />
          </figure><p>The message can be configured from Settings in the AI Crawl Control Dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KMdRYwoey9RdYIxmzmFO1/7b39fd82d43349ee1cc4832cb602eb56/image1.png" />
          </figure>
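<p>From the crawler's side, a well-behaved client could treat such a response as a pointer to licensing terms rather than a dead end. A minimal sketch (the handler below is hypothetical; the 402 body text is whatever the site owner configured in the dashboard):</p>

```python
def interpret_crawl_response(status: int, body: str) -> tuple:
    """Map an HTTP status from a crawl attempt to a next action.

    On a 402, the body carries whatever licensing message the site
    owner configured; other statuses are handled generically here.
    """
    if status == 402:
        # Surface the owner's contact or licensing instructions.
        return ("contact-for-license", body.strip())
    if status in (401, 403):
        return ("blocked", None)
    if status == 200:
        return ("crawl-allowed", None)
    return ("retry-later", None)

action, terms = interpret_crawl_response(
    402, "To access this content, email partnerships@yoursite.com"
)
# action == "contact-for-license"; terms holds the owner's message
```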
    <div>
      <h2>Beyond just blocking AI bots</h2>
      <a href="#beyond-just-blocking-ai-bots">
        
      </a>
    </div>
    <p>This is just the beginning. We're planning to add additional parameters that will let crawlers understand the content's value, freshness, and licensing terms directly in the 402 response. Imagine crawlers receiving structured data about content quality and update frequency, for example, in addition to contact information.</p><p>Meanwhile, <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/">pay per crawl</a> continues advancing through beta, giving content creators the infrastructure to automatically monetize crawler access with transparent, usage-based pricing.</p><p>What excites us most is the market shift we're seeing. We're moving to a world where content creators have clear monetization paths to become active participants in the development of rich AI experiences. </p><p>The 402 response is a bridge between two industries that want to work together: content creators whose work fuels AI development, and AI companies who need high-quality data. Cloudflare’s AI Crawl Control creates the infrastructure for these partnerships to flourish.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/31Np3qX2ssbeGaJnZHQodA/92246d3618778715c2e8b295b7acaa29/image5.png" />
          </figure><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">3UcNgGUfIUIm0EEtNwgLAT</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Pulkita Kini</dc:creator>
            <dc:creator>Cam Whiteside</dc:creator>
        </item>
        <item>
            <title><![CDATA[Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives]]></title>
            <link>https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/</link>
            <pubDate>Mon, 04 Aug 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Perplexity is repeatedly modifying their user agent and changing IPs and ASNs to hide their crawling activity, in direct conflict with explicit no-crawl preferences expressed by websites. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure their crawling identity in an attempt to circumvent the website’s preferences. We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt</u> </a>files.</p><p>The Internet as we have known it for the past three decades is <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>rapidly changing</u></a>, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/">bot</a> and added heuristics to our managed rules that <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block this stealth crawling</a>.</p>
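<p>The simplest layer of such heuristics checks whether a request that claims a crawler identity actually originates from that crawler's published IP ranges; catching stealth traffic that impersonates a browser takes the machine-learning fingerprinting described below. A toy sketch of that first layer (the IP range is a documentation placeholder, not any vendor's real list):</p>

```python
import ipaddress

# Placeholder documentation range; a real check would load the
# crawler vendor's published IP list.
DECLARED_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

def classify_request(user_agent: str, client_ip: str) -> str:
    """Label a request as declared, declared-from-unlisted-IP, or unverified."""
    from_declared_ip = any(
        ipaddress.ip_address(client_ip) in net for net in DECLARED_RANGES
    )
    declares_itself = "Perplexity" in user_agent
    if declares_itself and from_declared_ip:
        return "declared"
    if declares_itself:
        # Claims the crawler identity but comes from an unlisted IP.
        return "undeclared-ip"
    return "unverified"
```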
    <div>
      <h3>How we tested</h3>
      <a href="#how-we-tested">
        
      </a>
    </div>
    <p>We received complaints from customers who had both disallowed Perplexity crawling activity in their <code>robots.txt</code> files and also created <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF rules</a> to specifically block both of Perplexity’s <a href="https://docs.perplexity.ai/guides/bots"><u>declared crawlers</u></a>: <code>PerplexityBot</code> and <code>Perplexity-User</code>. These customers told us that Perplexity was still able to access their content even when they saw its bots successfully blocked. We confirmed that Perplexity’s crawlers were in fact being blocked on the specific pages in question, and then performed several targeted tests to confirm what exact behavior we could observe.</p><p>We created multiple brand-new <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domains</a>, similar to <code>testexample.com</code> and <code>secretexample.com</code>. These domains were newly purchased and had not yet been indexed by any search engine nor made publicly accessible in any discoverable way. We implemented a <code>robots.txt</code> file with directives to stop any respectful bots from accessing any part of a website:  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66QyzKuX9DQqQYPvCZpw4m/78e7bbd4ff79dd2f1523e70ef54dab9e/BLOG-2879_-_2.png" />
          </figure><p>We conducted an experiment by querying Perplexity AI with questions about these domains, and discovered Perplexity was still providing detailed information regarding the exact content hosted on each of these restricted domains. This response was unexpected, as we had taken all necessary precautions to prevent this data from being retrievable by their <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>crawlers</u></a>.</p>
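<p>The disallow-all policy described above can be expressed and checked with Python's standard library. A minimal sketch (the domain name and the exact robots.txt content are assumptions matching the test setup described here):</p>

```python
from urllib.robotparser import RobotFileParser

# A disallow-all robots.txt, matching the policy served on the test domains
robots_txt = "User-agent: *\nDisallow: /\n"

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Any compliant crawler, including Perplexity's declared bots, must not fetch any path
assert not rp.can_fetch("PerplexityBot", "https://testexample.com/anything.html")
assert not rp.can_fetch("Perplexity-User", "https://testexample.com/")
```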
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/08ZLg0OE7vX8x35f9rDeg/a3086959793ac565b329fbbab5e52d1e/BLOG-2879_-_3.png" />
          </figure><p></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uHc0gooXlr98LB56KBb3g/b7dae5987a64f2442d1f89cf21e974ba/BLOG-2879_-_4.png" />
          </figure>
    <div>
      <h3>Obfuscating behavior observed</h3>
      <a href="#obfuscating-behavior-observed">
        
      </a>
    </div>
<p><b>Bypassing robots.txt with undisclosed IPs and user agents</b></p><p>Our test domains explicitly prohibited all automated access in their robots.txt files and had specific WAF rules that blocked crawling from <a href="https://docs.perplexity.ai/guides/bots"><u>Perplexity’s public crawlers</u></a>. We observed that Perplexity uses not only their declared user agent, but also a generic browser user agent intended to impersonate Google Chrome on macOS when their declared crawler is blocked. </p><table><tr><td><p>Declared</p></td><td><p>Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Perplexity-User/1.0; +https://perplexity.ai/perplexity-user)</p></td><td><p>20-25m daily requests</p></td></tr><tr><td><p>Stealth</p></td><td><p>Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36</p></td><td><p>3-6m daily requests</p></td></tr></table><p>Both their declared and undeclared crawlers attempted to scrape content contrary to the web crawling norms outlined in RFC <a href="https://datatracker.ietf.org/doc/html/rfc9309"><u>9309</u></a>.</p><p>This undeclared crawler used multiple IPs not listed in <a href="https://docs.perplexity.ai/guides/bots"><u>Perplexity’s official IP range</u></a>, and would rotate through these IPs in response to the restrictive robots.txt policy and the block from Cloudflare. In addition to rotating IPs, we observed requests coming from different ASNs in an attempt to further evade website blocks. This activity was observed across tens of thousands of domains and millions of requests per day. We were able to fingerprint this crawler using a combination of <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a> and network signals.</p><p>An example: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UKtFs1UPddDh9OCtMuwzC/bcdabf5fdd9b0d029581b14a90714d91/unnamed.png" />
          </figure><p>Of note: when the stealth crawler was successfully blocked, we observed that Perplexity uses other data sources — including other websites — to try to create an answer. However, these answers were less specific and lacked details from the original content, reflecting the fact that the block had been successful. </p>
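<p>This is why user-agent matching alone cannot stop the stealth crawler: a rule keyed on Perplexity's declared user agents matches the first string in the table above but not the second, which is indistinguishable from a real Chrome browser by user agent alone. A minimal sketch (the function name is illustrative, not an actual WAF rule):</p>

```python
import re

# Perplexity's declared crawler user agents (per their published docs)
DECLARED_PATTERNS = [r"PerplexityBot", r"Perplexity-User"]

def is_declared_perplexity(user_agent: str) -> bool:
    """True if the UA string matches one of Perplexity's declared crawlers."""
    return any(re.search(p, user_agent) for p in DECLARED_PATTERNS)

declared_ua = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
               "Perplexity-User/1.0; +https://perplexity.ai/perplexity-user)")
stealth_ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36")

assert is_declared_perplexity(declared_ua)
# The stealth UA looks like ordinary Chrome traffic, so UA rules miss it;
# catching it requires the machine learning and network signals described above
assert not is_declared_perplexity(stealth_ua)
```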
    <div>
      <h2>How well-meaning bot operators respect website preferences</h2>
      <a href="#how-well-meaning-bot-operators-respect-website-preferences">
        
      </a>
    </div>
    <p>In contrast to the behavior described above, the Internet has expressed clear preferences on how good crawlers should behave. All well-intentioned crawlers acting in good faith should:</p><p><b>Be transparent</b>. Identify themselves honestly, using a unique user-agent, a declared list of IP ranges or <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a> integration, and provide contact information if something goes wrong.</p><p><b>Be well-behaved netizens</b>. Don’t flood sites with excessive traffic, <a href="https://www.cloudflare.com/learning/bots/what-is-data-scraping/"><u>scrape</u></a> sensitive data, or use stealth tactics to try and dodge detection.</p><p><b>Serve a clear purpose</b>. Whether it’s powering a voice assistant, checking product prices, or making a website more accessible, every bot has a reason to be there. The purpose should be clearly and precisely defined and easy for site owners to look up publicly.</p><p><b>Separate bots for separate activities</b>. Perform each activity from a unique bot. This makes it easy for site owners to decide which activities they want to allow. Don’t force site owners to make an all-or-nothing decision. </p><p><b>Follow the rules</b>. That means checking for and respecting website signals like <code>robots.txt</code>, staying within rate limits, and never bypassing security protections.</p><p>More details are outlined in our official <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>Verified Bots Policy Developer Docs</u></a>.</p><p>OpenAI is an example of a leading AI company that follows these best practices. They clearly <a href="https://platform.openai.com/docs/bots"><u>outline their crawlers</u> and </a>give detailed explanations for each crawler’s purpose. They respect robots.txt and do not try to evade either a robots.txt directive or a network level block. 
And <a href="https://openai.com/index/introducing-chatgpt-agent/"><u>ChatGPT Agent</u></a> signs HTTP requests using the newly proposed open standard <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a>.</p><p>When we ran the same test as outlined above with ChatGPT, we found that ChatGPT-User fetched the robots.txt file and stopped crawling when it was disallowed. We did not observe follow-up crawls from any other user agents or third-party bots. When we removed the disallow directive from the robots.txt file but presented ChatGPT with a block page, they again stopped crawling, and we saw no additional crawl attempts from other user agents. Both behaviors demonstrate the appropriate response to website owner preferences.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HMJjS7DRmu4octZ99HX8K/753966a88476f80d7a981b1c135fd251/BLOG-2879_-_6.png" />
          </figure>
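<p>The well-behaved flow described above can be sketched as a single fetch routine that honors both a robots.txt disallow and a network-level block, stopping in either case rather than retrying under a different identity. The function and transport stub are illustrative, not any vendor's actual implementation:</p>

```python
from urllib.robotparser import RobotFileParser

def polite_fetch(fetch, base, path, agent):
    """A well-behaved crawl: consult robots.txt first, and back off on blocks."""
    status, body = fetch(base + "/robots.txt")
    if status == 200:
        rp = RobotFileParser()
        rp.parse(body.splitlines())
        if not rp.can_fetch(agent, base + path):
            return None  # disallowed by robots.txt: stop, no fallback identities
    status, body = fetch(base + path)
    if status in (401, 403):
        return None      # blocked at the network level: also stop
    return body

# Stub transport simulating a site whose robots.txt disallows everything
site = {"https://example.com/robots.txt": (200, "User-agent: *\nDisallow: /\n")}
fetch = lambda url: site.get(url, (200, "<html>content</html>"))
assert polite_fetch(fetch, "https://example.com", "/page", "ExampleBot") is None
```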
    <div>
      <h2>How can you protect yourself?</h2>
      <a href="#how-can-you-protect-yourself">
        
      </a>
    </div>
    <p>All the undeclared crawling activity that we observed from Perplexity’s hidden User Agent was scored by our <a href="https://www.cloudflare.com/application-services/products/bot-management/">bot management system </a>as a bot and was unable to pass managed challenges. Any bot management customer who has an existing block rule in place is already protected. Customers who don’t want to block traffic can set up rules to <a href="https://developers.cloudflare.com/waf/custom-rules/use-cases/challenge-bad-bots/"><u>challenge requests</u></a>, giving real humans an opportunity to proceed. Customers with existing challenge rules are already protected. Lastly, we added signature matches for the stealth crawler into our <a href="https://developers.cloudflare.com/bots/concepts/bot/#ai-bots"><u>managed rule</u></a> that <a href="https://developers.cloudflare.com/bots/additional-configurations/block-ai-bots/"><u>blocks AI crawling activity</u></a>. This rule is available to all customers, including our free customers.  </p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>It's been just over a month since we announced <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/">Content Independence Day</a>, giving content creators and publishers more control over how their content is accessed. Today, over two and a half million websites have chosen to completely disallow AI training through our managed robots.txt feature or our <a href="https://developers.cloudflare.com/bots/concepts/bot/#ai-bots"><u>managed rule blocking AI Crawlers</u></a>. Every Cloudflare customer is now able to selectively decide which declared AI crawlers are able to access their content in accordance with their business objectives.</p><p>We expected a change in bot and crawler behavior based on these new features, and we expect that the techniques bot operators use to evade detection will continue to evolve. Once this post is live the behavior we saw will almost certainly change, and the methods we use to stop them will keep evolving as well. </p><p>Cloudflare is actively working with technical and policy experts around the world, like the IETF efforts to standardize <a href="https://ietf-wg-aipref.github.io/drafts/draft-ietf-aipref-vocab.html?cf_target_id=_blank"><u>extensions to robots.txt</u></a>, to establish clear and measurable principles that well-meaning bot operators should abide by. We think this is an important next step in this quickly evolving space.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25VWBDa33UWxDOtqEVEx5o/41eb4ddc262551b83179c1c23a9cb1e6/BLOG-2879_-_7.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudforce One]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">6XJtrSa1t6frcelkMGuYOV</guid>
            <dc:creator>Gabriel Corral</dc:creator>
            <dc:creator>Vaibhav Singhal</dc:creator>
            <dc:creator>Brian Mitchell</dc:creator>
            <dc:creator>Reid Tatoris</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing pay per crawl: Enabling content owners to charge AI crawlers for access]]></title>
            <link>https://blog.cloudflare.com/introducing-pay-per-crawl/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Pay per crawl is a new feature to allow content creators to charge AI crawlers for access to their content.  ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>A changing landscape of consumption </h2>
      <a href="#a-changing-landscape-of-consumption">
        
      </a>
    </div>
    <p>Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden. But what if there was another way?</p><p>At Cloudflare, we started from a simple principle: we wanted content creators to have control over who accesses their work. If a creator wants to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI crawlers</a> from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too. Creators should be in the driver’s seat.</p><p>After hundreds of conversations with news organizations, publishers, and large-scale social media platforms, we heard a consistent desire for a third path: They’d like to allow AI crawlers to access their content, but they’d like to get compensated. Currently, that requires knowing the right individual and striking a one-off deal, which is an insurmountable challenge if you don’t have scale and leverage. </p>
    <div>
      <h2>What if I could charge a crawler? </h2>
      <a href="#what-if-i-could-charge-a-crawler">
        
      </a>
    </div>
    <p>We believe your choice need not be binary — there should be a third, more nuanced option: <b>You can charge for access.</b> Instead of a blanket block or uncompensated open access, we want to empower content owners to monetize their content at Internet scale.</p><p>We’re excited to help dust off a mostly forgotten piece of the web: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/402"><b><u>HTTP response code 402</u></b></a>.</p>
    <div>
      <h2>Introducing pay per crawl</h2>
      <a href="#introducing-pay-per-crawl">
        
      </a>
    </div>
    <p><a href="http://www.cloudflare.com/paypercrawl-signup/">Pay per crawl</a>, in private beta, is our first experiment in this area. </p><p>Pay per crawl integrates with existing web infrastructure, leveraging <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP status codes</a> and established authentication mechanisms to create a framework for paid content access. </p><p>Each time an AI crawler requests content, they either present payment intent via request headers for successful access (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/200"><u>HTTP response code 200</u></a>), or receive a <code>402 Payment Required</code> response with pricing. Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure.</p>
    <div>
      <h3>Publisher controls and pricing</h3>
      <a href="#publisher-controls-and-pricing">
        
      </a>
    </div>
    <p>Pay per crawl grants domain owners full control over their monetization strategy. They can define a flat, per-request price across their entire site. Publishers will then have three distinct options for a crawler:</p><ul><li><p><b>Allow:</b> Grant the crawler free access to content.</p></li><li><p><b>Charge:</b> Require payment at the configured, domain-wide price.</p></li><li><p><b>Block:</b> Deny access entirely, with no option to pay.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PhxxI7f3Teb521mPRFQUL/1ecfd01f60f165b35c27ab9457f8b152/image3.png" />
          </figure><p>An important mechanism here is that even if a crawler doesn’t have a billing relationship with Cloudflare, and thus couldn’t be charged for access, a publisher can still choose to ‘charge’ them. This is the functional equivalent of a network level block (an HTTP <code>403 Forbidden</code> response where no content is returned) — but with the added benefit of telling the crawler there could be a relationship in the future. </p><p>While publishers currently can define a flat price across their entire site, they retain the flexibility to bypass charges for specific crawlers as needed. This is particularly helpful if you want to allow a certain crawler through for free, or if you want to negotiate and execute a content partnership outside the pay per crawl feature. </p><p>To ensure integration with each publisher’s existing security posture, Cloudflare enforces Allow or Charge decisions via a rules engine that operates only after existing WAF policies and <a href="https://www.cloudflare.com/learning/bots/what-is-bot-management/">bot management</a> or bot blocking features have been applied.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NI9GUkR8RmmApQyOgb1mI/4f77c199ccdc5ebc166204cdaec72c48/image2.png" />
          </figure>
    <div>
      <h3>Payment headers and access</h3>
      <a href="#payment-headers-and-access">
        
      </a>
    </div>
    <p>As we were building the system, we knew we had to solve an incredibly important technical challenge: ensuring we could charge a specific crawler, but prevent anyone from spoofing that crawler. Thankfully, there’s a way to do this using <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a> proposals.</p><p>For crawlers, <a href="https://blog.cloudflare.com/web-bot-auth/"><u>this involves:</u></a></p><ul><li><p>Generating an Ed25519 key pair, and making the <a href="https://datatracker.ietf.org/doc/html/rfc7517"><u>JWK</u></a>-formatted public key available in a hosted directory</p></li><li><p>Registering with Cloudflare to provide the URL of your key directory and user agent information.</p></li><li><p>Configuring your crawler to use <a href="https://datatracker.ietf.org/doc/rfc9421/"><u>HTTP Message Signatures</u></a> with each request.</p></li></ul><p>Once registration is accepted, crawler requests should always include <code>signature-agent</code>, <code>signature-input</code>, and <code>signature</code> headers to identify your crawler and discover paid resources.</p>
            <pre><code>GET /example.html
Signature-Agent: "https://signature-agent.example.com"
Signature-Input: sig2=("@authority" "signature-agent")
 ;created=1735689600
 ;keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U"
 ;alg="ed25519"
 ;expires=1735693200
 ;nonce="e8N7S2MFd/qrd6T2R3tdfAuuANngKI7LFtKYI/vowzk4lAZYadIX6wW25MwG7DCT9RUKAJ0qVkU0mEeLElW1qg=="
 ;tag="web-bot-auth"
Signature: sig2=:jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
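<p>The first of those steps, generating an Ed25519 key pair and exporting its public half as a JWK for the hosted key directory, can be sketched with the widely used <code>cryptography</code> package (an assumption for illustration; any Ed25519 implementation or the Web Bot Auth libraries will do):</p>

```python
import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def b64url(data: bytes) -> str:
    # JWKs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Generate the signing key pair; keep the private key secret
private_key = Ed25519PrivateKey.generate()
public_raw = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# JWK entry for the hosted key directory (OKP/Ed25519 keys per RFC 8037)
jwk = {"kty": "OKP", "crv": "Ed25519", "x": b64url(public_raw)}
key_directory = {"keys": [jwk]}
```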
            
    <div>
      <h3>Accessing paid content</h3>
      <a href="#accessing-paid-content">
        
      </a>
    </div>
    <p>Once a crawler is set up, determination of whether content requires payment can happen via two flows:</p>
    <div>
      <h4>Reactive (discovery-first)</h4>
      <a href="#reactive-discovery-first">
        
      </a>
    </div>
    <p>Should a crawler request a paid URL, Cloudflare returns an <code>HTTP 402 Payment Required</code> response, accompanied by a <code>crawler-price</code> header. This signals that payment is required for the requested resource.</p>
            <pre><code>HTTP 402 Payment Required
crawler-price: USD XX.XX</code></pre>
            <p> The crawler can then decide to retry the request, this time including a <code>crawler-exact-price</code> header to indicate agreement to pay the configured price.</p>
            <pre><code>GET /example.html
crawler-exact-price: USD XX.XX </code></pre>
            
    <div>
      <h4>Proactive (intent-first)</h4>
      <a href="#proactive-intent-first">
        
      </a>
    </div>
    <p>Alternatively, a crawler can preemptively include a <code>crawler-max-price</code> header in its initial request.</p>
            <pre><code>GET /example.html
crawler-max-price: USD XX.XX</code></pre>
            <p>If the price configured for a resource is equal to or below this specified limit, the request proceeds, and the content is served with a successful <code>HTTP 200 OK</code> response, confirming the charge:</p>
            <pre><code>HTTP 200 OK
crawler-charged: USD XX.XX 
server: cloudflare</code></pre>
<p>If the amount in a <code>crawler-max-price</code> request is greater than the content owner’s configured price, only the configured price is charged. However, if the resource’s configured price exceeds the maximum price offered by the crawler, an <code>HTTP 402 Payment Required</code> response is returned, indicating the specified cost. Only a single price declaration header, <code>crawler-exact-price</code> or <code>crawler-max-price</code>, may be used per request.</p><p>The <code>crawler-exact-price</code> or <code>crawler-max-price</code> headers explicitly declare the crawler's willingness to pay. If all checks pass, the content is served, and the crawl event is logged. If any aspect of the request is invalid, the edge returns an <code>HTTP 402 Payment Required</code> response.</p>
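<p>From the crawler's side, the two flows compose naturally: lead with a ceiling, and treat a 402 as the reactive price discovery. A sketch using the header names above (the transport stub, helper name, and prices are illustrative):</p>

```python
def crawl_with_budget(fetch, url, max_price):
    """Proactive flow: declare a spending ceiling up front; fall back to the
    reactive 402 response to learn the configured price. Illustrative sketch."""
    status, headers, body = fetch(url, {"crawler-max-price": max_price})
    if status == 200:
        # Served; charged at most the configured (not the offered) price
        return body, headers.get("crawler-charged")
    if status == 402:
        # Price exceeds our ceiling; a crawler could retry with a
        # crawler-exact-price header if it agrees to pay the quoted amount
        return None, headers.get("crawler-price")
    return None, None

# Stub edge simulating a resource priced at USD 0.05
def fetch(url, req_headers):
    offered = float(req_headers.get("crawler-max-price", "USD 0").split()[1])
    if offered >= 0.05:
        return 200, {"crawler-charged": "USD 0.05"}, "<html>content</html>"
    return 402, {"crawler-price": "USD 0.05"}, ""

assert crawl_with_budget(fetch, "/example.html", "USD 0.10")[1] == "USD 0.05"
assert crawl_with_budget(fetch, "/example.html", "USD 0.01")[0] is None
```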
    <div>
      <h3>Financial settlement</h3>
      <a href="#financial-settlement">
        
      </a>
    </div>
    <p>Crawler operators and content owners must configure pay per crawl payment details in their Cloudflare account. Billing events are recorded each time a crawler makes an authenticated request with payment intent and receives an HTTP 200-level response with a <code>crawler-charged</code> header. Cloudflare then aggregates all the events, charges the crawler, and distributes the earnings to the publisher.</p>
    <div>
      <h2>Content for crawlers today, agents tomorrow </h2>
      <a href="#content-for-crawlers-today-agents-tomorrow">
        
      </a>
    </div>
<p>At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable. </p><p>We expect pay per crawl to evolve significantly. It’s very early: we believe many different types of interactions and marketplaces can and should develop simultaneously. We are excited to support these various efforts and open standards.</p><p>For example, a publisher or news organization might want to charge different rates for different paths or content types. How do you introduce dynamic pricing based not only upon demand, but also upon how many users your AI application has? How do you introduce granular licenses at Internet scale, whether for training, <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/">inference</a>, search, or something entirely new?</p><p>The true potential of pay per crawl may emerge in an <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agentic</a> world. What if an agentic paywall could operate entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content. By anchoring our first solution on <b>HTTP response code 402</b>, we enable a future where intelligent agents can programmatically negotiate access to digital resources. </p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Pay per crawl is currently in private beta. We’d love to hear from you if you’re either a crawler interested in paying to access content or a content creator interested in charging for access. You can reach out to us at <a href="http://www.cloudflare.com/paypercrawl-signup/"><u>http://www.cloudflare.com/paypercrawl-signup/</u></a> or contact your Account Executive if you’re an existing Enterprise customer.</p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">7AJ8tUOFDvk5mCTrDjBPDq</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Simon Newton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Message Signatures are now part of our Verified Bots Program, simplifying bot authentication]]></title>
            <link>https://blog.cloudflare.com/verified-bots-with-cryptography/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Bots can start authenticating to Cloudflare using public key cryptography, preventing them from being spoofed and allowing origins to have confidence in their identity. ]]></description>
            <content:encoded><![CDATA[ <p>As a site owner, how do you know which bots to allow on your site, and which you’d like to block? Existing identification methods rely on a combination of IP address range (which may be shared by other services, or change over time) and user-agent header (easily spoofable). These have limitations and deficiencies. In our <a href="https://blog.cloudflare.com/web-bot-auth/"><u>last blog post</u></a>, we proposed using HTTP Message Signatures: a way for developers of bots, agents, and crawlers to clearly identify themselves by cryptographically signing requests originating from their service. </p><p>Since we published the blog post on Message Signatures and the <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>IETF draft for Web Bot Auth</u></a> in May 2025, we’ve seen significant interest around implementing and deploying Message Signatures at scale. It’s clear that well-intentioned bot owners want a clear way to identify their bots to site owners, and site owners want a clear way to identify and manage bot traffic. Both parties seem to agree that deploying cryptography for the purposes of authentication is the right solution.     </p><p>Today, we’re announcing that we’re integrating HTTP Message Signatures directly into our <b>Verified Bots Program</b>. This announcement has two main parts: (1) for bots, crawlers, and agents, we’re simplifying enrollment into the Verified Bots program for those who sign requests using Message Signatures, and (2) we’re encouraging <i>all bot operators moving forward </i>to use Message Signatures over existing verification mechanisms. 
Because Verified Bots are already authenticated, our Bot Management does not serve them challenges: they have already identified themselves as bots.</p><p>For site owners, no additional action is required – Cloudflare will automatically validate signatures on our edge, and if validation succeeds, that traffic will be marked as verified so that site owners can use the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/categories/"><u>verified bot fields</u></a> to create Bot Management and <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF rules</u></a> based on it. </p><p>This isn't just about simplifying things for bot operators — it’s about giving website owners unparalleled accuracy in identifying trusted bot traffic, cutting down on the overhead for cryptographic verification, and fundamentally transforming how we manage authentication across the Cloudflare network.</p>
    <div>
      <h2>Become a Verified Bot with Message Signatures</h2>
      <a href="#become-a-verified-bot-with-message-signatures">
        
      </a>
    </div>
    <p>Cloudflare’s existing <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/"><u>Verified Bots program</u></a> is for bots that are transparent about who they are and what they do, like indexing sites for search or scanning for security vulnerabilities. You can see a list of these verified bots in <a href="https://radar.cloudflare.com/bots#verified-bots"><u>Cloudflare Radar</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lMYno3QOwtwfTDDgeqFx8/c69088229dcf9fc08f5a76ce7e0a0354/1.png" />
</figure><p><sup><i>A preview of the Verified Bots page on Cloudflare Radar. </i></sup></p><p>In the past, when you <a href="https://dash.cloudflare.com/?to=/:account/configurations/verified-bots"><u>applied</u></a> to be a verified bot, we asked for IP address ranges or reverse DNS names so that we could verify your identity. This required some manual steps, like checking that the IP address range is valid and is associated with the appropriate <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASN</u></a>. </p><p>With the integration of Message Signatures, we’re aiming to streamline applications to our Verified Bots program. Bots applying with well-formed Message Signatures will be prioritized and approved more quickly! </p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
<p>In order to make generating Message Signatures as easy as possible, Cloudflare is providing two open source libraries: a <a href="https://crates.io/crates/web-bot-auth"><u>web-bot-auth library in Rust</u></a>, and a <a href="https://www.npmjs.com/package/web-bot-auth"><u>web-bot-auth npm package in TypeScript</u></a>. If you’re working on a different implementation, <a href="https://www.cloudflare.com/lp/verified-bots/"><u>let us know</u></a> – we’d love to add it to our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>developer docs</u></a>!</p><p>At a high level, signing your requests with Web Bot Auth consists of the following steps: </p><ul><li><p>Generate a valid signing key. See the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/#1-generate-a-valid-signing-key"><u>Signing Key section</u></a> for step-by-step instructions.</p></li><li><p>Host a JSON web key set containing your public key under <code>/.well-known/http-message-signature-directory</code> of your website.</p></li><li><p>Sign responses served from that directory URL using a Web Bot Auth library, with one signature for each key the set contains, to prove you own the keys. See the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/#2-host-a-key-directory"><u>Hosting section</u></a> for step-by-step instructions.</p></li><li><p>Register that URL with us, using our Verified Bots form. This can be done directly in your Cloudflare account. See <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/overview/"><u>our documentation</u></a>.</p></li><li><p>Sign requests using a Web Bot Auth library. </p></li></ul><p>
As an example, <a href="https://radar.cloudflare.com/scan"><u>Cloudflare Radar's URL Scanner</u></a> lets you scan any URL and get a publicly shareable report with security, performance, technology, and network information. Here’s an example of what a well-formed signature looks like for requests coming from URL Scanner:</p>
            <pre><code>GET /path/to/resource HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
Signature-Agent: "https://web-bot-auth-directory.radar-cfdata-org.workers.dev"
Signature-Input: sig=("@authority" "signature-agent");\
             	 created=1700000000;\
             	 expires=1700011111;\
             	 keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U";\
             	 tag="web-bot-auth"
Signature: sig=:jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
            <p>Since we’ve already registered URLScanner as a Verified Bot, Cloudflare will now automatically verify that the signature in the <code>Signature</code> header matches the request — more on that later.</p>
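<p>Under the hood, the bytes that get signed are a signature base assembled from the covered components in <code>Signature-Input</code>. A simplified sketch of that construction, using the example values above (real deployments should rely on a Web Bot Auth library rather than hand-rolling RFC 9421 serialization):</p>

```python
def signature_base(authority, signature_agent, params):
    """Simplified RFC 9421 signature base for the covered components above.
    Each covered component becomes one '"name": value' line, followed by the
    @signature-params line; the Ed25519 signature is computed over the result."""
    return "\n".join([
        f'"@authority": {authority}',
        f'"signature-agent": "{signature_agent}"',
        f'"@signature-params": ("@authority" "signature-agent"){params}',
    ])

params = (';created=1700000000;expires=1700011111'
          ';keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U";tag="web-bot-auth"')
base = signature_base("www.example.com",
                      "https://web-bot-auth-directory.radar-cfdata-org.workers.dev",
                      params)
# The base64 value in the Signature header is the Ed25519 signature over these bytes
```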
    <div>
      <h2>Register your bot</h2>
      <a href="#register-your-bot">
        
      </a>
    </div>
    <p>Access the <a href="https://dash.cloudflare.com/?to=/:account/configurations/verified-bots"><u>Verified Bots submission form</u></a> on your account. If that link does not immediately take you there, go to <i>your Cloudflare account</i> →  <i>Account Home</i>  → <i>the three dots next to your account name</i>  → <i>Configurations</i> → <i>Verified Bots.</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73yQcvLmiVDe19HJXYvBIc/ca2bdb2bb81addc29583568087c2ccc2/3.png" />
          </figure><p>If you do not have a Cloudflare account, you can <a href="https://dash.cloudflare.com/sign-up"><u>sign up for a free one</u></a>.</p><p>For the verification method, select "Request Signature", then enter the URL of your key directory in Validation Instructions. Specifying the User-Agent values is optional if you’re submitting a Request Signature bot. </p><p>Once your application has gone through our (now shortened) review process, you don’t need to take any further action.</p>
    <div>
      <h2>Message Signature verification for origins</h2>
      <a href="#message-signature-verification-for-origins">
        
      </a>
    </div>
    <p>Starting today, Cloudflare is ramping up verification of <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>cryptographic signatures provided by automated crawlers and bots</u></a>. This is currently available for all Free and Pro plans, and as we continue to test and validate at scale, it will be released to all Business and Enterprise plans. This means that, over time, the number of unauthenticated web crawlers should diminish, ensuring most bot traffic is authenticated before it reaches your website’s servers and helping to prevent spoofing attacks.</p><p>At a high level, signature verification works like this: </p><ol><li><p>A bot or agent sends a request to a website behind Cloudflare.</p></li><li><p>Cloudflare’s Message Signature verification service checks for the <code>Signature</code>, <code>Signature-Input</code>, and <code>Signature-Agent</code> headers.</p></li><li><p>It checks that the incoming request presents a <code>keyid</code> parameter in your <code>Signature-Input</code> header that points to a key we already know.</p></li><li><p>It looks at the <code>expires</code> parameter in the incoming bot request. If the current time is after expiration, verification fails. This guards against replay attacks, preventing malicious agents from trying to pass as a bot by replaying messages they captured in the past.</p></li><li><p>It checks that you’ve specified a <code>tag</code> parameter of <code>web-bot-auth</code>, signaling your intent that the message be handled using web bot authentication specifically.</p></li><li><p>It looks at all the <a href="https://www.rfc-editor.org/rfc/rfc9421#covered-components"><u>components</u></a> chosen in your <code>Signature-Input</code> header, and constructs <a href="https://www.rfc-editor.org/rfc/rfc9421#name-creating-the-signature-base"><u>a signature base</u></a> from it. 
</p></li><li><p>If all pre-flight checks pass, Cloudflare attempts to verify the signature base against the value in the <code>Signature</code> field using an <a href="https://www.rfc-editor.org/rfc/rfc9421#name-eddsa-using-curve-edwards25"><u>Ed25519 verification algorithm</u></a> and the key supplied in <code>keyid</code>.</p></li><li><p>Verified Bots and other systems at Cloudflare use a successful verification as proof of your identity, and apply rules corresponding to that identity.</p></li></ol><p>If any of the above steps fail, Cloudflare falls back to existing bot identification and mitigation mechanisms. As the system matures, we will strengthen these requirements and limit the possibility of a soft downgrade.</p>
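<p>The steps above can be sketched in a few lines of TypeScript. This is a simplified illustration, not Cloudflare’s implementation: it assumes the only covered components are <code>"@authority"</code> and <code>"signature-agent"</code>, that the bot’s Ed25519 public key has already been fetched from its key directory, and it elides structured-field parsing:</p>

```typescript
// Sketch of the verification flow described above (illustrative only).
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Step 6: rebuild the RFC 9421 signature base from the covered components.
function signatureBase(authority: string, agent: string, params: string): string {
  return [
    `"@authority": ${authority}`,
    `"signature-agent": ${agent}`,
    `"@signature-params": ("@authority" "signature-agent")${params}`,
  ].join("\n");
}

const now = Math.floor(Date.now() / 1000);
const expires = now + 300;
const params = `;created=${now};expires=${expires};keyid="example-key";tag="web-bot-auth"`;
const base = signatureBase("www.example.com", `"signer.example.com"`, params);

// The bot signs the base with its private key (key pair generated here
// purely for illustration)...
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const sig = sign(null, Buffer.from(base), privateKey);

// ...and the verifier performs steps 4, 5, and 7: freshness, the
// web-bot-auth tag, then an Ed25519 check of the rebuilt base.
const ok =
  Math.floor(Date.now() / 1000) < expires &&
  params.includes('tag="web-bot-auth"') &&
  verify(null, Buffer.from(base), publicKey, sig);
console.log(ok); // → true
```

<p>Note that the verifier never needs a shared secret: any change to the authority, the parameters, or the signature itself makes the Ed25519 check fail.</p>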
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/128Ox15wBqBPVKUUzvn4gA/acca9b9e6df243b8317b8964285ce57c/2.png" />
          </figure><p>As a site owner, you can segment your Verified Bot traffic by its type and purpose by adding the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/categories/"><u>Verified Bot Categories</u></a> field <code>cf.verified_bot_category</code> as a filter criterion in <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF Custom rules</u></a>, <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/"><u>Advanced Rate Limiting</u></a>, and Late <a href="https://developers.cloudflare.com/rules/transform/"><u>Transform rules</u></a>. For instance, to allow crawlers from the Bibliothèque nationale de France, the Library of Congress, and other institutions dedicated to academic research, you can add a rule that allows bots in the <code>Academic Research</code> category.</p>
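<p>As a sketch, a custom rule expression using this field would look like the following (the rule’s action, such as Skip or Block, depends on your policy; other category names follow the same shape):</p>

```
(cf.verified_bot_category eq "Academic Research")
```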
    <div>
      <h2>Where we’re going next</h2>
      <a href="#where-were-going-next">
        
      </a>
    </div>
    <p>HTTP Message Signatures is a primitive that is useful beyond Cloudflare – the IETF standardized it as part of <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>RFC 9421</u></a>.</p><p>As discussed in our <a href="https://blog.cloudflare.com/web-bot-auth/#introducing-http-message-signatures"><u>previous blog post</u></a>, Cloudflare believes that making Message Signatures a core component of bot authentication on the web should follow the same path. The <a href="https://www.ietf.org/archive/id/draft-meunier-web-bot-auth-architecture-02.html"><u>specifications</u></a> for the protocol are being built in the open, and they have already evolved following feedback.</p><p>Moreover, due to widespread interest, the IETF is considering forming a working group around <a href="https://datatracker.ietf.org/wg/webbotauth/about/"><u>Web Bot Auth</u></a>. Should you be a crawler, an origin, or even a CDN, we invite you to provide feedback to ensure the solution gets stronger, and suits your needs.</p>
    <div>
      <h2>A better, more trusted Internet</h2>
      <a href="#a-better-more-trusted-internet">
        
      </a>
    </div>
    <p>For bot, agent, and crawler operators that act transparently and provide vital services for the Internet, we’re providing a faster and more automated path to being recognized as a Verified Bot, reducing manual processes. We believe this approach replaces formerly brittle and unreliable authentication methods with a secure and reliable alternative, and it should reduce the friction and hurdles genuinely useful bots face.</p><p>For site owners, Message Signatures provide better assurance that bot traffic is legitimate — automatically recognized and allowed, minimizing disruption to essential services (e.g., search engine indexing, monitoring). In line with our commitments to making TLS/<a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>SSL</u></a> and <a href="https://blog.cloudflare.com/pt-br/post-quantum-zero-trust/"><u>Post-Quantum</u></a> certificates available for everyone, we’ll always offer cryptographic verification of Message Signatures for all sites, because fostering a trusted environment for both human and automated traffic makes for a safer and more efficient Internet.</p><p>If you have a feature request, feedback, or are interested in partnering with us, please <a href="https://www.cloudflare.com/lp/verified-bots/"><u>reach out</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">5K5btgE8vXWGaGxCrs5yFH</guid>
            <dc:creator>Mari Galicer</dc:creator>
            <dc:creator>Akshat Mahajan</dc:creator>
            <dc:creator>Gauri Baraskar</dc:creator>
            <dc:creator>Helen Du</dc:creator>
        </item>
        <item>
            <title><![CDATA[Forget IPs: using cryptography to verify bot and agent traffic]]></title>
            <link>https://blog.cloudflare.com/web-bot-auth/</link>
            <pubDate>Thu, 15 May 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Bots now browse like humans. We're proposing bots use cryptographic signatures so that website owners can verify their identity. Explanations and demonstration code can be found within the post. ]]></description>
            <content:encoded><![CDATA[ <p>With the rise of traffic from <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">AI agents</a>, what’s considered a bot is no longer clear-cut. There are some clearly malicious bots, like ones that DoS your site or do <a href="https://www.cloudflare.com/learning/bots/what-is-credential-stuffing/">credential stuffing</a>, and there are bots that most site owners do want interacting with their site, like the bot that indexes your site for a search engine, or ones that fetch RSS feeds.</p><p>Historically, Cloudflare has relied on two main signals to distinguish legitimate web crawlers from other types of automated traffic: user agent headers and IP addresses. The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/User-Agent"><code><u>User-Agent</u></code><u> header</u></a> allows bot developers to identify themselves, i.e. <code>MyBotCrawler/1.1</code>. However, user agent headers alone are easily spoofed and are therefore insufficient for reliable identification. To address this, user agent checks are often supplemented with <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/#ip-validation"><u>IP address validation</u></a>, the inspection of published IP address ranges to confirm a crawler's authenticity. However, the logic around IP address ranges representing a product or group of users is brittle – connections from the crawling service might be shared by multiple users, such as in the case of <a href="https://blog.cloudflare.com/icloud-private-relay/"><u>privacy proxies</u></a> and VPNs, and these ranges, often maintained by cloud providers, change over time.</p><p>Cloudflare will always try to block malicious bots, but we think our role here is to also provide an affirmative mechanism to authenticate desirable bot traffic.
By using well-established cryptography techniques, we’re proposing a better mechanism for legitimate agents and bots to declare who they are, and provide a clearer signal for site owners to decide what traffic to permit. </p><p><b>Today, we’re introducing two proposals – HTTP message signatures and request mTLS – for </b><a href="https://blog.cloudflare.com/friendly-bots/"><b><u>friendly bots</u></b></a><b> to authenticate themselves, and for customer origins to identify them. </b>In this blog post, we’ll share how these authentication mechanisms work, how we implemented them, and how you can participate in our closed beta.</p>
    <div>
      <h2>Existing bot verification mechanisms are broken </h2>
      <a href="#existing-bot-verification-mechanisms-are-broken">
        
      </a>
    </div>
    <p>Historically, if you’ve worked on ChatGPT, Claude, Gemini, or any other agent, you’ve had several options to identify your HTTP traffic to other services: </p><ol><li><p>You define a <a href="https://www.rfc-editor.org/rfc/rfc9110#name-user-agent"><u>user agent</u></a>, an HTTP header described in <a href="https://www.rfc-editor.org/rfc/rfc9110.html#name-user-agent"><u>RFC 9110</u></a>. The problem here is that this header is easily spoofable and there’s not a clear way for agents to identify themselves as semi-automated browsers — agents often use the Chrome user agent for this very reason, which is discouraged. The RFC <a href="https://www.rfc-editor.org/rfc/rfc9110.html#section-10.1.5-9"><u>states</u></a>: 
<i>“If a user agent masquerades as a different user agent, recipients can assume that the user intentionally desires to see responses tailored for that identified user agent, even if they might not work as well for the actual user agent being used.” </i> </p></li><li><p>You publish your IP address range(s). This has limitations because the same IP address might be shared by multiple users or multiple services within the same company, or even by multiple companies when hosting infrastructure is shared (like <a href="https://www.cloudflare.com/developer-platform/products/workers/">Cloudflare Workers</a>, for example). In addition, IP addresses are prone to change as underlying infrastructure changes, leading services to use ad-hoc sharing mechanisms like <a href="https://www.cloudflare.com/ips-v4"><u>CIDR lists</u></a>. </p></li><li><p>You go to every website and share a secret, like a <a href="https://www.rfc-editor.org/rfc/rfc6750"><u>Bearer</u></a> token. This is impractical at scale because it requires developers to maintain separate tokens for each website their bot will visit.</p></li></ol><p>We can do better! Instead of these arduous methods, we’re proposing that developers of bots and agents cryptographically sign requests originating from their service. When protecting origins, <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/">reverse proxies</a> such as Cloudflare can then validate those signatures to confidently identify the request source on behalf of site owners, allowing them to take action as they see fit. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yB6h6XcSWNQO5McRWIpL8/edf32f7938b01a4c8f5eedefee2b9328/image2.png" />
          </figure><p>A typical system has three actors:</p><ul><li><p>User: the entity that wants to perform some actions on the web. This may be a human, an automated program, or anything taking action to retrieve information from the web.</p></li><li><p>Agent: an orchestrated browser or software program. For example, Chrome on your computer, or OpenAI’s <a href="https://operator.chatgpt.com/"><u>Operator</u></a> with ChatGPT. Agents can interact with the web according to web standards (HTML rendering, JavaScript, subrequests, etc.).</p></li><li><p>Origin: the website hosting a resource. The user wants to access it through the browser. This is Cloudflare when your website is using our services, and it’s your own server(s) when exposed directly to the Internet.</p></li></ul><p>In the next section, we’ll dive into HTTP Message Signatures and request mTLS, two mechanisms a browser agent may implement to sign outgoing requests, with different levels of ease for an origin to adopt. </p>
    <div>
      <h2>Introducing HTTP Message Signatures</h2>
      <a href="#introducing-http-message-signatures">
        
      </a>
    </div>
    <p><a href="https://www.rfc-editor.org/rfc/rfc9421.html"><u>HTTP Message Signatures</u></a> is a standard that defines the cryptographic authentication of a request sender. It’s essentially a cryptographically sound way to say, “hey, it’s me!”. It’s not the only way that developers can sign requests from their infrastructure — for example, AWS has used <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html"><u>Signature v4</u></a>, and Stripe has a framework for <a href="https://docs.stripe.com/webhooks#verify-webhook-signatures-with-official-libraries"><u>authenticating webhooks</u></a> — but Message Signatures is a published standard, and the cleanest, most developer-friendly way to sign requests.</p><p>We’re working closely with the wider industry to support these standards-based approaches. For example, OpenAI has started to sign their requests. In their own words:</p><blockquote><p><i>"Ensuring the authenticity of Operator traffic is paramount. With HTTP Message Signatures (</i><a href="https://www.rfc-editor.org/rfc/rfc9421.html"><i><u>RFC 9421</u></i></a><i>), OpenAI signs all Operator requests so site owners can verify they genuinely originate from Operator and haven’t been tampered with” </i>– Eugenio, Engineer, OpenAI</p></blockquote><p>Without further delay, let’s dive into how HTTP Message Signatures works to identify bot traffic.</p>
    <div>
      <h3>Scoping standards to bot authentication</h3>
      <a href="#scoping-standards-to-bot-authentication">
        
      </a>
    </div>
    <p>Generating a message signature works like this: before sending a request, the agent signs the target origin with its private key. When fetching <code>https://example.com/path/to/resource</code>, it signs <code>example.com</code>. The corresponding public key is known to the origin, either because the agent is well known, because it has previously registered, or through any other method. Then, the agent writes a <b>Signature-Input</b> header with the following parameters:</p><ol><li><p>A validity window (<code>created</code> and <code>expires</code> timestamps)</p></li><li><p>A Key ID that uniquely identifies the key used in the signature. This is a <a href="https://www.rfc-editor.org/rfc/rfc7638.html"><u>JSON Web Key Thumbprint</u></a>.</p></li><li><p>A tag that shows websites the signature’s purpose and validation method, i.e. <code>web-bot-auth</code> for bot authentication.</p></li></ol><p>In addition, the <code>Signature-Agent</code> header indicates where the origin can find the public keys the agent used when signing the request, such as in a directory hosted by <code>signer.example.com</code>. This header is part of the signed content as well.</p><p>Here’s an example:</p>
            <pre><code>GET /path/to/resource HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 Chrome/113.0.0 MyBotCrawler/1.1
Signature-Agent: "signer.example.com"
Signature-Input: sig=("@authority" "signature-agent");\
             	 created=1700000000;\
             	 expires=1700011111;\
             	 keyid="ba3e64==";\
             	 tag="web-bot-auth"
Signature: sig=abc==</code></pre>
            <p>For those building bots, <a href="https://datatracker.ietf.org/doc/draft-meunier-web-bot-auth-architecture/"><u>we propose</u></a> signing the authority of the target URI, i.e. www.example.com, and a way to retrieve the bot public key in the form of <a href="https://datatracker.ietf.org/doc/draft-meunier-http-message-signatures-directory/"><u>signature-agent</u></a>, if present, i.e. <a href="http://crawler.search.google.com"><u>crawler.search.google.com</u></a> for Google Search, <a href="http://operator.openai.com"><u>operator.openai.com</u></a> for OpenAI Operator, workers.dev for Cloudflare Workers.</p><p>The <code>User-Agent</code> from the example above indicates that the software making the request is Chrome, because it is an agent that uses an orchestrated Chrome to browse the web. You should note that <code>MyBotCrawler/1.1</code> is still present. The <code>User-Agent</code> header can actually contain multiple products, in decreasing order of importance. If our agent is making requests via Chrome, that’s the most important product and therefore comes first.</p><p>At Internet-level scale, these signatures may add a notable amount of overhead to request processing. However, with the right cryptographic suite, and compared to the cost of existing bot mitigation, both technical and social, this seems to be a straightforward tradeoff. This is a metric we will monitor closely, and report on as adoption grows.</p>
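<p>To make the signing step concrete, here is a sketch of the RFC 9421 signature base an agent would construct for the example request shown earlier in this section: one line per covered component, plus a final <code>"@signature-params"</code> line carrying the component list and parameters. Exact serialization (including the quoting of <code>Signature-Agent</code>) follows the evolving draft, so treat this as illustrative:</p>

```typescript
// Sketch: the string the agent actually signs (the RFC 9421 signature
// base) for the example request above. Values mirror that example.
const params =
  `("@authority" "signature-agent")` +
  `;created=1700000000;expires=1700011111` +
  `;keyid="ba3e64==";tag="web-bot-auth"`;

const signatureBase = [
  `"@authority": www.example.com`,
  `"signature-agent": "signer.example.com"`,
  `"@signature-params": ${params}`,
].join("\n");

console.log(signatureBase);
```

<p>The Ed25519 signature over this base, base64-encoded, becomes the value of the <code>Signature</code> header; because <code>"@signature-params"</code> is itself covered, none of the parameters can be altered in transit without invalidating the signature.</p>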
    <div>
      <h3>Generating request signatures</h3>
      <a href="#generating-request-signatures">
        
      </a>
    </div>
    <p>We’re making several examples for generating Message Signatures for bots and agents <a href="https://github.com/cloudflareresearch/web-bot-auth/"><u>available on Github</u></a> (though we encourage other implementations!), all of which are standards-compliant, to maximize interoperability. </p><p>Imagine you’re building an agent using a managed Chromium browser, and want to sign all outgoing requests. To achieve this, the <a href="https://github.com/w3c/webextensions"><u>webextensions standard</u></a> provides <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/onBeforeSendHeaders"><u>chrome.webRequest.onBeforeSendHeaders</u></a>, where you can modify HTTP headers before they are sent by the browser. The event is <a href="https://developer.chrome.com/docs/extensions/reference/api/webRequest#life_cycle_of_requests"><u>triggered</u></a> before sending any HTTP data, and when headers are available.</p><p>Here’s what that code would look like: </p>
            <pre><code>chrome.webRequest.onBeforeSendHeaders.addListener(
  function (details) {
	// Signature and header assignment logic goes here
      // &lt;CODE&gt;
  },
  { urls: ["&lt;all_urls&gt;"] },
  ["blocking", "requestHeaders"] // requires "installation_mode": "force_installed"
);</code></pre>
            <p>Cloudflare provides a <a href="https://www.npmjs.com/package/web-bot-auth"><u>web-bot-auth</u></a> helper package on npm that helps you generate request signatures with the correct parameters. <code>onBeforeSendHeaders</code> is a Chrome extension hook that needs to be implemented synchronously. To do so, we <code>import { signatureHeadersSync } from "web-bot-auth"</code>. Once the signature completes, both <code>Signature</code> and <code>Signature-Input</code> headers are assigned, and the request flow can then continue.</p>
            <pre><code>const request = new URL(details.url);
const created = new Date();
const expires = new Date(created.getTime() + 300_000); // signature valid for 5 minutes

// Perform request signature
const headers = signatureHeadersSync(
  request,
  new Ed25519Signer(jwk),
  { created, expires }
);
// `headers` object now contains `Signature` and `Signature-Input` headers that can be used</code></pre>
            <p>This extension code is available on <a href="https://github.com/cloudflareresearch/web-bot-auth/"><u>GitHub</u></a>, alongside a  debugging server, deployed at <a href="https://http-message-signatures-example.research.cloudflare.com"><u>https://http-message-signatures-example.research.cloudflare.com</u></a>. </p>
    <div>
      <h3>Validating request signatures </h3>
      <a href="#validating-request-signatures">
        
      </a>
    </div>
    <p>Using our <a href="https://http-message-signatures-example.research.cloudflare.com"><u>debug server</u></a>, we can now inspect and validate our request signatures from the perspective of the website we’d be visiting. We should now see the Signature and Signature-Input headers:  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/18P5OyGxu2fU0Dpyv70Gjz/d82d62355524ad1914deb41b601bcad2/image3.png" />
          </figure><p><sup><i>In this example, the homepage of the debugging server validates the signature from the RFC 9421 Ed25519 verifying key, which the extension uses for signing.</i></sup></p><p>The above demo and code walkthrough has been fully written in TypeScript: the verification website is on Cloudflare Workers, and the client is a Chrome browser extension. We are cognisant that this does not suit all clients and servers on the web. To demonstrate the proposal works in more environments, we have also implemented bot signature validation in Go with a <a href="https://github.com/cloudflareresearch/web-bot-auth/tree/main/examples/caddy-plugin"><u>plugin</u></a> for <a href="https://caddyserver.com/"><u>Caddy server</u></a>.</p>
    <div>
      <h2>Experimentation with request mTLS</h2>
      <a href="#experimentation-with-request-mtls">
        
      </a>
    </div>
    <p>HTTP is not the only way to convey signatures. For instance, one mechanism that has been used in the past to authenticate automated traffic against secured endpoints is <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/"><u>mTLS</u></a>, the “mutual” presentation of <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. As described in our <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/"><u>knowledge base</u></a>:</p><blockquote><p><i>Mutual TLS, or mTLS for short, is a method for</i><a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-authentication/"><i> </i><i><u>mutual authentication</u></i></a><i>. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private</i><a href="https://www.cloudflare.com/learning/ssl/what-is-a-cryptographic-key/"><i> </i><i><u>key</u></i></a><i>. The information within their respective</i><a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><i> </i><i><u>TLS certificates</u></i></a><i> provides additional verification.</i></p></blockquote><p>While mTLS seems like a good fit for bot authentication on the web, it has limitations. If a user is asked for authentication via the mTLS protocol but does not have a certificate to provide, they would get an inscrutable and unskippable error. Origin sites need a way to conditionally signal to clients that they accept or require mTLS authentication, so that only mTLS-enabled clients use it.</p>
    <div>
      <h3>A TLS flag for bot authentication</h3>
      <a href="#a-tls-flag-for-bot-authentication">
        
      </a>
    </div>
    <p>TLS flags are an efficient way to describe whether a feature, like mTLS, is supported by origin sites. Within the IETF, we have proposed a new TLS flag called <a href="https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/"><code><u>req mTLS</u></code></a> to be sent by the client during the establishment of a connection that signals support for authentication via a client certificate. </p><p>This proposal leverages the <a href="https://www.ietf.org/archive/id/draft-ietf-tls-tlsflags-14.html"><u>tls-flags</u></a> proposal under discussion in the IETF. The TLS Flags draft allows clients and servers to send an array of one-bit flags to each other, rather than creating a new extension (with its associated overhead) for each piece of information they want to share. This is one of the first uses of this extension, and we hope that by using it here we can help drive adoption.</p><p>When a client sends the <a href="https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/"><code><u>req mTLS</u></code></a> flag to the server, they signal to the server that they are able to respond with a certificate if requested. The server can then safely request a certificate without risk of blocking ordinary user traffic, because ordinary users will never set this flag. </p><p>Let’s take a look at what such a <code>req mTLS</code> exchange looks like in <a href="https://www.wireshark.org/"><u>Wireshark</u></a>, a network protocol analyser. You can follow along in the packet capture <a href="https://github.com/cloudflareresearch/req-mtls/tree/main/assets/demonstration-capture.pcapng"><u>here</u></a>.</p>
            <pre><code>Extension: req mTLS (len=12)
	Type: req mTLS (65025)
	Length: 12
	Data: 0b0000000000000000000001</code></pre>
            <p>The extension number is 65025, or 0xfe01. This corresponds to an unassigned block of <a href="https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-1"><u>TLS extensions</u></a> that can be used to experiment with TLS Flags. Once the standard is adopted and published by the IETF, the number will be fixed. To use the <code>req mTLS</code> flag, the client needs to set the 80<sup>th</sup> bit to true, so with our block length of 12 bytes, it should contain the data 0b0000000000000000000001, which is the case here. The server then responds with a certificate request, and the request follows its course.</p>
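<p>The byte and bit arithmetic can be sketched as follows. This assumes the layout seen in the capture above: a one-byte length prefix (0x0b, i.e. 11 flag bytes) followed by the flag bits, with the least significant bit of each byte first:</p>

```typescript
// Sketch: setting flag 80 (the proposed "req mTLS" flag) in a TLS Flags
// bit string, assuming a one-byte length prefix and LSB-first flag bytes,
// matching the Wireshark capture shown above.
const flag = 80;                          // proposed req mTLS flag number
const flagBytes = 11;                     // enough bytes to cover flags 0..87
const flags = new Uint8Array(flagBytes);
flags[Math.floor(flag / 8)] |= 1 << (flag % 8); // byte 10, mask 0x01
const data = Buffer.from([flagBytes, ...flags]).toString("hex");
console.log(data); // → "0b0000000000000000000001"
```

<p>The printed hex matches the extension data in the capture: flag 80 lands in the last flag byte, so only its lowest bit is set.</p>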
    <div>
      <h3>Request mTLS in action</h3>
      <a href="#request-mtls-in-action">
        
      </a>
    </div>
    <p><i>Code for this section is available in GitHub under </i><a href="https://github.com/cloudflareresearch/req-mtls"><i><u>cloudflareresearch/req-mtls</u></i></a></p><p>Because mutual TLS is widely supported in TLS libraries already, the parts we need to introduce to the client and server are:</p><ol><li><p>Sending/parsing of TLS-flags</p></li><li><p>Specific support for the <code>req mTLS</code> flag</p></li></ol><p>To the best of our knowledge, there is no complete public implementation of either scheme. Using it for bot authentication may provide a motivation to do so.</p><p>Using <a href="https://github.com/cloudflare/go"><u>our experimental fork of Go</u></a>, a TLS client could support req mTLS as follows:</p>
            <pre><code>config := &amp;tls.Config{
    	TLSFlagsSupported:  []tls.TLSFlag{0x50},
    	RootCAs:       	rootPool,
    	Certificates:  	certs,
    	NextProtos:    	[]string{"h2"},
}
trans := http.Transport{TLSClientConfig: config, ForceAttemptHTTP2: true}</code></pre>
            <p>This example library allows you to configure Go to send the <code>req mTLS</code> flag (0x50) in the <code>TLS Flags</code> extension. If you’d like to test your implementation out, you can prompt your client for certificates against <a href="http://req-mtls.research.cloudflare.com"><u>req-mtls.research.cloudflare.com</u></a> using the Cloudflare Research client <a href="https://github.com/cloudflareresearch/req-mtls"><u>cloudflareresearch/req-mtls</u></a>. For clients, once they set the TLS flag associated with <code>req mTLS</code>, they are done. The code that handles normal mTLS takes over at that point, with no need to implement anything new.</p>
    <div>
      <h2>Two approaches, one goal</h2>
      <a href="#two-approaches-one-goal">
        
      </a>
    </div>
    <p>We believe that developers of agents and bots should have a public, standard way to authenticate themselves to CDNs and website hosting platforms, regardless of the technology they use or provider they choose. At a high level, both HTTP Message Signatures and request mTLS achieve a similar goal: they allow the owner of a service to authentically identify themselves to a website. That’s why we’re participating in the standardization effort for both of these protocols at the IETF, where many other authentication mechanisms we’ve discussed here, from TLS to OAuth Bearer tokens, were developed by diverse sets of stakeholders and standardized as RFCs.</p><p>Evaluating both proposals against each other, we’re prioritizing <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>HTTP Message Signatures for Bots</u></a> because it relies on the previously adopted <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>RFC 9421</u></a> with several <a href="https://httpsig.org/"><u>reference implementations</u></a>, and works at the HTTP layer, making adoption simpler. <a href="https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/"><u>Request mTLS</u></a> may be a better fit for site owners with concerns about the additional bandwidth, but <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-tlsflags"><u>TLS Flags</u></a> has fewer implementations, is still waiting for IETF adoption, and upgrading the TLS stack has proven more challenging than upgrading HTTP. Both approaches share similar discovery and key management concerns, as highlighted in a <a href="https://datatracker.ietf.org/doc/draft-meunier-web-bot-auth-glossary/"><u>glossary</u></a> draft at the IETF. We’re actively exploring both options, and would love to <a href="https://www.cloudflare.com/lp/verified-bots/"><u>hear</u></a> from both site owners and bot developers about how you’re evaluating their respective tradeoffs.</p>
    <div>
      <h2>The bigger picture </h2>
      <a href="#the-bigger-picture">
        
      </a>
    </div>
    <p>In conclusion, we think request signatures and mTLS are promising mechanisms for bot owners and developers of AI agents to authenticate themselves in a tamper-proof manner, forging a path forward that doesn’t rely on ever-changing IP address ranges or spoofable headers such as <code>User-Agent</code>. This authentication can be consumed by Cloudflare when acting as a reverse proxy, or directly by site owners on their own infrastructure. This means that as a bot owner, you can now go to content creators and discuss crawling agreements, with as much granularity as the number of bots you have. You can start implementing these solutions today and test them against the research websites we’ve provided in this post.</p><p>Bot authentication also empowers site owners small and large to have more control over the traffic they allow, empowering them to continue to serve content on the public Internet while monitoring automated requests. Longer term, we will integrate these authentication mechanisms into our <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>AI Audit</u></a> and <a href="https://developers.cloudflare.com/bots/get-started/bot-management/"><u>Bot Management</u></a> products, to provide better visibility into the bots and agents that are willing to identify themselves.</p><p>Being able to solve problems for both origins and clients is key to helping build a better Internet, and we think identification of automated traffic is a step towards that. If you want us to start verifying your message signatures or client certificates, have a compelling use case you’d like us to consider, or any questions, please <a href="https://www.cloudflare.com/lp/verified-bots/"><u>reach out</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2hUP3FdePgIYVDwhgJVLeV</guid>
            <dc:creator>Thibault Meunier</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making Application Security simple with a new unified dashboard experience]]></title>
            <link>https://blog.cloudflare.com/new-application-security-experience/</link>
            <pubDate>Thu, 20 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’re introducing a new Application Security experience in the Cloudflare dashboard, with a reworked UI organized by use cases, making it easier for customers to navigate and secure their accounts. ]]></description>
            <content:encoded><![CDATA[ <p>Over the years, we have framed our Application Security features against market-defined product groupings such as Web Application Firewall (WAF), DDoS Mitigation, Bot Management, API Security (API Shield), Client Side Security (Page Shield), and so forth. This has led to unnecessary artificial separation of what is, under the hood, a well-integrated single platform.</p><p>This separation, which has sometimes guided implementation decisions that have led to different systems being built for the same purpose, makes it harder for our users to adopt our features and implement a simple effective security posture for their environment.</p><p>Today, following user feedback and our drive to constantly innovate and simplify, we are going back to our roots by breaking these artificial product boundaries and revising our dashboard, so it highlights our strengths. The ultimate goal remains: to make it shockingly easy to secure your web assets.</p><p><b>Introducing a new unified Application Security experience.</b></p><p>If you are a Cloudflare Application Security user, log in <a href="http://dash.cloudflare.com/:account/:zone/security"><u>to the dashboard</u></a> today and try out the updated dashboard interface. To make the transition easier, you can toggle between old and new interfaces.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iyOx4HWAdpFyp0W6nECvi/5f67090ee17c9db87ce2c130f80d493a/image5.png" />
          </figure>
    <div>
      <h2>Security, simplified</h2>
      <a href="#security-simplified">
        
      </a>
    </div>
    <p>Modern applications are built using a variety of technologies. Your app might include a web interface and a mobile version, both powered by an API, each with its own unique security requirements. As these technologies increasingly overlap, traditional security categories like Web, API, client-side, and bot protection start to feel artificial and disconnected when applied to real-world application security.</p><p>Consider scenarios where you want to secure your API endpoints with proper authentication, or prevent vulnerability scanners from probing for weaknesses. These tasks often require switching between multiple dashboards, creating different policies, and managing disjointed configurations. This fragmented approach not only complicates workflows but also increases the risk of overlooking a critical vulnerability. The result? A security posture that is harder to manage and potentially less effective.</p><p>When you zoom out, a pattern emerges. Whether it’s managing bots, securing APIs, or filtering web traffic, these solutions ultimately analyze incoming traffic looking for specific patterns, and the resulting signal is used to perform actions. The primary difference between these tools is the type of signal they generate, such as identifying bots, enforcing authorization, or flagging suspicious requests. </p><p>At Cloudflare, we saw an opportunity to address this complexity by unifying our application security tools into a single platform with one cohesive UI. A unified approach means security practitioners no longer have to navigate multiple interfaces or piece together different security controls. With a single UI, you can configure policies more efficiently, detect threats faster, and maintain consistent protection across all aspects of your application. This simplicity doesn’t just save time, it ensures that your applications remain secure, even as threats evolve.</p><p>At the end of the day, attackers won’t care which product you’re using. 
But by unifying application security, we ensure they’ll have a much harder time finding a way in.</p>
    <div>
      <h2>Many products, one common approach</h2>
      <a href="#many-products-one-common-approach">
        
      </a>
    </div>
    <p>To redefine the experience across Application Security products, we can start by defining three concepts that commonly apply:</p><ul><li><p>Web traffic (HTTP/S), which can be generalised even further as “data”</p></li><li><p>Signals and detections, which provide intelligence about the traffic and can be generalised as “metadata”</p></li><li><p>Security rules that let you combine any signal or detection (metadata) to block, challenge, or otherwise perform an action on the web traffic (data)</p></li></ul><p>We can diagram the above as follows:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46XB4bR8DSCZWe7PDQNDLp/4043ff4d123c4b8c5eafe0948c2fdefe/image1.png" />
          </figure><p>Using these concepts, all the product groupings that we offer can be converted to different types of signals or detections. All else remains the same. And if we are able to run and generate our signals on all traffic separately from the rule system, therefore generating all the metadata, we get what we call <a href="https://developers.cloudflare.com/waf/detections/"><b><u>always-on detections</u></b></a>, another vital benefit of a single platform approach. Also note that <a href="https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/"><u>the order</u></a> in which we generate the signals becomes irrelevant.</p><p>In diagram form:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TgbZHUYCkztCPfpbk8rYQ/d2ec02dddb61b8b708019aade73b9623/image12.png" />
          </figure><p>The benefits are twofold. First, problem spaces (such as account takeover or web attacks) become signal groupings, and therefore metadata that can be queried to answer questions about your environment.</p><p>For example, let’s take our Bot Management signal, the <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>bot score</u></a>, and our <a href="https://developers.cloudflare.com/waf/detections/attack-score/"><u>WAF Attack Score</u></a> signal, the attack score. These already run as always-on detections at Cloudflare. By combining these two signals and filtering your traffic against them, you can gain powerful insights on who is accessing your application<b>*</b>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xcSJtpZdY8L5svEob6dRW/f779872d453551b8d5ca11845a246fc5/image11.png" />
          </figure><p>Second, as everything is just a signal, the mitigation layer, driven by the optional rules, becomes detection agnostic. By providing the same signals as fields in a unified rule system, writing high level policies becomes a breeze. And as we said earlier, given the detection is <b>always-on </b>and fully separated from the mitigation rule system, exploring the data can be thought of as a powerful rule match preview engine. No need to deploy a rule in LOG mode to see what it matches!</p><p>We can now design a unified user experience that reflects Application Security as a single product.</p><p><sup><b><i>* note:</i></b></sup><sup><i> the example here is simplistic, and the use cases become a lot more powerful once you expand to the full set of potential signals that the platform can generate. Take, for example, our ability to detect file uploads. If you run a job application site, you may want to let crawlers access your site, but you may *</i></sup><sup><b><i>not</i></b></sup><sup><i>* want crawlers to submit applications on behalf of applicants. By combining the bot score signal with the file upload signal, you can ensure that rule is enforced.</i></sup></p>
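The data/metadata split described above can be sketched in a few lines of Python. This is an illustrative model of the architecture, not Cloudflare's implementation; the signal names, thresholds, and actions are all hypothetical.

```python
# Minimal sketch of the "always-on detections" model: every request
# gets all signals computed up front (metadata), and mitigation rules
# are just predicates over that metadata.

def compute_signals(request):
    """Run every detection unconditionally; order is irrelevant
    because detections only read the request, never each other."""
    return {
        # Low scores mean "likely bot" / "likely attack", as with
        # Cloudflare's bot score and attack score.
        "bot_score": 5 if request.get("user_agent", "") == "" else 80,
        "attack_score": 10 if "<script>" in request.get("body", "") else 90,
    }

def evaluate(rules, signals):
    """First matching rule wins; the default action is to allow."""
    for predicate, action in rules:
        if predicate(signals):
            return action
    return "allow"

rules = [
    (lambda s: s["bot_score"] < 30 and s["attack_score"] < 30, "block"),
    (lambda s: s["bot_score"] < 30, "challenge"),
]

req = {"user_agent": "", "body": "<script>alert(1)</script>"}
print(evaluate(rules, compute_signals(req)))  # both signals low -> "block"
```

Because every signal is computed for every request, exploring the metadata doubles as a rule-match preview: filtering analytics by the same predicate a rule would use shows exactly what that rule would act on, without deploying it in LOG mode.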
    <div>
      <h2>Introducing a unified Application Security experience</h2>
      <a href="#introducing-a-unified-application-security-experience">
        
      </a>
    </div>
    <p>As signals are always-on, the user journey can now start from our <a href="https://blog.cloudflare.com/cloudflare-security-posture-management/#:~:text=protect%20what%20matters.-,Posture%20overview,-from%20attacks%20to"><u>new overview page</u></a> where we highlight security suggestions based on your traffic profile and configurations. Alternatively, you can jump straight into analytics where you can investigate your traffic using a combination of all available signals.</p><p>When a specific traffic pattern seems malicious, you can jump into the rule system to implement a security policy. As part of our new design, given the simplicity of the navigation, we also took advantage of the opportunity to introduce a <a href="https://blog.cloudflare.com/cloudflare-security-posture-management/#:~:text=Discovery%20of%20web%20assets"><u>new web assets page</u></a>, where we highlight discovery and attack surface management details.</p><p>Of course, reaching the final design required multiple iterations and feedback sessions. To best understand the balance of maintaining flexibility in the UI whilst reducing complexity, we focused on customer tasks to be done and documenting their processes while trying to achieve their intended actions in the dashboard. Reducing navigation items and using clear naming was one element, but we quickly learned that the changes needed to support ease of use for tasks across the platform.</p><p>Here is the end result:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IFfm5Q1D6sfGzqhasl2Pb/e2f078c281a4067f5bb624b0ca37509e/image8.png" />
          </figure><p>To recap, our new dashboard now includes:</p><ul><li><p>One overview page where misconfigurations, risks, and suggestions are aggregated</p></li><li><p>Simplified and redesigned security analytics that surfaces security signals from all Application Security capabilities, so you can easily identify and act on any suspicious activity</p></li><li><p>A new web assets page, where you can manage your attack surfaces, helping improve detection relevance</p></li><li><p>A single Security Rules page that provides a unified interface to manage, prioritise, and customise all mitigation rules in your zone, significantly streamlining your security configuration</p></li><li><p>A new settings page where advanced control is based on security needs, not individual products</p></li></ul><p>Let’s dive into each one.</p>
    <div>
      <h3>Overview</h3>
      <a href="#overview">
        
      </a>
    </div>
    <p>With the unified security approach, the new overview page aggregates and prioritizes security suggestions across all your web assets, helping you <a href="https://blog.cloudflare.com/cloudflare-security-posture-management/"><u>maintain a healthy security posture</u></a>. The suggestions span from detected (ongoing) attacks if there are any, to risks and misconfigurations to further solidify your protection. This becomes the daily starting point to manage your security posture.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/19dZ0olpyIjEjywnd5zRrh/32bd0b872ab3c00f411e5287a8e2b6ae/image6.png" />
          </figure>
    <div>
      <h3>Analytics</h3>
      <a href="#analytics">
        
      </a>
    </div>
    <p>Security Analytics and Events have been redesigned to make it easier to analyze your traffic. Suspicious activity detected by Cloudflare is surfaced at the top of the page, allowing you to easily filter and review related traffic. From the Traffic Analytics Sampled Log view, further below in the page, new workflows enable you to take quick action to craft a custom rule or review related security events in context.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16cUGVu3gqAD8sm9OaAs6x/2ef0474f3d193d95f062ed33abae2d80/image3.png" />
          </figure>
    <div>
      <h3>Web assets</h3>
      <a href="#web-assets">
        
      </a>
    </div>
    <p>Web assets is a new concept introduced to bridge your business goals with threat detection capabilities. A web asset is any endpoint, file, document, or other related entity that we would normally act on from a security perspective. Within our new web asset page, you will be able to explore all relevant assets discovered by our system.</p><p>With our unified security platform, we are able to rapidly build new <a href="https://blog.cloudflare.com/cloudflare-security-posture-management/#securing-your-web-applications:~:text=Use%2Dcase%20driven%20threat%20detection"><u>use-case driven threat detections</u></a>. For example, to block automated actions across your e-commerce website, you can instruct Cloudflare’s system to block any fraudulent signup attempts, while allowing verified crawlers to index your product pages. This is made possible by labelling your web assets, which, where possible, is automated by Cloudflare, and then using those labels to power threat detections to protect your assets.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nKFhkuUboScRORcWHgYJR/17d23bb52e532910da6b1cc868bf702e/image9.png" />
          </figure>
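As a sketch of the e-commerce example above in Cloudflare's rules-expression syntax (the signup path and score threshold are hypothetical, and the exact fields backing asset labels may differ from what is shown):

```
http.request.uri.path eq "/signup"
and not cf.bot_management.verified_bot
and cf.bot_management.score lt 30
```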
    <div>
      <h3>Security rules</h3>
      <a href="#security-rules">
        
      </a>
    </div>
    <p>The unified Security rules interface brings all mitigation rule types — including WAF custom rules, rate limiting rules, API sequence rules, and client-side rules — together in one centralized location, eliminating the need to navigate multiple dashboards.</p><p>The new page gives you visibility into how Cloudflare both mitigates incoming traffic and blocks potentially malicious client-side resources from loading, making it easier to understand your security posture at a glance. The page allows you to create customised mitigation rules by combining any detection signals, such as Bot Score, Attack Score, or signals from Leaked Credential Checks, enabling precise control over how Cloudflare responds to potential threats.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/63A8aBq9400mKEyHAJGAuk/447447977592218fbaa418b3735754c7/image10.png" />
          </figure>
    <div>
      <h3>Settings</h3>
      <a href="#settings">
        
      </a>
    </div>
    <p>Balancing guidance and flexibility was the key driver for designing the new Settings page. As much as Cloudflare <i>guides</i> you towards the optimal security posture through recommendations and alerts, customers that want the <i>flexibility</i> to proactively adjust these settings can find all of them here.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pz3N1v2EOCj3V9sKF009F/a106b34e0039bebf6eecdfc5c1244f41/image7.png" />
          </figure>
    <div>
      <h2>Experience it today</h2>
      <a href="#experience-it-today">
        
      </a>
    </div>
    <p>This is the first of many enhancements we plan to make to the Application Security experience in the coming months. To check out the new navigation, log in to the <a href="https://dash.cloudflare.com/"><u>Cloudflare dashboard</u></a>, click on “Security” and choose “Check it out” when you see the message below. You will still have the option of opting out, if you so prefer.</p><div>
  
</div>
<p>Let us know what you think either by sharing feedback in our <a href="https://community.cloudflare.com/"><u>community forum</u></a> or by providing feedback directly in the dashboard (you will be prompted if you revert to the old design).</p>
 ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Dashboard]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">ktsrG1vJGggZ2JlL4cHxS</guid>
            <dc:creator>Michael Tremante</dc:creator>
            <dc:creator>Pete Thomas</dc:creator>
            <dc:creator>Jessica Tarasoff</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improved Bot Management flexibility and visibility with new high-precision heuristics]]></title>
            <link>https://blog.cloudflare.com/bots-heuristics/</link>
            <pubDate>Wed, 19 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ By building and integrating a new heuristics framework into the Cloudflare Ruleset Engine, we now have a more flexible system to write rules and deploy new releases rapidly. ]]></description>
            <content:encoded><![CDATA[ <p>Within the Cloudflare Application Security team, every <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/"><u>machine learning</u></a> model we use is underpinned by a rich set of static rules that serve as a ground truth and a baseline comparison for how our models are performing. These are called heuristics. Our Bot Management heuristics engine has served as an important part of eight global <a href="https://developers.cloudflare.com/bots/concepts/bot-score/#machine-learning"><u>machine learning (ML) models</u></a>, but we needed a more expressive engine to increase our accuracy. In this post, we’ll review how we solved this by moving our heuristics to the Cloudflare <a href="https://developers.cloudflare.com/ruleset-engine/"><u>Ruleset Engine</u></a>. Not only did this provide the platform we needed to write more nuanced rules, it made our platform simpler and safer, and provided <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> customers more flexibility and visibility into their bot traffic.   </p>
    <div>
      <h3>Bot detection via simple heuristics</h3>
      <a href="#bot-detection-via-simple-heuristics">
        
      </a>
    </div>
    <p>In Cloudflare’s bot detection, we build heuristics from attributes like software library fingerprints, HTTP request characteristics, and internal threat intelligence. Heuristics serve three separate purposes for bot detection: </p><ol><li><p>Bot identification: If traffic matches a heuristic, we can identify the traffic as definitely automated traffic (with a <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>bot score</u></a> of 1) without the need of a machine learning model. </p></li><li><p>Train ML models: When traffic matches our heuristics, we create labelled datasets of bot traffic to train new models. We’ll use many different sources of labelled bot traffic to train a new model, but our heuristics datasets are one of the highest confidence datasets available to us.   </p></li><li><p>Validate models: We benchmark any new model candidate’s performance against our heuristic detections (among many other checks) to make sure it meets a required level of accuracy.</p></li></ol><p>While the existing heuristics engine has worked very well for us, as bots evolved we needed the flexibility to write increasingly complex rules. Unfortunately, such rules were not easily supported in the old engine. Customers have also been asking for more details about which specific heuristic caught a request, and for the flexibility to enforce different policies per heuristic ID.  We found that by building a new heuristics framework integrated into the Cloudflare Ruleset Engine, we could build a more flexible system to write rules and give Bot Management customers the granular explainability and control they were asking for. </p>
    <div>
      <h3>The need for more efficient, precise rules</h3>
      <a href="#the-need-for-more-efficient-precise-rules">
        
      </a>
    </div>
    <p>In our previous heuristics engine, we wrote rules in <a href="https://www.lua.org/"><u>Lua</u></a> as part of our <a href="https://openresty.org/"><u>openresty</u></a>-based reverse proxy. The Lua-based engine was limited to a very small number of characteristics in a rule because of the high engineering cost we observed with adding more complexity.</p><p>With Lua, we would write fairly simple logic to match on specific characteristics of a request (e.g., the user agent). Creating new heuristics of an existing class was straightforward: all we’d need to do was define another instance of the existing class in our database. However, if we observed malicious traffic that required more than two characteristics (as a simple example, user-agent and <a href="https://en.wikipedia.org/wiki/Autonomous_system_(Internet)"><u>ASN</u></a>) to identify, we’d need to create bespoke logic for detections. Because our Lua heuristics engine was bundled with the code that ran ML models and other important logic, all changes had to go through the same review and release process. If we identified malicious traffic that needed a new heuristic class, and we were also blocked by pending changes in the codebase, we’d be forced to either wait or roll back the changes. If we’re writing a new rule for an “under attack” scenario, every extra minute it takes to deploy a new rule can mean an unacceptable impact to our customers’ business.</p><p>More critical than time to deploy is the complexity that the heuristics engine supports. The old heuristics engine only supported using specific request attributes when creating a new rule. As bots became more sophisticated, we found we had to reject an increasing number of new heuristic candidates because we weren’t able to write precise enough rules. For example, we found a <a href="https://go.dev/"><u>Golang</u></a> TLS fingerprint frequently used by bots and by a small number of corporate VPNs. We couldn’t block the bots without also stopping legitimate VPN usage, because the old heuristics platform lacked the flexibility to quickly compose sufficiently nuanced rules. Luckily, we already had the perfect solution in the Cloudflare Ruleset Engine.</p>
    <div>
      <h3>Our new heuristics engine</h3>
      <a href="#our-new-heuristics-engine">
        
      </a>
    </div>
    <p>The Ruleset Engine is familiar to anyone who has written a WAF rule, Load Balancing rule, or Transform rule, <a href="https://blog.cloudflare.com/announcing-firewall-rules/"><u>just to name a few</u></a>. For Bot Management, the Wireshark-inspired syntax allows us to quickly write heuristics with much greater flexibility, vastly improving accuracy. We can write a rule in <a href="https://yaml.org/"><u>YAML</u></a> that includes arbitrary sub-conditions and inherit the same framework the WAF team uses, ensuring any new rule undergoes a rigorous testing process while retaining the ability to rapidly release new rules to stop attacks in real time. </p><p>Writing heuristics on the Cloudflare Ruleset Engine allows our engineers and analysts to write new rules in an easy-to-understand YAML syntax. This is critical to supporting a rapid response in under-attack scenarios, especially as we support greater rule complexity. Here’s a simple rule using the new engine, to detect empty user-agents restricted to a specific JA4 fingerprint (right), compared to the empty user-agent detection in the old Lua-based system (left): </p><table><tr><td><p><b>Old</b></p></td><td><p><b>New</b></p></td></tr><tr><td><p><code>local _M = {}</code></p><p><code>local EmptyUserAgentHeuristic = {</code></p><p><code>   heuristic = {},</code></p><p><code>}</code></p><p><code>EmptyUserAgentHeuristic.__index = EmptyUserAgentHeuristic</code></p><p><code>--- Creates and returns empty user agent heuristic</code></p><p><code>-- @param params table contains parameters injected into EmptyUserAgentHeuristic</code></p><p><code>-- @return EmptyUserAgentHeuristic table</code></p><p><code>function _M.new(params)</code></p><p><code>   return setmetatable(params, EmptyUserAgentHeuristic)</code></p><p><code>end</code></p><p><code>--- Adds heuristic to be used for inference in `detect` method</code></p><p><code>-- @param heuristic schema.Heuristic table</code></p><p><code>function 
EmptyUserAgentHeuristic:add(heuristic)</code></p><p><code>   self.heuristic = heuristic</code></p><p><code>end</code></p><p><code>--- Detect runs empty user agent heuristic detection</code></p><p><code>-- @param ctx context of request</code></p><p><code>-- @return schema.Heuristic table on successful detection or nil otherwise</code></p><p><code>function EmptyUserAgentHeuristic:detect(ctx)</code></p><p><code>   local ua = ctx.user_agent</code></p><p><code>   if not ua or ua == '' then</code></p><p><code>      return self.heuristic</code></p><p><code>   end</code></p><p><code>end</code></p><p><code>return _M</code></p></td><td><p><code>ref: empty-user-agent</code></p><p><code>      description: Empty or missing </code></p><p><code>User-Agent header</code></p><p><code>      action: add_bot_detection</code></p><p><code>      action_parameters:</code></p><p><code>        active_mode: false</code></p><p><code>      expression: http.user_agent eq </code></p><p><code>"" and cf.bot_management.ja4 eq "t13d1516h2_8daaf6152771_b186095e22b6"</code></p></td></tr></table><p>The Golang heuristic that captured corporate VPN traffic as well (mentioned above) was one of the first to migrate to the new Ruleset engine. Before the migration, traffic matching this heuristic had a false positive rate of 0.01%. While that sounds like a very small number, this means for every million bots we block, 100 real users saw a Cloudflare challenge page unnecessarily. At Cloudflare scale, even small issues can have real, negative impact.</p><p>When we analyzed the traffic caught by this heuristic rule in depth, we saw the vast majority of attack traffic came from a small number of abusive networks. After narrowing the definition of the heuristic to flag the Golang fingerprint only when it’s sourced by the abusive networks, the rule now has a false positive rate of 0.0001% (one in one million).
Updating the heuristic to include the network context improved our accuracy, while still blocking millions of bots every week and giving us plenty of training data for our bot detection models. Because this heuristic is now more accurate, newer ML models make more accurate decisions on what’s a bot and what isn’t.</p>
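In the new engine, that narrowing is just one more conjunct in the rule expression. A sketch in the same expression syntax (the fingerprint hash is invented, and the ASNs come from the range reserved for documentation):

```
cf.bot_management.ja3_hash eq "66aa2fe2b2c5e6a9b24d6d3a946a77d4"
and ip.geoip.asnum in {64496 64511}
```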
    <div>
      <h3>New visibility and flexibility for Bot Management customers </h3>
      <a href="#new-visibility-and-flexibility-for-bot-management-customers">
        
      </a>
    </div>
    <p>While the new heuristics engine provides more accurate detections for all customers and a better experience for our analysts, moving to the Cloudflare Ruleset Engine also allows us to deliver new functionality for Enterprise Bot Management customers: more visibility, via a new field called Bot Detection IDs. Every heuristic we use includes a unique Bot Detection ID. These IDs are visible to Bot Management customers in analytics, logs, and firewall events, and they can be used in the firewall to write precise rules for individual bots. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cYXHw8tFUjdJQGm93gFcE/0a3f6ab89a70410ebb7dd2c6f4f3a677/1.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9d7L1r7yN9AEhO26H7SgQ/434f0f934dd4f4498a8d13e85a7660ae/2.png" />
          </figure><p>Detections also include a specific tag describing the class of heuristic. Customers see these plotted over time in their analytics. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UlkGQ070UWsmU76IXYqDd/6bca8f28959a8fe8f3013792a17b348a/image4.png" />
          </figure><p>To illustrate how this data can help give customers visibility into why we blocked a request, here’s an example request flagged by Bot Management (with the IP address, ASN, and country changed):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3i6k9ejsBVwlbXUrZFIFJG/4c9cddd11d9f7a8ddf10eb5ff30a027b/4.png" />
          </figure><p>Before, seeing only that our heuristics gave the request a score of 1 offered little insight into why it was flagged as a bot. Adding Detection IDs to Firewall Events paints a clearer picture for customers: this request was identified as a bot because the traffic used an empty user-agent. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UQd20VnXCnIav1skHiX6i/18e0cf01106601ae7faf18be50d43365/5.png" />
          </figure><p>In addition to Analytics and Firewall Events, Bot Detection IDs are now available for Bot Management customers to use in Custom Rules, Rate Limiting Rules, Transform Rules, and Workers. </p>
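For example, a custom rule scoped to a single heuristic can match on the array of detection IDs attached to a request. The ID below is hypothetical; real IDs appear in your analytics, logs, and firewall events:

```
any(cf.bot_management.detection_ids[*] eq 33554432)
```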
    <div>
      <h4>Account takeover detection IDs</h4>
      <a href="#account-takeover-detection-ids">
        
      </a>
    </div>
    <p>One way we’re focused on improving Bot Management for our customers is by surfacing more attack-specific detections. During Birthday Week, we <a href="https://blog.cloudflare.com/a-safer-internet-with-cloudflare/"><u>launched Leaked Credentials Check</u></a> for all customers so that security teams could help <a href="https://www.cloudflare.com/zero-trust/solutions/account-takeover-prevention/"><u>prevent account takeover (ATO) attacks</u></a> by identifying accounts at risk due to leaked credentials. We’ve now added two more detections that can help Bot Management enterprise customers identify suspicious login activity via specific <a href="https://developers.cloudflare.com/bots/concepts/detection-ids/#account-takeover-detections"><u>detection IDs</u></a> that monitor login attempts and failures on the zone. These detection IDs do not currently affect the bot score, but will begin to later in 2025. They can already help many customers detect more account takeover events.</p><p>Detection ID <b>201326592 </b>monitors traffic on a customer website and looks for an anomalous rise in login failures (usually associated with brute force attacks), and ID <b>201326593 </b>looks for an anomalous rise in login attempts (usually associated with credential stuffing). </p>
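A rule using these two account-takeover detection IDs might look like the following sketch in the rules-expression syntax (the login path is hypothetical; adjust it to your application):

```
any(cf.bot_management.detection_ids[*] in {201326592 201326593})
and http.request.uri.path eq "/login"
```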
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4a2LGgB1bXGwFgHQEKFSI9/ff4194a6a470e466274f7b7c51e9dc18/6.png" />
          </figure>
    <div>
      <h3>Protect your applications</h3>
      <a href="#protect-your-applications">
        
      </a>
    </div>
    <p>If you are a Bot Management customer, log in and head over to the Cloudflare dashboard and take a look in Security Analytics for bot detection IDs <code>201326592</code> and <code>201326593</code>.</p><p>These will highlight ATO attempts targeting your site. If you spot anything suspicious, or would like to be protected against future attacks, create a rule that uses these detections to keep your application safe.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Heuristics]]></category>
            <category><![CDATA[Edge Rules]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4IkgXzyemEEsN7A6Cd18hb</guid>
            <dc:creator>Curtis Lowder</dc:creator>
            <dc:creator>Brian Mitchell</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Trapping misbehaving bots in an AI Labyrinth]]></title>
            <link>https://blog.cloudflare.com/ai-labyrinth/</link>
            <pubDate>Wed, 19 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ How Cloudflare uses generative AI to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce AI Labyrinth, a new mitigation approach that uses AI-generated content to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives. When you opt in, Cloudflare will automatically deploy an AI-generated set of linked pages when we detect inappropriate bot activity, without the need for customers to create any custom rules.</p><p>AI Labyrinth is available on an opt-in basis to all customers, including the <a href="https://www.cloudflare.com/plans/free/">Free plan</a>.</p>
    <div>
      <h3>Using Generative AI as a defensive weapon</h3>
      <a href="#using-generative-ai-as-a-defensive-weapon">
        
      </a>
    </div>
    <p>AI-generated content has exploded, reportedly accounting for <a href="https://www.thetimes.co.uk/article/why-ai-content-everywhere-how-to-detect-l2m2kdx9p"><u>four of the top 20 Facebook posts</u></a> last fall. Additionally, Medium estimates that <a href="https://www.wired.com/story/ai-generated-medium-posts-content-moderation/"><u>47% of all content</u></a> on their platform is AI-generated. Like any newer tool, it has both wonderful and <a href="https://www.npr.org/2024/12/24/nx-s1-5235265/how-to-protect-yourself-from-holiday-ai-scams"><u>malicious</u></a> uses.</p><p>At the same time, we’ve also seen an explosion of new crawlers used by AI companies to scrape data for model training. AI Crawlers generate more than 50 billion requests to the Cloudflare network every day, or just under 1% of all web requests we see. While Cloudflare has several tools for <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/"><u>identifying and blocking unauthorized AI crawling</u></a>, we have found that blocking malicious <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/">bots</a> can alert the attacker that you are on to them, leading to a shift in approach and a never-ending arms race. So, we wanted to create a new way to thwart these unwanted bots, without letting them know they’ve been thwarted.</p><p>To do this, we decided to use a new offensive tool in the bot creator’s toolset that we haven’t really seen used defensively: AI-generated content. When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them. But while realistic-looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.</p><p>As an added benefit, AI Labyrinth also acts as a next-generation honeypot. No real human would go four links deep into a maze of AI-generated nonsense. Any visitor that does is very likely to be a bot, so this gives us a brand-new tool to identify and fingerprint bad bots, which we add to our list of known bad actors. Here’s how we do it…</p>
    <div>
      <h3>How we built the labyrinth </h3>
      <a href="#how-we-built-the-labyrinth">
        
      </a>
    </div>
    <p>When AI crawlers follow these links, they waste valuable computational resources processing irrelevant content rather than extracting your legitimate website data. This significantly reduces their ability to gather enough useful information to train their models effectively.</p><p>To generate convincing human-like content, we used <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> with an open source model to create unique HTML pages on diverse topics. Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to<a href="https://www.cloudflare.com/learning/security/how-to-prevent-xss-attacks/"> prevent any XSS vulnerabilities</a>, and stores it in <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> for faster retrieval. We found that generating a diverse set of topics first, then creating content for each topic, produced more varied and convincing results. It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.</p><p>This pre-generated content is seamlessly integrated as hidden links on existing pages via our custom HTML transformation process, without disrupting the original structure or content of the page. Each generated page includes appropriate meta directives to protect SEO by preventing search engine indexing. We also ensured that these links remain invisible to human visitors through carefully implemented attributes and styling. To further minimize the impact to regular visitors, we ensured that these links are presented only to suspected AI scrapers, while allowing legitimate users and verified crawlers to browse normally.</p>
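    <p>The two-stage “topics first, then content” pipeline described above can be sketched as follows. This is a hedged illustration only: <code>llm</code> and <code>sanitize</code> are hypothetical stubs standing in for the Workers AI call and the real sanitization step, and the production pipeline writes its output to R2 rather than returning it in memory.</p>

```python
def llm(prompt: str) -> str:
    # Stub standing in for a Workers AI text-generation call (hypothetical).
    return f"[generated text about: {prompt}]"

def sanitize(html: str) -> str:
    # Placeholder for the XSS-prevention step; real sanitization is far more thorough.
    return html.replace("<script", "&lt;script")

def pregenerate(n_topics: int = 3) -> dict:
    # Stage 1: generate a diverse list of topics up front.
    topics = [llm(f"a scientific topic, #{i}") for i in range(n_topics)]
    # Stage 2: generate and sanitize one decoy page per topic; the real
    # pipeline stores these in R2 for fast retrieval at request time.
    return {topic: sanitize(llm(f"a factual article about {topic}")) for topic in topics}
```

    <p>Generating topics and pages ahead of time keeps the request path fast: serving a decoy is a storage read, not a model inference.</p>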
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PHSCXVMFipAhGJ5IheXW3/a46aad93f2e60f6d892d4c597a752a58/image4.png" />
          </figure><p><sup><i>A graph of daily requests over time, comparing different categories of AI Crawlers.</i></sup></p><p>What makes this approach particularly effective is its role in our continuously evolving bot detection system. When these links are followed, we know with high confidence that it's automated crawler activity, as human visitors and legitimate browsers would never see or click them. This provides us with a powerful identification mechanism, generating valuable data that feeds into our <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning models</a>. By analyzing which crawlers are following these hidden pathways, we can identify new bot patterns and signatures that might otherwise go undetected. This proactive approach helps us <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">stay ahead of AI scrapers</a>, continuously improving our detection capabilities without disrupting the normal browsing experience.</p><p>By building this solution on our developer platform, we've created a system that serves convincing decoy content instantly while maintaining consistent quality - all without impacting your site's performance or user experience.</p>
    <div>
      <h3>How to use AI Labyrinth to stop AI crawlers</h3>
      <a href="#how-to-use-ai-labyrinth-to-stop-ai-crawlers">
        
      </a>
    </div>
    <p>Enabling AI Labyrinth is simple and requires just a single toggle in your Cloudflare dashboard. Navigate to the bot management section within your zone, and toggle the new AI Labyrinth setting to on:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/q1ZQlnnMztSsK8PWD1h0S/ef02f081544dc751f754e9630f17261e/image1.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61qBVDv0WFh8YzrbVULtxq/13ec46d7651c59454f9fe3754e253b85/image3.png" />
          </figure><p>Once enabled, the AI Labyrinth begins working immediately with no additional configuration needed.</p>
    <div>
      <h3>AI honeypots, created by AI</h3>
      <a href="#ai-honeypots-created-by-ai">
        
      </a>
    </div>
    <p>The core benefit of AI Labyrinth is to confuse and distract bots. However, a secondary benefit is to serve as a next-generation honeypot. In this context, a honeypot is just an invisible link that a website visitor can’t see, but a bot parsing HTML would see and click on, therefore revealing itself to be a bot. Honeypots have been used to catch hackers as far back as the <a href="https://medium.com/@jcart657/the-cuckoos-egg-9b502442ea67"><u>1986 Cuckoo’s Egg incident</u></a>. And in 2004, <a href="https://www.projecthoneypot.org/"><u>Project Honeypot</u></a> was created by Cloudflare founders (prior to founding Cloudflare) to let everyone easily deploy free email honeypots, and receive lists of crawler IPs in exchange for contributing to the database. But as bots have evolved, they now proactively look for honeypot techniques like hidden links, making this approach less effective.</p><p>AI Labyrinth won’t simply add invisible links, but will eventually create whole networks of linked URLs that are much more realistic, and not trivial for automated programs to spot. The content on the pages is obviously content no human would spend time consuming, but AI bots are programmed to crawl rather deeply to harvest as much data as possible. When bots hit these URLs, we can be confident they aren’t actual humans, and this information is recorded and automatically fed to our machine learning models to help improve our bot identification. This creates a beneficial feedback loop where each scraping attempt helps protect all Cloudflare customers.</p>
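    <p>For illustration only (this is not Cloudflare’s actual markup; the path and attributes are made up), a classic honeypot is simply a link hidden from people but present in the HTML a crawler parses, and a decoy page can opt out of search indexing with a meta directive:</p>

```html
<!-- Hidden from humans via styling and accessibility attributes;
     a naive HTML parser still sees and follows it. -->
<a href="/trap/a1b2" style="display:none" aria-hidden="true" tabindex="-1">archive</a>

<!-- On a generated decoy page: keep it out of search engine indexes. -->
<meta name="robots" content="noindex, nofollow">
```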
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>This is only the first iteration of using generative AI to thwart bots for us. Currently, while the content we generate is convincingly human, it won’t conform to the existing structure of every website. In the future, we’ll continue to work to make these links harder to spot and make them fit seamlessly into the existing structure of the website they’re embedded in. You can help us by opting in now.</p><p>To take the next step in the fight against bots, <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/detections/bot-traffic"><u>opt-in to AI Labyrinth</u></a> today.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">1Zh4fm4BB1S3xuVwfETiTE</guid>
            <dc:creator>Reid Tatoris</dc:creator>
            <dc:creator>Harsh Saxena</dc:creator>
            <dc:creator>Luis Miglietti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Grinch Bots strike again: defending your holidays from cyber threats]]></title>
            <link>https://blog.cloudflare.com/grinch-bot-2024/</link>
            <pubDate>Mon, 23 Dec 2024 14:01:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare observed a 4x increase in bot-related traffic on Black Friday in 2024. 29% of all traffic on our network on Black Friday was Grinch Bots wreaking holiday havoc. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>Grinch Bots are still stealing Christmas</h2>
      <a href="#grinch-bots-are-still-stealing-christmas">
        
      </a>
    </div>
    <p>Back in 2021, we covered the antics of <a href="https://blog.cloudflare.com/grinch-bot/"><u>Grinch Bots</u></a> and how the combination of proposed regulation and technology could prevent these malicious programs from stealing holiday cheer.</p><p>Fast-forward to 2024 — the <a href="https://www.congress.gov/bill/117th-congress/senate-bill/3276/all-info#:~:text=%2F30%2F2021)-,Stopping%20Grinch%20Bots%20Act%20of%202021,or%20services%20in%20interstate%20commerce"><u>Stop Grinch Bots Act of 2021</u></a> has not passed, and bots are more active and powerful than ever, leaving businesses to fend off increasingly sophisticated attacks on their own. During Black Friday 2024, Cloudflare observed:</p><ul><li><p><b>29% of all traffic on Black Friday was Grinch Bots</b>. Humans still accounted for the majority of all traffic, but bot traffic was up 4x from three years ago in absolute terms. </p></li><li><p><b>1% of traffic on Black Friday came from AI bots. </b>The majority of it came from Claude, Meta, and Amazon. 71% of this traffic was given the green light to access the content requested. </p></li><li><p><b>63% of login attempts across our network on Black Friday were from bots</b>. While this number is high, it was down a few percentage points compared to a month prior, indicating that more humans accessed their accounts and holiday deals. </p></li><li><p><b>Human logins on e-commerce sites increased 7-8%</b> compared to the previous month. </p></li></ul><p>These days, holiday shopping doesn’t start on Black Friday and stop on Cyber Monday. Instead, it stretches through Cyber Week and beyond, including flash sales, pre-orders, and various other promotions. While this provides consumers more opportunities to shop, it also creates more openings for Grinch Bots to wreak havoc.</p>
    <div>
      <h2>Black Friday - Cyber Monday by the numbers</h2>
      <a href="#black-friday-cyber-monday-by-the-numbers">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/the-truth-about-black-friday-and-cyber-monday/">Black Friday and Cyber Monday</a> in 2024 brought record-breaking shopping — and grinching. In addition to looking across our entire network, we also analyzed traffic patterns specifically on a cohort of e-commerce sites. </p><p>Legitimate shoppers flocked to e-commerce sites, with requests reaching an astounding 405 billion on Black Friday, accounting for 81% of the day’s total traffic to e-commerce sites. Retailers reaped the rewards of their deals and advertising, seeing a 50% surge in shoppers week-over-week and a 61% increase compared to the previous month.</p><p>Unfortunately, Grinch Bots were equally active. Total e-commerce bot activity surged to 103 billion requests, representing up to 19% of all traffic to e-commerce sites. Nearly one in every five requests to an online store was not a real customer. That’s a lot of resources to waste on bogus traffic. Cyber Week was a battleground, with bots hoarding inventory, exploiting deals, and disrupting genuine shopping experiences.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XOfNPFabXiMzIrZrXGES2/9fa1eb6d1ca558ef9f405826999bcea2/image9.png" />
          </figure><p>The upside, if there is one, is that there was more human activity on e-commerce sites (81%) than observed on our network more broadly (71%). </p>
    <div>
      <h2>The Grinch Bot’s Modus Operandi</h2>
      <a href="#the-grinch-bots-modus-operandi">
        
      </a>
    </div>
    <p>Cloudflare saw 4x more bot requests than what we observed in 2021. Being able to observe and score all this traffic at scale means we can help customers keep the grinches away. We also got to see patterns that help us better identify the concentration of these attacks: </p><ul><li><p>19% of traffic on e-commerce sites was Grinch Bots</p></li><li><p>1% of traffic to e-commerce sites was from AI Bots. </p></li><li><p>63% of login attempt requests across our network were from bots </p></li><li><p>22% of bot activity originated from residential proxy networks</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Lo8y9dRRZGKujWxwfxBaf/4be5c93638b3dabe7acc823adac5ed6f/image3.png" />
          </figure><p>What are all of these bots up to? </p>
    <div>
      <h3><b>AI bots</b></h3>
      <a href="#ai-bots">
        
      </a>
    </div>
    <p>This year marked a breakthrough for AI-driven bots, agents, and models, with their impact spilling into Black Friday. AI bots went from zero to one, now making up 1% of all bot traffic on e-commerce sites. </p><p>AI-driven bots generated 29 billion requests on Black Friday alone, with Meta-external, Claudebot, and Amazonbot leading the pack. Based on their owners, these bots are meant to crawl to augment training data sets for Llama, Claude, and Alexa respectively. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jzXxH2fR5KWRWc252SbNH/ad40f4a4cb8cf1fdc36558dc4324b717/image10.png" />
          </figure><p>We looked at e-commerce sites specifically to find out if these bots were treating all content equally. While Meta-External and Amazonbot were still in the Top 3 of AI bots reaching e-commerce sites, Bytedance’s Bytespider crawled the most shopping sites.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jOsfPtliY01FSNVVgb2bZ/f444cee15d3818d24ef2e3e53731896c/image2.png" />
          </figure>
    <div>
      <h3><b>Account Takeover (ATO) bots</b></h3>
      <a href="#account-takeover-ato-bots">
        
      </a>
    </div>
    <p>In addition to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scraping</a>, crawling, and shopping, bots also targeted customer accounts on Black Friday. We saw 14.1 billion requests from bots to /login endpoints, accounting for 63% of that day’s login attempts. </p><p>While this number seems high, intuitively it makes sense, given that humans don’t log in to accounts every day, but bots definitely try to crack accounts every day. Interestingly, while humans only accounted for 36% of traffic to login pages on Black Friday, this number was <b><i>up 7-8% compared to the prior month</i></b>. This suggests that more shoppers logged in to capitalize on deals and discounts on Black Friday than in preceding weeks. Human logins peaked at around 40% of all traffic to login sites on the Monday before Thanksgiving, and again on Cyber Monday.  </p><p>Separately, we also saw a 37% increase in leaked passwords used in login requests compared to the prior month. During Birthday Week, we shared <a href="https://blog.cloudflare.com/a-safer-internet-with-cloudflare/#account-takeover-detection"><u>how 65% of internet users are at risk of ATO due to re-use of leaked passwords</u></a>. This surge, coinciding with heightened human and bot traffic, underscores a troubling pattern: both humans and bots continue to depend on common and compromised passwords, amplifying security risks.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1DDDcH41X7fhmfk2VdNaNV/de40dce017459752ba366e5fea331377/image7.png" />
          </figure><p><b>Proxy bots: </b>Regardless of whether they’re crawling your content or hoarding your wares, 22% of bot traffic originated from <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>residential proxy networks</u></a>. This obfuscation makes these requests look like legitimate customers browsing from their homes rather than large cloud networks. The large pool of IP addresses and the diversity of networks pose a challenge to traditional bot defense mechanisms that rely on IP reputation and rate limiting. </p><p>Moreover, the diversity of IP addresses enables the attackers to rotate through them indefinitely. This shrinks the window of opportunity for bot detection systems to effectively detect and stop the attacks. The use of residential proxies is a trend we have been tracking for months now, and Black Friday traffic was within the range we’ve seen throughout this year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Rk3c2sq0kLpl9Y71NdL8K/1186c343f760dda1f64adb4dd8716170/image1.png" />
          </figure><p>If you’re using Cloudflare’s <a href="https://www.cloudflare.com/en-gb/application-services/products/bot-management/"><u>Bot Management</u></a>, your site is already protected from these bots since we update our bot score based on these types of network fingerprints. In May 2024, we <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>introduced</u></a> our latest model optimized for detecting residential proxies. Early results show promising declines in this type of activity, indicating that bot operators may be reducing their reliance on residential proxies. </p>
    <div>
      <h2>The Christmas “Yule” log: how customers can protect themselves</h2>
      <a href="#the-christmas-yule-log-how-customers-can-protect-themselves">
        
      </a>
    </div>
    <p>29% of all traffic on Black Friday was Grinch Bots. To keep Grinch Bots at bay, businesses need year-round bot protection and proactive strategies tailored to the unique challenges of holiday shopping.</p><p>Here are 4 yules (aka “rules”) for the season:</p><p><b>(1) Block bots</b>: 22% of bot traffic originated from residential proxy networks. Our bot management <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>score automatically adjusts based on these network signals</u></a>. Use our <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>Bot Score</u></a> in rules to challenge sensitive actions. </p>
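    <p>As a hedged sketch (the fields are documented Bot Management rule fields; the threshold and paths are illustrative assumptions you should tune to your own traffic), a custom rule challenging likely-automated requests to sensitive actions might look like:</p>

```
(cf.bot_management.score lt 30 and not cf.bot_management.verified_bot
 and http.request.uri.path in {"/login" "/checkout"})
```

    <p>Pairing an expression like this with a Managed Challenge action, rather than a hard block, keeps false positives recoverable for real shoppers.</p>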
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yNb5lIJ0kUuUkwrwRIFV4/21bd315b382663d5601d4629c8bec3a2/image8.png" />
          </figure><p><b>(2) Monitor potential Account Takeover (ATO) attacks</b>: Bots often test <a href="https://blog.cloudflare.com/helping-keep-customers-safe-with-leaked-password-notification/"><u>stolen credentials</u></a> in the months leading up to Cyber Week to refine their strategies. Re-use of stolen credentials makes businesses even more vulnerable. Our account abuse detections help customers monitor login paths for leaked credentials and traffic anomalies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20ICo3eYcY6BTdqAJE5Kj9/90b1373280aaeef54dfbed3c41e5ea8c/Screenshot_2024-12-21_at_17.52.17.png" />
          </figure><p>Check out more examples of related <a href="https://developers.cloudflare.com/waf/detections/leaked-credentials/examples/"><u>rules</u></a> you can create.</p><p><b>(3) Rate limit account and purchase paths: </b>Apply rate-limiting <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/best-practices/"><u>best practices</u></a> on critical application paths. These include limiting new account access/creation from previously seen IP addresses, and leveraging other network fingerprints, to help prevent promo code abuse and inventory hoarding, as well as identifying account takeover attempts through the application of <a href="https://developers.cloudflare.com/bots/concepts/detection-ids/"><u>detection IDs</u></a> and <a href="https://developers.cloudflare.com/waf/managed-rules/check-for-exposed-credentials/"><u>leaked credential checks</u></a>.</p><p><b>(4) Block AI bots</b> abusing shopping features to maintain fair access for human users. If you’re using Cloudflare, you can quickly <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI bots</a> by <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>enabling our automatic AI bot blocking</u></a> feature.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vfiWs6fEkloTwGblzkltR/53c9a89a323a4bb9107ea8f3e83e9b49/image11.png" />
          </figure>
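    <p>To make rule (3) above concrete, here is a hedged sketch of a rate-limiting rule configuration. Every value is illustrative, not a recommendation; counting by fingerprint characteristics (where your plan supports them) is more robust than IP alone against proxy rotation:</p>

```
Expression:       http.request.uri.path eq "/account/create"
Characteristics:  ip.src
Rate:             5 requests per 60 seconds
Action:           Block for 600 seconds
```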
    <div>
      <h2>What to expect in 2025? </h2>
      <a href="#what-to-expect-in-2025">
        
      </a>
    </div>
    <p>Over the next year, e-commerce sites should expect to see more humans shopping for longer periods. As sale periods lengthen (like they did in 2024), we expect more peaks in human activity on e-commerce sites across November and December. This is great for consumers and great for merchants.</p><p>More AI bots and agents will be integrated into e-commerce journeys in 2025. AI bots will not only be crawling sites for training data, but will also integrate into the shopping experience. AI bots did not exist in 2021, but now make up 1% of all bot traffic. This is only the tip of the iceberg, and their growth will explode in the next year. We expect this to pose new risks as bots mimic and act on behalf of humans.</p><p>More sophisticated automation through network, device, and cookie cycling will also become a bigger threat. Bot operators will continue to employ advanced evasion tactics like rotating devices, IP addresses, and cookies to bypass detection.</p><p>Grinch Bots are evolving, and regulation may be stalling, but businesses don’t have to face them alone. We remain resolute in our mission to help build a better Internet ... and holiday shopping experience.</p><p>Even though the holiday season is closing out soon, bots are never on vacation. It’s never too late or too early to start protecting your customers and your business from grinches that work all year round.</p><p>Wishing you all happy holidays and a bot-free new year!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1E192osHHmrXszsfbE2fIa/f7d04ea59ef6355eb78fed4e2fa15bd4/image6.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Grinch]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">5yiWFM9NumXY8HARoEP8x6</guid>
            <dc:creator>Avi Jaisinghani</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Brian Mitchell</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Everywhere with the WAF Rule Builder Assistant, Cloudflare Radar AI Insights, and updated AI bot protection]]></title>
            <link>https://blog.cloudflare.com/bringing-ai-to-cloudflare/</link>
            <pubDate>Fri, 27 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ This year for Cloudflare’s birthday, we’ve extended our AI Assistant capabilities to help you build new WAF rules, added new AI bot & crawler traffic insights to Radar, and given customers new AI bot  ]]></description>
            <content:encoded><![CDATA[ <p>The continued growth of AI has fundamentally changed the Internet over the past 24 months. AI is increasingly ubiquitous, and Cloudflare is leaning into the new opportunities and challenges it presents in a big way. This year for Cloudflare’s birthday, we’ve extended our AI Assistant capabilities to help you build new WAF rules, added AI bot traffic insights on Cloudflare Radar, and given customers new <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">AI bot blocking capabilities</a>.  </p>
    <div>
      <h2>AI Assistant for WAF Rule Builder</h2>
      <a href="#ai-assistant-for-waf-rule-builder">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RYC4wmCDbs0axY92FfkFk/a728906cb6a902dd1c78ec93a0f650c2/BLOG-2564_1.png" />
          </figure><p>At Cloudflare, we’re always listening to your feedback and striving to make our products as user-friendly and powerful as possible. One area where we've heard your feedback loud and clear is in the complexity of creating custom and rate-limiting rules for our Web Application Firewall (WAF). With this in mind, we’re excited to introduce a new feature that will make rule creation easier and more intuitive: the AI Assistant for WAF Rule Builder. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7avSjubqlfg7L8ymKEztgk/7c3c31e50879ec64bccc384bdfcd5524/BLOG-2564_2.png" />
          </figure><p>By simply entering a natural language prompt, you can generate a custom or rate-limiting rule tailored to your needs. For example, instead of manually configuring a complex rule matching criteria, you can now type something like, "Match requests with low bot score," and the assistant will generate the rule for you. It’s not about creating the perfect rule in one step, but giving you a strong foundation that you can build on. </p><p>The assistant will be available in the Custom and Rate Limit Rule Builder for all WAF users. We’re launching this feature in Beta for all customers, and we encourage you to give it a try. We’re looking forward to hearing your feedback (via the UI itself) as we continue to refine and enhance this tool to meet your needs.</p>
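    <p>For example, an illustrative sketch of the kind of expression a prompt like “Match requests with low bot score” might yield (this is an assumption about plausible output, not the assistant’s guaranteed result):</p>

```
(cf.bot_management.score le 29 and not cf.bot_management.verified_bot)
```

    <p>From there, you can refine the generated expression by hand, exactly as the “strong foundation” framing above suggests.</p>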
    <div>
      <h2>AI bot traffic insights on Cloudflare Radar</h2>
      <a href="#ai-bot-traffic-insights-on-cloudflare-radar">
        
      </a>
    </div>
    <p>AI platform providers use bots to crawl and scrape websites, vacuuming up data to use for model training. This is frequently done without the permission of, or a business relationship with, the content owners and providers. In July, Cloudflare urged content owners and providers to <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>“declare their AIndependence”</u></a>, providing them with a way to block AI bots, <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scrapers</a>, and crawlers with a single click. In addition to this so-called “easy button” approach, sites can provide more specific guidance to these bots about what they are and are not allowed to access through directives in a <a href="https://www.cloudflare.com/en-gb/learning/bots/what-is-robots-txt/"><u>robots.txt</u></a> file. Regardless of whether a customer chooses to block or allow requests from AI-related bots, Cloudflare has insight into request activity from these bots, and associated traffic trends over time.</p><p>Tracking traffic trends for AI bots can help us better understand their activity over time — which are the most aggressive and have the highest volume of requests, which launch crawls on a regular basis, etc. The new <a href="https://radar.cloudflare.com/traffic#ai-bot-crawler-traffic"><b><u>AI bot &amp; crawler traffic </u></b><u>graph on Radar’s Traffic page</u></a> provides insight into these traffic trends gathered over the selected time period for the top known AI bots. The associated list of bots tracked here is based on the <a href="https://github.com/ai-robots-txt/ai.robots.txt"><u>ai.robots.txt list</u></a>, and will be updated with new bots as they are identified. 
<a href="https://developers.cloudflare.com/api/operations/radar-get-ai-bots-timeseries-group-by-user-agent"><u>Time series</u></a> and <a href="https://developers.cloudflare.com/api/operations/radar-get-ai-bots-summary-by-user-agent"><u>summary</u></a> data is available from the Radar API as well. (Traffic trends for the full set of AI bots &amp; crawlers <a href="https://radar.cloudflare.com/explorer?dataSet=ai.bots"><u>can be viewed in the new Data Explorer</u></a>.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tYefQaBhTPYpqZPtE6KPu/f60694d0b24de2acba13fe0944589885/BLOG-2564_3.png" />
          </figure>
    <div>
      <h2>Blocking more AI bots</h2>
      <a href="#blocking-more-ai-bots">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UiFu8l6K4Pm3ulxTK3XU0/541d109e29a9ae94e4792fdf94f7e4aa/BLOG-2564_4.png" />
          </figure><p>For Cloudflare’s birthday, we’re following up on our previous blog post, <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><i><u>Declaring Your AIndependence</u></i></a>, with an update on the new detections we’ve added to stop AI bots. Customers who haven’t already done so can simply <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure"><u>click the button</u></a> to block AI bots to gain more protection for their website. </p>
    <div>
      <h3>Enabling dynamic updates for the AI bot rule</h3>
      <a href="#enabling-dynamic-updates-for-the-ai-bot-rule">
        
      </a>
    </div>
    <p>The old button allowed customers to block <i>verified</i> AI crawlers: those that respect robots.txt and crawl-rate limits, and don’t try to hide their behavior. We’ve added new crawlers to that list, but we’ve also expanded the previous rule to include 27 signatures (and counting) of AI bots that <i>don’t </i>follow the rules. We also want to say “thank you” to everyone who used our “<a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8"><u>tip line</u></a>” to point us towards new AI bots. These tips have been extremely helpful in finding bots that would not otherwise have appeared on our radar so quickly. </p><p>Each bot we’ve added is also included in our “Definitely automated” definition. So, if you’re a self-service plan customer using <a href="https://blog.cloudflare.com/super-bot-fight-mode/"><u>Super Bot Fight Mode</u></a>, you’re already protected. Enterprise Bot Management customers will see more requests shift from the “Likely Bot” range to the “Definitely automated” range, which we’ll discuss more below.</p><p>Under the hood, we’ve converted this rule logic to a <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>Cloudflare managed rule</u></a> (the same framework that powers our WAF). This enables our security analysts and engineers to safely push updates to the rule in real time, similar to how new WAF rule changes are rapidly delivered to ensure our customers are protected against the latest CVEs. If you haven’t logged back into the Bots dashboard since the previous version of our AI bot protection was announced, click the button again to update to the latest protection. </p>
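<p>Conceptually, the simplest layer of such a rule reduces to matching the User-Agent header against a centrally maintained signature list. This is an illustrative sketch only, not Cloudflare’s actual managed-rule logic, and the entries shown are examples:</p>

```javascript
// Illustrative sketch only, not Cloudflare's actual managed-rule logic:
// the simplest layer of an AI bot rule is a User-Agent match against a
// centrally updated list of signatures. The entries here are examples.
const AI_BOT_SIGNATURES = [/Bytespider/i, /GPTBot/i, /CCBot/i];

function isKnownAIBot(userAgent) {
  return AI_BOT_SIGNATURES.some((signature) => signature.test(userAgent));
}
```

<p>In practice a managed rule layers heuristics and ML on top of this, since a self-declared User-Agent is trivial to spoof; the advantage of the managed-rule framework is that the signature list can be updated centrally without any customer action.</p>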
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2tI8Yqxt1S0UPapImb32J4/6cb9e9bf423c370383edb820e5722929/BLOG-2564_5.png" />
          </figure>
    <div>
      <h3>The impact of new fingerprints on the model </h3>
      <a href="#the-impact-of-new-fingerprints-on-the-model">
        
      </a>
    </div>
    <p>One hidden beneficiary of fingerprinting new AI bots is our ML model. <a href="https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/"><u>As we’ve discussed before</u></a>, our global ML model uses supervised machine learning and greatly benefits from more sources of labeled bot data. Below, you can see how well our ML model recognized these requests as automated before and after we updated the button with the new rules. To keep things simple, the chart shows only the top five bots by request volume. With the introduction of our new managed rule, we have observed an improvement in our detection capabilities for the majority of these AI bots. Button v1 represents the old option that let customers block only verified AI crawlers, while Button v2 is the newly introduced feature that includes managed rule detections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CZVGyDCp9ZtMrZdIi49fE/aacd04d240e9348b5a9b65bad4b470e2/BLOG-2564_6.jpg" />
          </figure><p>So how did we make our detections more robust? As we have mentioned before, sometimes <a href="https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/"><i><u>a single attribute can give a bot away</u></i></a>. We developed a sophisticated set of heuristics tailored to these AI bots, enabling us to classify them quickly and accurately. Although our ML model was already detecting the vast majority of these requests, the integration of additional heuristics has resulted in a noticeable increase in detection rates for each bot, ensuring we score these requests correctly every time. Transitioning from a purely machine learning approach to incorporating heuristics offers several advantages, including faster detection times and greater certainty in classification. While deploying a machine learning model is complex and time-consuming, new heuristics can be created in minutes. </p><p>The initial launch of the AI bots block button was well-received and is now used by over 133,000 websites, with significant adoption even among our Free tier customers. The newly updated button, launched on August 20, 2024, is rapidly gaining traction. Over 90,000 zones have already adopted the new rule, with approximately 240 new sites integrating it every hour. Overall, we are now helping to protect the intellectual property of more than 146,000 sites from AI bots, and we are currently blocking 66 million requests daily with this new rule. Additionally, we’re excited to announce that support for configuring AI bots protection via Terraform will be available by the end of this year, providing even more flexibility and control for managing your bot protection settings.</p>
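<p>The interplay between heuristics and the model can be sketched as follows. This is a hypothetical illustration, not Cloudflare’s implementation: a matching heuristic short-circuits to a certain verdict, while everything else falls back to a bot-score-style scale where lower means more likely automated (the threshold here is an arbitrary illustration):</p>

```javascript
// Hypothetical sketch (not Cloudflare's implementation) of heuristics
// short-circuiting an ML model.
function classifyRequest(features, modelScore) {
  // A single attribute can give a bot away: if a heuristic fires,
  // classify with certainty and skip the model entirely.
  if (features.matchesKnownBotSignature) {
    return { verdict: "automated", source: "heuristic" };
  }
  // Otherwise fall back to the model's score (lower = more likely
  // automated; the threshold of 30 is an arbitrary illustration).
  return {
    verdict: modelScore < 30 ? "likely automated" : "likely human",
    source: "model",
  };
}
```

<p>This is why heuristic hits double as labeled training data: every request classified with certainty by a heuristic becomes a ground-truth example the supervised model can learn from.</p>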
    <div>
      <h3>Bot behavior</h3>
      <a href="#bot-behavior">
        
      </a>
    </div>
    <p>With the enhancements to our detection capabilities, it is essential to assess the impact of these changes on bot activity across the Internet. Since the launch of the updated AI bots block button, we have been closely monitoring for any shifts in bot activity and adaptation strategies. The most basic fingerprinting technique we use to identify AI bots looks for simple user-agent matches. User-agent matches are important to monitor because they indicate the bot is transparently announcing who it is when crawling a website. </p><p>The graph below shows the volume of traffic we label as AI bot traffic over the past two months. The blue line indicates the daily request count, while the red line represents the monthly average number of requests. In the past two months, we have seen an average reduction of nearly 30 million requests, with a decrease of 40 million in the most recent month. This decline coincides with the release of Button v1 and Button v2. Our hypothesis is that with the new AI bots blocking feature, Cloudflare is blocking a majority of these bots, which is discouraging them from crawling. </p>
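<p>The monthly average line on the chart below is simple to reproduce: a trailing mean over the daily request counts. A minimal sketch, with purely illustrative data:</p>

```javascript
// Illustrative sketch: smooth daily request counts with a trailing mean,
// the same kind of averaging shown as the red line on the chart below.
function trailingAverage(dailyCounts, windowDays = 30) {
  return dailyCounts.map((_, i) => {
    const window = dailyCounts.slice(Math.max(0, i - windowDays + 1), i + 1);
    return window.reduce((sum, n) => sum + n, 0) / window.length;
  });
}
```

<p>With a 30-day window, a sustained drop in daily requests pulls the average down only gradually, which is how a decline on the order of 30 to 40 million requests shows up as a smooth downward monthly trend.</p>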
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23ULxmxBIRskEONlWVIvlA/1dbd3d03239047492c2d4f7307217d97/BLOG-2564_7.jpg" />
          </figure><p>This hypothesis is supported by the observed decline in requests from several top AI crawlers. Specifically, the Bytespider bot reduced its daily requests from approximately 100 million to just 50 million between the end of June and the end of August (see graph below). This reduction could be attributed to several factors, including our new AI bots block button and changes in the crawler's strategy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UwtyZSXULrVzIqLcICGKd/fdf02c15d17e1d7ed248ba5f8a97eb54/BLOG-2564_8.jpg" />
          </figure><p>We have also observed an increase in the accountability of some AI crawlers, which are now more frequently identifying themselves with their declared user agents, reflecting a shift towards more transparent and responsible behavior. Notably, there has been a dramatic surge in the number of requests from the Perplexity user agent. This increase might be linked to <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">previous accusations</a> that Perplexity did not properly present its user agent, which could have prompted a shift in their approach to ensure better identification and compliance. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Hq2vUMqqdNCyaxNTCg3JD/610ad53d57203203c5176229245c8086/BLOG-2564_9.jpg" />
          </figure><p>These trends suggest that our updates are likely affecting how AI crawlers interact with content. We will continue to monitor AI bot activity to help users control who accesses their content and how. By keeping a close watch on emerging patterns, we aim to provide users with the tools and insights needed to make informed decisions about managing their traffic. </p>
    <div>
      <h2>Wrap up</h2>
      <a href="#wrap-up">
        
      </a>
    </div>
    <p>We’re excited to continue to explore the AI landscape, whether we’re finding more ways to make the Cloudflare dashboard usable or new threats to guard against. Our AI insights on Radar update in near real-time, so please join us in watching as new trends emerge and discussing them in the <a href="https://community.cloudflare.com/"><u>Cloudflare Community</u></a>. </p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">6HqKUMoXg0wFIQg9howLMX</guid>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Harsh Saxena</dc:creator>
            <dc:creator>Gauri Baraskar</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Advancing Threat Intelligence: JA4 fingerprints and inter-request signals]]></title>
            <link>https://blog.cloudflare.com/ja4-signals/</link>
            <pubDate>Mon, 12 Aug 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ Explore how Cloudflare's JA4 fingerprinting and inter-request signals provide robust and scalable insights for advanced web security and threat detection.
 ]]></description>
            <content:encoded><![CDATA[ <p>For many years, Cloudflare has used advanced fingerprinting techniques to help block online threats, in products like our <a href="https://blog.cloudflare.com/meet-gatebot-a-bot-that-allows-us-to-sleep"><u>DDoS engine</u></a>, <a href="https://blog.cloudflare.com/patching-the-internet-fixing-the-wordpress-br/"><u>our WAF</u></a>, and <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a>. For the purposes of Bot Management, fingerprinting characteristic elements of client software helps us quickly identify what kind of software is making an HTTP request. It’s an efficient and accurate way to differentiate a browser from a Python script, while preserving user privacy. These fingerprints are used on their own for simple rules, and they underpin complex machine learning models as well. </p><p>Making sure our fingerprints keep up with the pace of change on the Internet is a constant and critical task. Bots will always adapt to try to look more browser-like. Less frequently, browsers will introduce major changes to their behavior and affect the entire Internet landscape. Last year, Google <a href="https://chromestatus.com/feature/5124606246518784"><u>did exactly that</u></a>, making older TLS fingerprints almost useless for identifying the latest version of Chrome.</p>
    <div>
      <h2>JA3 Fingerprint </h2>
      <a href="#ja3-fingerprint">
        
      </a>
    </div>
    <p>The JA3 fingerprint, introduced by <a href="https://github.com/salesforce/ja3"><u>Salesforce researchers</u></a> in 2017 and later adopted by Cloudflare, involves creating a hash of the TLS ClientHello message. This hash includes the ordered list of TLS cipher suites, extensions, and other parameters, providing a unique identifier for each client. Cloudflare customers can use JA3 to build detection rules and gain insight into their network traffic.</p><p>In early 2023, Google <a href="https://chromestatus.com/feature/5124606246518784"><u>implemented a change in Chromium-based browsers</u></a> to shuffle the order of TLS extensions – a strategy aimed at disrupting the detection capabilities of JA3 and enhancing the robustness of the TLS ecosystem. This modification was prompted by concerns that fixed fingerprint patterns could lead to rigid server implementations, potentially causing complications each time Chrome updates were rolled out. Over time, JA3 became less useful for the following reasons:</p><ul><li><p><b>Randomization of TLS extensions:</b> Browsers began randomizing the order of TLS extensions in their ClientHello messages. This change meant that JA3 fingerprints, which relied on the sequential order of these extensions, would vary with each connection, making them unreliable for identifying unique clients. (Further information can be found at <a href="https://www.stamus-networks.com/blog/ja3-fingerprints-fade-browsers-embrace-tls-extension-randomization"><u>Stamus Networks</u></a>.)</p></li><li><p><b>Inconsistencies across tools</b>: Different tools and databases that implemented JA3 fingerprinting often produced varying results due to discrepancies in how they handled TLS extensions and other protocol elements. This inconsistency hindered the effectiveness of JA3 fingerprints for reliable cross-organization sharing and threat intelligence. (Further information can be found at <a href="https://fingerprint.com/blog/limitations-ja3-fingerprinting-accurate-device-identification/"><u>Fingerprint</u></a>.)</p></li><li><p><b>Limited scope and lack of adaptability</b>: JA3 focused solely on elements within the TLS ClientHello packet, covering only a narrow portion of the OSI model’s layers. This limited scope often missed crucial context about a client's environment. Additionally, as newer transport layer protocols like QUIC became popular, JA3’s methodology – originally designed for older client implementations of TLS and not including modern protocols – proved ineffective.</p></li></ul>
    <div>
      <h2>Enter JA4 fingerprint</h2>
      <a href="#enter-ja4-fingerprint">
        
      </a>
    </div>
    <p>In response to these challenges, <a href="https://foxio.io/"><u>FoxIO</u></a> developed JA4, a successor to JA3 that offers a more robust, adaptable, and reliable method for fingerprinting TLS clients across various protocols, including emerging standards like QUIC. Officially launched in September 2023, JA4 is part of the broader <a href="https://blog.foxio.io/ja4%2B-network-fingerprinting"><u>JA4+ suite</u></a> that includes fingerprints for multiple protocols such as TLS, HTTP, and SSH. This suite is designed to be interpretable by both humans and machines, thereby enhancing threat detection and security analysis capabilities.</p><p>The JA4 fingerprint is resistant to the randomization of TLS extensions and incorporates additional useful dimensions, such as Application Layer Protocol Negotiation (ALPN), which were not part of JA3. The introduction of JA4 has been met with positive reception in the cybersecurity community, with several open-source tools and commercial products beginning to incorporate it into their systems, including <a href="https://developers.cloudflare.com/bots/concepts/ja3-ja4-fingerprint/"><u>Cloudflare</u></a>. The JA4 fingerprint is available under the <a href="https://github.com/FoxIO-LLC/ja4/blob/main/License%20FAQ.md"><u>BSD 3-Clause license</u></a>, promoting seamless upgrades from JA3. Other fingerprints within the suite, such as JA4S (TLS Server Response) and JA4H (HTTP Client Fingerprinting), are licensed under the proprietary FoxIO License, which is designed for broader use but requires specific arrangements for commercial monetization.</p><p>Let’s take a look at a specific JA4 fingerprint example, representing the latest version of Google Chrome on Linux:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gjWV3tr6fAzSFNq9Z8Xeu/360f0079d987ebc8f8c61f4596b158be/2361-2.png" />
          </figure><ol><li><p><b>Protocol Identifier (t): </b>Indicates the use of TLS over TCP. This identifier is crucial for determining the underlying protocol, distinguishing it from <i>q</i> for QUIC or <i>d</i> for DTLS.</p></li><li><p><b>TLS Version (13): </b>Represents TLS version 1.3, confirming that the client is using one of the latest secure protocols. The version number is derived from analyzing the highest version supported in the ClientHello, excluding any <a href="https://www.rfc-editor.org/rfc/rfc8701.html"><u>GREASE</u></a> values.</p></li><li><p><b>SNI Presence (d): </b>The presence of a domain name in the <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>Server Name Indication</u></a>. This indicates that the client specifies a domain (d), rather than an IP address (i would indicate the absence of SNI).</p></li><li><p><b>Cipher Suites Count (15): </b>Reflects the total number of cipher suites included in the ClientHello, excluding any GREASE values. It provides insight into the cryptographic options the client is willing to use.</p></li><li><p><b>Extensions Count (16): </b>Indicates the count of distinct extensions presented by the client in the ClientHello. This measure helps identify the range of functionalities or customizations the client supports.</p></li><li><p><b>ALPN Values (h2): </b>Represents the Application-Layer Protocol Negotiation protocol, in this case, HTTP/2, which indicates the protocol preferences of the client for optimized web performance.</p></li><li><p><b>Cipher Hash (8daaf6152771): </b>A truncated SHA256 hash of the list of cipher suites, sorted in hexadecimal order. This unique hash serves as a compact identifier for the client’s cipher suite preferences.</p></li><li><p><b>Extension Hash (02713d6af862): </b>A truncated SHA256 hash of the sorted list of extensions combined with the list of signature algorithms. 
This hash provides a unique identifier that helps differentiate clients based on the extensions and signature algorithms they support.</p></li></ol><p>Here is a <a href="https://www.wireshark.org/"><u>Wireshark</u></a> example of TLS ClientHello from the latest Chrome on Linux querying <a href="https://www.cloudflare.com"><u>https://www.cloudflare.com</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3a1jNGnnYTNZbyshIvWhtb/ead13d6dfdcef44a433bdd3f9c72952e/2361-3.png" />
          </figure><p>Integrating JA4 support into Cloudflare required rethinking our approach to parsing TLS ClientHello messages, which were previously handled in separate implementations across C, Lua, and Go. Recognizing the need to boost performance and ensure memory safety, we developed a new Rust-based crate, client-hello-parser. This unified parser not only simplifies modifications by centralizing all related logic but also prepares us for future transitions, such as replacing nginx with an upcoming Rust-based service. Additionally, this streamlined parser facilitates the exposure of JA4 fingerprints across our platform, improving the integration with Cloudflare's firewall rules, Workers, and analytics systems.</p>
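<p>The decomposition enumerated above can be sketched as a few lines of string slicing. This is an illustrative reading of the underscore-separated JA4 format, not the production Rust parser:</p>

```javascript
// Illustrative decomposition of a JA4 string such as
// "t13d1516h2_8daaf6152771_02713d6af862" (not the production parser).
function splitJA4(ja4) {
  const [a, cipherHash, extensionHash] = ja4.split("_");
  return {
    protocol: a[0],                          // t = TLS over TCP, q = QUIC, d = DTLS
    tlsVersion: a.slice(1, 3),               // "13" = TLS 1.3
    sni: a[3],                               // d = domain in SNI, i = no SNI
    cipherSuiteCount: Number(a.slice(4, 6)), // cipher suites, GREASE excluded
    extensionCount: Number(a.slice(6, 8)),   // distinct extensions
    alpn: a.slice(8, 10),                    // "h2" = HTTP/2
    cipherHash,                              // truncated SHA256 of sorted cipher suites
    extensionHash,                           // truncated SHA256 of extensions + sig algs
  };
}
```

<p>Because the hashes in the second and third segments are computed over <i>sorted</i> lists, a client that shuffles its extension order still produces the same JA4, which is precisely the property JA3 lacked.</p>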
    <div>
      <h2>Parsing ClientHello</h2>
      <a href="#parsing-clienthello">
        
      </a>
    </div>
    <p>client-hello-parser is an internal Rust crate designed for parsing TLS ClientHello messages. It aims to simplify the process of analyzing TLS traffic by providing a straightforward way to decode and inspect the initial handshake messages sent by clients when establishing TLS connections. This crate efficiently populates a ClientHelloParsed struct with relevant parsed fields, including version 1 and version 2 fingerprints, and JA3 and JA4 hashes, which are essential for network traffic analysis and fingerprinting.</p><p>Key benefits of the client-hello-parser library include:</p><ul><li><p><b>Optimized memory usage</b>: The library achieves amortized zero heap allocations, verified through extensive testing with the <a href="https://crates.io/crates/dhat"><u>dhat</u></a> crate to track memory allocations. Utilizing the <a href="https://crates.io/crates/tinyvec"><u>tiny_vec</u></a> crate, it begins with stack allocations for small vectors backed by fixed-size arrays, resorting to heap allocations only when these vectors exceed their initial size. This method ensures efficient reuse of all vectors, maintaining amortized zero heap allocations.</p></li><li><p><b>Memory safety:</b> Reinforced by Rust's robust borrow checker and complemented by extensive fuzzing, which has helped identify and resolve potential security vulnerabilities previously undetected in C implementations.</p></li><li><p><b>Ultra-low latency</b>: The parser benefits from using <a href="https://crates.io/crates/faster-hex"><u>faster_hex</u></a> for efficient hex encoding/decoding, which utilizes SIMD instructions to speed up processing. The use of Rust iterators also helps in optimizing performance, often allowing the compiler to generate SIMD-optimized assembly code. This efficiency is further enhanced through the use of BigEndianIterator, which allows for efficient streaming-like processing of TLS ClientHello bytes in a single pass.</p></li></ul><p>Parser benchmark results:</p>
            <pre><code>client_hello_benchmark/parse/parse-short-502
                        time:   [497.15 ns 497.23 ns 497.33 ns]
                        thrpt:  [2.0107 Melem/s 2.0111 Melem/s 2.0115 Melem/s]
client_hello_benchmark/parse/parse-long-1434
                        time:   [992.82 ns 993.55 ns 994.99 ns]
                        thrpt:  [1.0050 Melem/s 1.0065 Melem/s 1.0072 Melem/s]</code></pre>
            <p>
The benchmark results demonstrate that the parser efficiently handles different sizes of ClientHello messages, with shorter messages being processed at a rate of approximately 2 million elements per second, and longer messages at around 1 million elements per second, showcasing the effectiveness of SIMD optimizations and Rust's iterator performance in real-world applications.</p><p><b>Robust testing suite:</b> Includes dozens of real-life TLS ClientHello message examples, with parsed components verified against Wireshark with <a href="https://github.com/fullylegit/ja3"><u>JA3</u></a> and <a href="https://github.com/FoxIO-LLC/ja4/tree/main/wireshark"><u>JA4</u></a> plugins. Additionally, <a href="https://github.com/rust-fuzz/cargo-fuzz"><u>Cargo fuzzer</u></a> with memory sanitizer ensures no memory leaks or edge cases leading to core dumps. Backward compatibility tests with the legacy C parser, imported as a dependency and called via FFI, confirm that both parsers yield equivalent results.</p><p><b>Seamless integration with nginx</b>: The crate, compiled as a dynamic library, is linked to the nginx binary, ensuring a smooth transition from the legacy parser to the new Rust-based parser through backwards compatibility tests.</p><p>The transition to a new Rust-based parser has enabled the retirement of multiple implementations across different languages (C, Lua, and Go), significantly enhancing performance and parser robustness against edge cases. This shift also facilitates the easier integration of new features and business logic for parsing TLS ClientHello messages, streamlining future expansions and security updates.</p><p>With Cloudflare JA4 fingerprints implemented on our network, we were left with another problem to solve. 
When JA3 was released, we saw some scenarios where customers were surprised by traffic from a new JA3 fingerprint and blocked it, only to find the fingerprint was a new browser release, or an OS update had caused a change in the fingerprint used by their mobile device. A hash alone doesn’t give customers that context. We wanted to give our customers the context they need to make informed decisions about the safety of a fingerprint, so they can act quickly and confidently on it. As more of our customers embrace AI, we’ve also heard growing demand to break out the signals that power our bot detection. These customers want to run complex models on proprietary data that has to stay in their control, but they want to have Cloudflare’s unique perspective on Internet traffic when they do it. To us, both use cases sounded like the same problem. </p>
    <div>
      <h2>Enter JA4 Signals </h2>
      <a href="#enter-ja4-signals">
        
      </a>
    </div>
    <p>In the ever-evolving landscape of web security, traditional fingerprinting techniques like JA3 and JA4 have proven invaluable for identifying and managing web traffic. However, these methods alone are not sufficient to address the sophisticated tactics employed by malicious agents. Fingerprints can be easily spoofed, they change frequently, and traffic patterns and behaviors are constantly evolving. This is where JA4 Signals come into play, providing a robust and comprehensive approach to traffic analysis.</p><p>JA4 Signals are inter-request features computed based on the last hour of all traffic that Cloudflare sees globally. On a daily basis, we analyze over <b>15 million</b> unique JA4 fingerprints generated from more than 500 million user agents and billions of IP addresses. This breadth of data enables JA4 Signals to provide aggregated statistics that offer deeper insights into global traffic patterns – far beyond what single-request or connection fingerprinting can achieve. These signals are crucial for enhancing security measures, whether through simple firewall rules, Workers scripts, or advanced machine learning models.</p><p>Let's consider a specific example of JA4 Signals from a Firewall events activity log, which involves the latest version of Chrome:</p><p>This example highlights that a particular HTTP request received a Bot Score of 95, suggesting it likely originated from a human user operating a browser rather than an automated program or a bot. Analyzing JA4 Signals in this context provides deeper insight into the behavior of this client (latest Linux Chrome) in comparison to other network clients and their respective JA4 fingerprints. 
Here are a few examples of the signals our customers can see on any request:</p><table><tr><td><p><b><u>JA4 Signal</u></b></p></td><td><p><b><u>Description</u></b></p></td><td><p><b><u>Value example</u></b></p></td><td><p><b><u>Interpretation</u></b></p></td></tr><tr><td><p>browser_ratio_1h</p></td><td><p>The ratio of requests originating from browser-based user agents for the JA4 fingerprint in the last hour. Higher values suggest a higher proportion of browser-based requests.</p></td><td><p>0.942</p></td><td><p>Indicates a 94.2% browser-based request rate for this JA4.</p></td></tr><tr><td><p>cache_ratio_1h</p></td><td><p>The ratio of cacheable responses for the JA4 fingerprint in the last hour. Higher values suggest a higher proportion of responses that can be cached.</p></td><td><p>0.534</p></td><td><p>Shows a 53.4% cacheable response rate for this JA4.</p></td></tr><tr><td><p>h2h3_ratio_1h</p></td><td><p>The ratio of HTTP/2 and HTTP/3 requests combined with the total number of requests for the JA4 fingerprint in the last hour. Higher values indicate a higher proportion of HTTP/2 and HTTP/3 requests compared to other protocol versions.</p></td><td><p>0.987</p></td><td><p>Reflects a 98.7% rate of HTTP/2 and HTTP/3 requests.</p></td></tr><tr><td><p>reqs_quantile_1h</p></td><td><p>The quantile position of the JA4 fingerprint based on the number of requests across all fingerprints in the last hour. Higher values indicate a relatively higher number of requests compared to other fingerprints.</p></td><td><p>1</p></td><td><p>High volume of requests compared to other JA4s.</p></td></tr></table><p>The JA4 fingerprint and JA4 Signals are now available in the Firewall Rules UI, Bot Analytics and Workers. Customers can now use these fields to write custom rules, rate-limiting rules, transform rules, or Workers logic using JA4 fingerprint and JA4 Signals. </p><p>Let's demonstrate how to use JA4 Signals with the following Worker example. 
This script processes incoming requests by parsing and categorizing JA4 Signals, providing a clear structure for further analysis or rule application within Cloudflare Workers:</p>
            <pre><code>/**
 * Event listener for 'fetch' events. This triggers on every request to the worker.
 */
addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

/**
 * Main handler for incoming requests.
 * @param {Request} request - The incoming request object from the fetch event.
 * @returns {Response} A response object with JA4 Signals in JSON format.
 */
async function handleRequest(request) {
  // Safely access the ja4Signals object using optional chaining, which prevents errors if properties are undefined.
  const ja4Signals = request.cf?.botManagement?.ja4Signals || {};

  // Construct the response content, including both the original ja4Signals and the parsed signals.
  const responseContent = {
    ja4Signals: ja4Signals,
    ja4SignalsParsed: parseJA4Signals(ja4Signals)
  };

  // Return a JSON response with appropriate headers.
  return new Response(JSON.stringify(responseContent), {
    status: 200,
    headers: {
      "content-type": "application/json;charset=UTF-8"
    }
  })
}

/**
 * Parses the JA4 Signals into categorized groups based on their names.
 * @param {Object} ja4Signals - The JA4 Signals object that may contain various metrics.
 * @returns {Object} An object with categorized JA4 Signals: ratios, ranks, and quantiles.
 */
function parseJA4Signals(ja4Signals) {
  // Define the keys for each category of signals.
  const ratios = ['h2h3_ratio_1h', 'heuristic_ratio_1h', 'browser_ratio_1h', 'cache_ratio_1h'];
  const ranks = ['uas_rank_1h', 'paths_rank_1h', 'reqs_rank_1h', 'ips_rank_1h'];
  const quantiles = ['reqs_quantile_1h', 'ips_quantile_1h'];

  // Return an object with each category containing only the signals that are present.
  return {
    ratios: filterKeys(ja4Signals, ratios),
    ranks: filterKeys(ja4Signals, ranks),
    quantiles: filterKeys(ja4Signals, quantiles)
  };
}

/**
 * Filters the keys in the ja4Signals object that match the list of specified keys and are not undefined.
 * @param {Object} ja4Signals - The JA4 Signals object.
 * @param {Array&lt;string&gt;} keys - An array of keys to filter from the ja4Signals object.
 * @returns {Object} A filtered object containing only the specified keys that are present in ja4Signals.
 */
function filterKeys(ja4Signals, keys) {
  const filtered = {};
  // Iterate over the specified keys and add them to the filtered object if they exist in ja4Signals.
  keys.forEach(key =&gt; {
    // Check if the key exists and is not undefined to handle optional presence of each signal.
    if (ja4Signals &amp;&amp; ja4Signals[key] !== undefined) {
      filtered[key] = ja4Signals[key];
    }
  });
  return filtered;
}</code></pre>
            
    <div>
      <h2><b>Benefits of JA4 Signals</b></h2>
      <a href="#benefits-of-ja4-signals">
        
      </a>
    </div>
    <ul><li><p><b>Comprehensive traffic analysis</b>: JA4 Signals aggregate data over an hour to provide a holistic view of traffic patterns. This method enhances the ability to identify emerging threats and abnormal behaviors by analyzing changes over time rather than in isolation.</p></li><li><p><b>Precision in anomaly detection</b>: Leveraging detailed inter-request features, JA4 Signals enable the precise detection of anomalies that may be overlooked by single-request fingerprinting. This leads to more accurate identification of sophisticated cyber threats.</p></li><li><p><b>Globally scalable insights</b>: By synthesizing data at a global scale, JA4 Signals harness the strength of Cloudflare’s network intelligence. This extensive analysis makes the system less susceptible to manipulation and provides a resilient foundation for security protocols.</p></li><li><p><b>Dynamic security enforcement</b>: JA4 Signals can dynamically inform security rules, from simple firewall configurations to complex machine learning algorithms. This adaptability ensures that security measures evolve in tandem with changing traffic patterns and emerging threats.</p></li><li><p><b>Reduction in false positives and negatives</b>: With the detailed insights provided by JA4 Signals, security systems can distinguish between legitimate and malicious traffic more effectively, reducing the occurrence of false positives and negatives and improving overall system reliability.</p></li></ul>
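<p>As a sketch of the "dynamic security enforcement" idea above, a Worker could act on the hourly ratios directly. The thresholds below are purely illustrative, not Cloudflare guidance, and the helper function is hypothetical:</p>

```javascript
// Illustrative only: thresholds here are hypothetical, not tuned guidance.
// ja4Signals has the shape shown earlier (hourly ratios in the range [0, 1]).
function looksAutomated(ja4Signals) {
  const { browser_ratio_1h, h2h3_ratio_1h } = ja4Signals || {};
  // A JA4 fingerprint rarely seen from verified browsers in the last hour
  // is a strong hint of automation.
  if (browser_ratio_1h !== undefined && browser_ratio_1h < 0.1) return true;
  // Clients on this fingerprint that almost never negotiate HTTP/2 or HTTP/3
  // also stand out from typical browser traffic.
  if (h2h3_ratio_1h !== undefined && h2h3_ratio_1h < 0.05) return true;
  return false;
}

async function handleRequest(request) {
  const ja4Signals = request.cf?.botManagement?.ja4Signals || {};
  if (looksAutomated(ja4Signals)) {
    return new Response("Automated traffic detected", { status: 403 });
  }
  return fetch(request);
}
```

<p>In practice the same signals are more commonly consumed through WAF Custom Rules or fed into a scoring model, rather than hard-coded cutoffs like these.</p>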
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The introduction of JA4 fingerprint and JA4 Signals marks a significant milestone in advancing Cloudflare’s security offerings, including Bot Management and <a href="https://www.cloudflare.com/ddos/"><u>DDoS protection</u></a>. These tools not only enhance the robustness of our traffic analysis but also showcase the continuous evolution of our network fingerprinting techniques. The efficiency of computing JA4 fingerprints enables real-time detection and response to emerging threats. Similarly, by leveraging aggregated statistics and inter-request features, JA4 Signals provide deep insights into traffic patterns at speeds measured in microseconds, ensuring that no detail is too small to be captured and analyzed.</p><p>These security features are underpinned by the scalable techniques and open-sourced libraries outlined in <a href="https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare"><u>"Every request, every microsecond: scalable machine learning at Cloudflare"</u></a>. This discussion highlights how Cloudflare's innovations not only analyze vast amounts of data but also transform this analysis into actionable, reliable, and dynamically adaptable security measures.</p><p>Any Enterprise business with a bot problem will benefit from Cloudflare’s unique JA4 implementation and our perspective on bot traffic, but customers who run their own internal threat models will also benefit from access to data insights from a network that processes over 50 million requests per second. Please <a href="https://www.cloudflare.com/plans/enterprise/contact/"><u>get in touch</u></a> with us to learn more about our Bot Management offering.</p> ]]></content:encoded>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">4sRriOEqIpi6j3IvpnSB6B</guid>
            <dc:creator>Alex Bocharov</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Application Security report: 2024 update]]></title>
            <link>https://blog.cloudflare.com/application-security-report-2024-update/</link>
            <pubDate>Thu, 11 Jul 2024 17:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s updated 2024 view on Internet cyber security trends spanning global traffic insights, bot traffic insights, API traffic insights, and client-side risks ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QISWKhi85GFq3Aqcj9NSX/a983a69c4df14e83712ecc0beb71117a/AD_4nXftYZ9tWp6nRYAEltNHH2LVZZDWKRMZn4Y8oTwdLKuFY-wcPHiULhXzJouGXdjVVDpCeR9T63J_cCxqSzKoq4QsgeXVxQ7MmkL5GS0muw5jhWFRr1fhfpVoH314" />
            
</figure><p>Over the last twelve months, the Internet security landscape has changed dramatically. Geopolitical uncertainty, coupled with an active 2024 voting season in many countries across the world, has led to a substantial increase in malicious traffic activity across the Internet. In this report, we take a look at Cloudflare’s perspective on Internet application security.</p><p>This report is the fourth edition of our Application Security Report and is an official update to our <a href="/application-security-report-q2-2023">Q2 2023 report</a>. New in this report is a section focused on client-side security within the context of web applications.</p><p>Throughout the report we discuss various insights. From a global standpoint, mitigated traffic across the whole network now averages 7%, and WAF and Bot mitigations are the source of over half of that. While DDoS attacks remain the number one attack vector used against web applications, targeted CVE attacks are also worth keeping an eye on, as we have seen exploits as fast as 22 minutes after a proof of concept was released.</p><p>Focusing on bots, about a third of all traffic we observe is automated, and of that, the vast majority (93%) is not generated by bots in Cloudflare’s verified list and is potentially malicious.</p><p>API traffic is also still growing, now accounting for 60% of all dynamic traffic, and, perhaps more concerning, organizations have up to a quarter of their API endpoints unaccounted for.</p><p>We also touch on client-side security and the proliferation of third-party integrations in web applications. On average, enterprise sites integrate 47 third-party endpoints according to Page Shield data.</p><p>It is also worth mentioning that since the last report, our network, from which we gather the data and insights, is bigger and faster: we are now processing an average of 57 million HTTP requests/second (<b>+23.9%</b> YoY) and 77 million at peak (<b>+22.2%</b> YoY). From a DNS perspective, we are handling 35 million DNS queries per second (<b>+40%</b> YoY). This is the sum of authoritative and resolver requests served by our infrastructure.</p><p>Perhaps even more noteworthy: focusing on HTTP requests only, in Q1 2024 Cloudflare blocked an average of 209 billion cyber threats each day (<b>+86.6%</b> YoY). That is a substantial increase in relative terms compared to the same time last year.</p><p>As usual, before we dive in, we need to define our terms.</p>
    <div>
      <h2>Definitions</h2>
      <a href="#definitions">
        
      </a>
    </div>
    <p>Throughout this report, we will refer to the following terms:</p><ul><li><p><b>Mitigated traffic:</b> any eyeball HTTP* request that had a “terminating” action applied to it by the Cloudflare platform. These include the following actions: <code>BLOCK</code>, <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/cloudflare-challenges/#legacy-captcha-challenge"><code>CHALLENGE</code></a>, <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/cloudflare-challenges/#js-challenge"><code>JS_CHALLENGE</code></a> and <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/cloudflare-challenges/#managed-challenge-recommended"><code>MANAGED_CHALLENGE</code></a>. This does not include requests that had the following actions applied: <code>LOG</code>, <code>SKIP</code>, <code>ALLOW</code>. They also accounted for a relatively small percentage of requests. Additionally, we improved our calculation regarding the <code>CHALLENGE</code> type actions to ensure that only unsolved challenges are counted as mitigated. A detailed <a href="https://developers.cloudflare.com/ruleset-engine/rules-language/actions/">description of actions</a> can be found in our developer documentation. This has not changed from last year’s report.</p></li><li><p><b>Bot traffic/automated traffic</b>: any HTTP* request identified by Cloudflare’s <a href="https://www.cloudflare.com/products/bot-management/">Bot Management</a> system as being generated by a bot. This includes requests with a <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot score</a> between <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">1 and 29</a> inclusive. This has not changed from last year’s report.</p></li><li><p><b>API traffic</b>: any HTTP* request with a response content type of XML or JSON. 
Where the response content type is not available, such as for mitigated requests, the equivalent Accept content type (specified by the user agent) is used instead. In this latter case, API traffic won’t be fully accounted for, but it still provides a good representation for the purposes of gaining insights. This has not changed from last year’s report.</p></li></ul><p>Unless otherwise stated, the time frame evaluated in this post is the period from April 1, 2023, through March 31, 2024, inclusive.</p><p>Finally, please note that the data is calculated based only on traffic observed across the Cloudflare network and does not necessarily represent overall HTTP traffic patterns across the Internet.</p><p><sup>*When referring to HTTP traffic we mean both HTTP and HTTPS.</sup></p>
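<p>To make the three definitions concrete, here is a minimal sketch of how a single request record could be labeled under them. The record shape and field names are illustrative, not a Cloudflare API, and the sketch omits the refinement that only unsolved challenges count as mitigated:</p>

```javascript
// Terminating actions per the "mitigated traffic" definition;
// LOG, SKIP, and ALLOW deliberately excluded.
const TERMINATING_ACTIONS = new Set([
  "BLOCK", "CHALLENGE", "JS_CHALLENGE", "MANAGED_CHALLENGE",
]);

// Label a single (hypothetical) request record under the report's definitions.
function classify(record) {
  return {
    // Mitigated: a terminating action was applied.
    mitigated: TERMINATING_ACTIONS.has(record.action),
    // Bot traffic: bot score between 1 and 29 inclusive.
    bot: record.botScore >= 1 && record.botScore <= 29,
    // API traffic: response content type (or the Accept header
    // when the response type is unavailable) of JSON or XML.
    api: /json|xml/i.test(record.contentType || ""),
  };
}
```

<p>Note that the three labels are independent: a single request can be counted in all three buckets at once.</p>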
    <div>
      <h2>Global traffic insights</h2>
      <a href="#global-traffic-insights">
        
      </a>
    </div>
    
    <div>
      <h3>Average mitigated daily traffic increases to nearly 7%</h3>
      <a href="#average-mitigated-daily-traffic-increases-to-nearly-7">
        
      </a>
    </div>
    <p>Compared to the prior 12-month period, Cloudflare mitigated a higher percentage of application layer traffic and layer 7 (L7) DDoS attacks between Q2 2023 and Q1 2024, growing from 6% to 6.8%.</p><p><b>Figure 1:</b> Percent of mitigated HTTP traffic increasing over the last 12 months</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HrbJsLZMv12tBdVLAJEwk/56519fe3c06a1996324ba7a0e710fe5e/unnamed-6.png" />
            
</figure><p>During large global attack events, we can observe spikes of mitigated traffic approaching 12% of all HTTP traffic. These spikes are far larger than any we had previously observed across our entire network.</p>
    <div>
      <h3>WAF and Bot mitigations accounted for 53.9% of all mitigated traffic</h3>
      <a href="#waf-and-bot-mitigations-accounted-for-53-9-of-all-mitigated-traffic">
        
      </a>
    </div>
    <p>As the Cloudflare platform continues to expose additional signals to identify potentially malicious traffic, customers have been actively using these signals in WAF Custom Rules to improve their security posture. Example signals include our <a href="https://developers.cloudflare.com/waf/about/waf-attack-score/">WAF Attack Score</a>, which identifies malicious payloads, and our <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">Bot Score</a>, which identifies automated traffic.</p><p>After WAF and Bot mitigations, HTTP DDoS rules are the second-largest contributor to mitigated traffic. IP reputation, that uses our <a href="https://developers.cloudflare.com/waf/tools/security-level/">IP threat score</a> to block traffic, and access rules, which are simply IP and country blocks, follow in third and fourth place.</p><p><b>Figure 2: Mitigated traffic by Cloudflare product group</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/l9emHl05MUfrpqsLehyMr/51e8d8d327a5d78d90126de82bebcc38/unnamed--5--3.png" />
            
            </figure>
    <div>
      <h3>CVEs exploited as fast as 22 minutes after proof-of-concept published</h3>
      <a href="#cves-exploited-as-fast-as-22-minutes-after-proof-of-concept-published">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/security/threats/zero-day-exploit/">Zero-day exploits</a> (also called zero-day threats) are increasing, as is the speed of weaponization of disclosed CVEs. In 2023, 97 zero-days were <a href="https://cloud.google.com/blog/topics/threat-intelligence/2023-zero-day-trends">exploited in the wild</a>, and that’s along with a 15% increase of disclosed <a href="https://www.cve.org/About/Overview">CVEs</a> between 2022 and 2023.</p><p>Looking at CVE exploitation attempts against customers, Cloudflare mostly observed scanning activity, followed by command injections, and some exploitation attempts of vulnerabilities that had PoCs available online, including Apache <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-50164">CVE-2023-50164</a> and <a href="https://nvd.nist.gov/vuln/detail/cve-2022-33891">CVE-2022-33891</a>, Coldfusion <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-29298">CVE-2023-29298</a> <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-38203">CVE-2023-38203</a> and <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-26360">CVE-2023-26360</a>, and MobileIron <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-35082">CVE-2023-35082</a>.</p><p>This trend in CVE exploitation attempt activity indicates that attackers are going for the easiest targets first, and likely having success in some instances given the continued activity around old vulnerabilities.</p><p>As just one example, Cloudflare observed exploitation attempts of <a href="https://nvd.nist.gov/vuln/detail/CVE-2024-27198">CVE-2024-27198</a> (JetBrains TeamCity authentication bypass) at 19:45 UTC on March 4, just 22 minutes after proof-of-concept code was published.</p><p><b>Figure 3:</b> JetBrains TeamCity authentication bypass timeline</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2193b87OL8QLpaE3hxF7RP/06b61f3bdcac2d4ce8364a4b408e35f4/image8-2.png" />
            
            </figure><p>The speed of exploitation of disclosed CVEs is often quicker than the speed at which humans can create WAF rules or create and deploy patches to mitigate attacks. This also applies to our own internal security analyst team that maintains the WAF Managed Ruleset, which has led us to <a href="/detecting-zero-days-before-zero-day">combine the human written signatures with an ML-based approach</a> to achieve the best balance between low false positives and speed of response.</p><p>CVE exploitation campaigns from specific threat actors are clearly visible when we focus on a subset of CVE categories. For example, if we filter on CVEs that result in remote code execution (RCE), we see clear attempts to exploit Apache and Adobe installations towards the end of 2023 and start of 2024 along with a notable campaign targeting Citrix in May of this year.</p><p><b>Figure 4:</b> Worldwide daily number of requests for Code Execution CVEs</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A2f3Shrcp7rw6zmE9VoNa/90dd0ab1a3e9a80dddfc3c893bebb283/unnamed--1--4.png" />
            
            </figure><p>Similar views become clearly visible when focusing on other CVEs or specific attack categories.</p>
    <div>
      <h3>DDoS attacks remain the most common attack against web applications</h3>
      <a href="#ddos-attacks-remain-the-most-common-attack-against-web-applications">
        
      </a>
    </div>
    <p>DDoS attacks remain the most common attack type against web applications, with DDoS comprising 37.1% of all mitigated application traffic over the time period considered.</p><p><b>Figure 5:</b> Volume of HTTP DDoS attacks over time</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3578XXGADnTHyYT6Kdf6ez/19dc83ad5c2b4989d0542f39fb755fa5/unnamed--6--3.png" />
            
</figure><p>We saw a large increase in volumetric attacks in February and March 2024. This was partly the result of improved detections deployed by our teams, in addition to increased attack activity. In the first quarter of 2024 alone, Cloudflare’s automated defenses mitigated 4.5 million unique DDoS attacks, an amount equivalent to 32% of all the DDoS attacks Cloudflare mitigated in 2023. Specifically, application layer HTTP DDoS attacks increased by 93% YoY and 51% quarter-over-quarter (QoQ).</p><p>Cloudflare correlates DDoS attack traffic and defines unique attacks by looking at event start and end times along with target destination.</p><p>Motives for launching DDoS attacks range from targeting specific organizations for financial gains (ransom), to testing the capacity of botnets, to targeting institutions and countries for political reasons. As an example, Cloudflare observed a 466% increase in DDoS attacks on Sweden after its acceptance to the NATO alliance on March 7, 2024. This mirrored the DDoS pattern observed during Finland’s NATO acceptance in 2023. The size of DDoS attacks themselves is also increasing.</p><p>In August 2023, Cloudflare mitigated a hyper-volumetric <a href="/zero-day-rapid-reset-http2-record-breaking-ddos-attack">HTTP/2 Rapid Reset</a> DDoS attack that peaked at 201 million requests per second (rps) – three times larger than any previously observed attack. In the attack, threat actors exploited a zero-day vulnerability in the HTTP/2 protocol that had the potential to incapacitate nearly any server or application supporting HTTP/2. This underscores how menacing DDoS vulnerabilities are for unprotected organizations.</p><p>Gaming and gambling became the sector most targeted by DDoS attacks, followed by Internet technology companies and cryptomining.</p><p><b>Figure 6:</b> Largest HTTP DDoS attacks as seen by Cloudflare, by year</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1FYxByBvHz2MQQ8WhKErWk/a7f757447c04820ea5a642838b3e5e10/image1.jpg" />
            
            </figure>
    <div>
      <h2>Bot traffic insights</h2>
      <a href="#bot-traffic-insights">
        
      </a>
    </div>
    <p>Cloudflare has continued to invest heavily in our bot detection systems. In early July, we declared <a href="/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click">AIndependence</a> to help preserve a safe Internet for content creators, offering a brand new “easy button” to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI bots</a>. It’s available for all customers, including those on our free tier.</p><p>Major progress has also been made in other complementary systems such as our Turnstile offering, a user-friendly, privacy-preserving alternative to CAPTCHA.</p><p>All these systems and technologies help us better identify and differentiate human traffic from automated bot traffic.</p>
    <div>
      <h3>On average, bots comprise one-third of all application traffic</h3>
      <a href="#on-average-bots-comprise-one-third-of-all-application-traffic">
        
      </a>
    </div>
    <p>31.2% of all application traffic processed by Cloudflare is bot traffic. This percentage has stayed relatively consistent (hovering at about 30%) over the past three years.</p><p>The term bot traffic may carry a negative connotation, but in reality bot traffic is not necessarily good or bad; it all depends on the purpose of the bots. Some are “good” and perform a needed service, such as customer service chatbots and authorized search engine crawlers. But some bots misuse an online product or service and need to be blocked.</p><p>Different application owners may have different criteria for what they deem a “bad” bot. For example, some organizations may want to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">block a content scraping bot</a> that is being deployed by a competitor to undercut on prices, whereas an organization that does not sell products or services may not be as concerned with content scraping. Known, good bots are classified by Cloudflare as “verified bots.”</p>
    <div>
      <h3>93% of bots we identified were unverified bots, and potentially malicious</h3>
      <a href="#93-of-bots-we-identified-were-unverified-bots-and-potentially-malicious">
        
      </a>
    </div>
    <p>Unverified bots are often created for disruptive and harmful purposes, such as hoarding inventory, launching DDoS attacks, or attempting to take over an account via brute force or credential stuffing. Verified bots are those that are known to be safe, such as search engine crawlers, and Cloudflare aims to verify all major legitimate bot operators. <a href="https://radar.cloudflare.com/traffic/verified-bots">A list of all verified bots</a> can be found in our documentation.</p><p>Attackers leveraging bots focus most on industries that could bring them large financial gains. For example, consumer goods websites are often the target of inventory hoarding, price scraping run by competition or automated applications aimed at exploiting some sort of arbitrage (for example, sneaker bots). This type of abuse can have a significant financial impact on the target organization.</p><p><b>Figure 8:</b> Industries with the highest median daily share of bot traffic</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XIeyHN59gsaqxq0OQvCKV/4e23e2b081263ae09c6ca3de6aac2cdd/unnamed--7--3.png" />
            
            </figure>
    <div>
      <h2>API traffic insights</h2>
      <a href="#api-traffic-insights">
        
      </a>
    </div>
<p>Consumers and end users expect dynamic web and mobile experiences powered by APIs. For businesses, APIs fuel competitive advantages, greater business intelligence, faster cloud deployments, integration of new AI capabilities, and more.</p><p>However, APIs introduce new risks by providing outside parties additional attack surfaces through which to access applications and databases, all of which also need to be secured. As a consequence, many of the attacks we observe now target API endpoints first rather than traditional web interfaces.</p><p>These additional security concerns are, of course, not slowing down the adoption of API-first applications.</p>
    <div>
      <h3>60% of dynamic (non cacheable) traffic is API-related</h3>
      <a href="#60-of-dynamic-non-cacheable-traffic-is-api-related">
        
      </a>
    </div>
    <p>This is a two percentage point increase compared to last year’s report. Of this 60%, about 4% on average is mitigated by our security systems.</p><p><b>Figure 9</b>: Share of mitigated API traffic</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YmPWYVWpG250GqjO2noHu/63582e53d5f43cda2068385ea9713976/unnamed--3--3.png" />
            
            </figure><p>A substantial spike is visible around January 11-17 that accounts for almost a 10% increase in traffic share alone for that period. This was due to a specific customer zone receiving attack traffic that was mitigated by a WAF Custom Rule.</p><p>Digging into mitigation sources for API traffic, we see the WAF being the largest contributor, as standard malicious payloads are commonly applicable to both API endpoints and standard web applications.</p><p><b>Figure 10:</b> API mitigated traffic broken down by product group</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76D0iSJOzvx4ArJYIXIfLI/a4bf06a320cc8e9cc03b7da2ab6f35b3/unnamed--4--3.png" />
            
            </figure>
    <div>
      <h2>A quarter of APIs are “shadow APIs”</h2>
      <a href="#a-quarter-of-apis-are-shadow-apis">
        
      </a>
    </div>
<p>You cannot protect what you cannot see. And many organizations lack accurate API inventories, even when they believe they can correctly identify API traffic.</p><p>Using our proprietary machine learning model that scans not just known API calls, but all HTTP requests (identifying API traffic that may be going unaccounted for), we found that organizations had 33% more public-facing API endpoints than they knew about. This number was the median, and it was calculated by comparing the number of API endpoints detected through machine learning based discovery vs. customer-provided session identifiers.</p><p>This suggests that nearly a quarter of APIs are “shadow APIs” and may not be properly inventoried and secured.</p>
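<p>The arithmetic behind "nearly a quarter" follows directly from the 33% median: if discovery finds 33% more endpoints than the known inventory, the unknown portion is 0.33 / 1.33 of the full estate. A two-line check (the inventory size of 100 is arbitrary):</p>

```javascript
// If ML-based discovery finds 33% more endpoints than the known inventory,
// the shadow share of the full, discovered estate is 0.33 / 1.33 ≈ 24.8%.
const known = 100;               // arbitrary illustrative inventory size
const discovered = known * 1.33; // discovery finds 33% more than known
const shadowShare = (discovered - known) / discovered;
console.log(shadowShare.toFixed(3)); // 0.248
```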
    <div>
      <h2>Client-side risks</h2>
      <a href="#client-side-risks">
        
      </a>
    </div>
<p>Most organizations’ web apps rely on separate programs or pieces of code from third-party providers (usually coded in JavaScript). The use of third-party scripts accelerates modern web app development and allows organizations to ship features to market faster, without having to build all new app features in-house.</p><p>Using Cloudflare's client-side security product, <a href="https://developers.cloudflare.com/page-shield/">Page Shield</a>, we can get a view on the popularity of third-party libraries used on the Internet and the risk they pose to organizations. This has become very relevant recently due to the <a href="http://polyfill.io">Polyfill.io incident</a> that affected more than one hundred thousand sites.</p>
    <div>
      <h3>Enterprise applications use 47 third-party scripts on average</h3>
      <a href="#enterprise-applications-use-47-third-party-scripts-on-average">
        
      </a>
    </div>
    <p>Cloudflare’s typical enterprise customer uses an average of 47 third-party scripts, and a median of 20 third-party scripts. The average is much higher than the median due to SaaS providers, who often have thousands of subdomains which may all use third-party scripts. Here are some of the top third-party script providers Cloudflare customers commonly use:</p><ul><li><p>Google (Tag Manager, Analytics, Ads, Translate, reCAPTCHA, YouTube)</p></li><li><p>Meta (Facebook Pixel, Instagram)</p></li><li><p>Cloudflare (Web Analytics)</p></li><li><p>jsDelivr</p></li><li><p>New Relic</p></li><li><p>Appcues</p></li><li><p>Microsoft (Clarity, Bing, LinkedIn)</p></li><li><p>jQuery</p></li><li><p>WordPress (Web Analytics, hosted plugins)</p></li><li><p>Pinterest</p></li><li><p>UNPKG</p></li><li><p>TikTok</p></li><li><p>Hotjar</p></li></ul><p>While useful, third-party software dependencies are often loaded directly by the end-user’s browser (i.e. they are loaded client-side) placing organizations and their customers at risk given that organizations have no direct control over third-party security measures. For example, in the retail sector, 18% of all data breaches <a href="https://www.verizon.com/business/resources/reports/dbir/">originate from Magecart style attacks</a>, according to Verizon’s 2024 Data Breach Investigations Report.</p>
    <div>
      <h3>Enterprise applications connect to nearly 50 third-parties on average</h3>
      <a href="#enterprise-applications-connect-to-nearly-50-third-parties-on-average">
        
      </a>
    </div>
    <p>Loading a third-party script into your website poses risks, even more so when that script “calls home” to submit data to perform the intended function. A typical example here is Google Analytics: whenever a user performs an action, the Google Analytics script will submit data back to the Google servers. We identify these as connections.</p><p>On average, each enterprise website connects to 50 separate third-party destinations, with a median of 15. Each of these connections also poses a potential client-side security risk as attackers will often use them to exfiltrate additional data going unnoticed.</p><p>Here are some of the top third-party connections Cloudflare customers commonly use:</p><ul><li><p>Google (Analytics, Ads)</p></li><li><p>Microsoft (Clarity, Bing, LinkedIn)</p></li><li><p>Meta (Facebook Pixel)</p></li><li><p>Hotjar</p></li><li><p>Kaspersky</p></li><li><p>Sentry</p></li><li><p>Criteo</p></li><li><p>tawk.to</p></li><li><p>OneTrust</p></li><li><p>New Relic</p></li><li><p>PayPal</p></li></ul>
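<p>One standard, browser-enforced mitigation for unexpected client-side connections is a Content-Security-Policy response header that allow-lists only the script sources and outbound destinations a page actually uses; anything else is blocked by the browser. The hostnames below are illustrative, not a recommended policy:</p>

```
Content-Security-Policy: script-src 'self' https://www.googletagmanager.com; connect-src 'self' https://www.google-analytics.com
```

<p>This complements monitoring: a tool like Page Shield tells you which scripts and connections exist, and a CSP constrains which ones are allowed to run or call home.</p>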
    <div>
      <h2>Looking forward</h2>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>This application security report is also <a href="https://www.cloudflare.com/2024-application-security-trends/">available in PDF format</a> with additional recommendations on how to address many of the concerns raised, along with additional insights.</p><p>We also publish many of our reports with dynamic charts on <a href="https://radar.cloudflare.com/reports">Cloudflare Radar</a>, making it an excellent resource to keep up to date with the state of the Internet.</p> ]]></content:encoded>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[API]]></category>
            <guid isPermaLink="false">78VdVl96em2bFvHmZ4jeHj</guid>
            <dc:creator>Michael Tremante</dc:creator>
            <dc:creator>Sabina Zejnilovic</dc:creator>
            <dc:creator>Catherine Newcomb</dc:creator>
        </item>
        <item>
            <title><![CDATA[Declare your AIndependence: block AI bots, scrapers and crawlers with a single click]]></title>
            <link>https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/</link>
            <pubDate>Wed, 03 Jul 2024 13:00:26 GMT</pubDate>
            <description><![CDATA[ To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/D59Fq5QkC4J7Jjo5lM4Fm/fcc55b665562d321bd84f88f53f46b22/image7-1.png" />
            
            </figure><p>To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI bots</a>. It’s available for all customers, including those on our free tier.</p><p>The popularity of <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/">generative AI</a> has made the demand for content used to train models or run inference on skyrocket, and, although some AI companies clearly identify their web scraping bots, not all AI companies are being transparent. Google reportedly <a href="https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/">paid $60 million a year</a> to license Reddit’s user generated content, and most recently, Perplexity has been <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">accused of impersonating legitimate visitors</a> in order to scrape content from websites. The value of original content in bulk has never been higher.</p><p>Last year, <a href="/ai-bots">Cloudflare announced the ability for customers to easily block AI bots</a> that behave well. These bots follow <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/">robots.txt</a>, and don’t use unlicensed content to train their models or run inference for <a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/">RAG</a> applications using website data. Even though these AI bots follow the rules, Cloudflare customers overwhelmingly opt to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">block them</a>.</p>
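<p>For site owners who prefer the robots.txt route, the crawlers named in this post can be declined explicitly; the user-agent tokens below are the ones these bots report (shown here as an illustrative sketch):</p>

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Bytespider
Disallow: /
```

<p>Of course, this only helps against crawlers that honor robots.txt; the one-click block described in this post does not rely on the bot's cooperation.</p>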
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5aAA77Hl9OM2vtI611QcUI/0992e096262e348b451efd8be296fa27/image9.png" />
            
            </figure><p>We hear clearly that customers don’t want AI bots visiting their websites, especially bots that do so dishonestly. To help, we’ve added a brand new one-click option to block all AI bots. It’s available for all customers, including those on the free tier. To enable it, simply navigate to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure">Security &gt; Bots</a> section of the Cloudflare dashboard and click the toggle labeled “AI Scrapers and Crawlers”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xD0lhy89vZb34dtIWukt1/3e0e1ed979a33d344e53d4da2a819e1e/image2.png" />
            
            </figure><p>This feature will automatically be updated over time as we see new fingerprints of offending bots we identify as widely scraping the web for model training. To ensure we have a comprehensive understanding of all AI crawler activity, we surveyed traffic across our network.</p>
    <div>
      <h3>AI bot activity today</h3>
      <a href="#ai-bot-activity-today">
        
      </a>
    </div>
    <p>The graph below illustrates the most popular AI bots seen on Cloudflare’s network in terms of their request volume. We looked at common AI crawler user agents and aggregated the number of requests they made on our platform over the last year:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13pNq4MJJB92Dcs1ghxC6k/b7e7acc7e65e9e0958eed5d4b4cb0594/image6.png" />
            
            </figure><p>When looking at the number of requests made to Cloudflare sites, we see that <i>Bytespider</i>, <i>Amazonbot</i>, <i>ClaudeBot</i>, and <i>GPTBot</i> are the top four AI crawlers. Operated by ByteDance, the Chinese company that owns TikTok, <i>Bytespider</i> is reportedly used to gather training data for its large language models (LLMs), including those that support its ChatGPT rival, Doubao. <i>Amazonbot</i> and <i>ClaudeBot</i> follow <i>Bytespider</i> in request volume. <i>Amazonbot</i>, reportedly used to index content for Alexa’s question-answering, sent the second-highest number of requests, and <i>ClaudeBot</i>, used to train the Claude chatbot, has recently increased in request volume.</p><p>Among the top AI bots that we see, <i>Bytespider</i> not only leads in terms of number of requests but also in both the extent of its Internet property crawling and the frequency with which it is blocked. Following closely is <i>GPTBot</i>, which ranks second in both crawling and being blocked. <i>GPTBot</i>, managed by OpenAI, collects training data for its LLMs, which underpin AI-driven products such as ChatGPT. In the table below, “Share of websites accessed” refers to the proportion of websites protected by Cloudflare that were accessed by the named AI bot.</p>
<table><thead>
  <tr>
    <th><span>AI Bot</span></th>
    <th><span>Share of Websites Accessed</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Bytespider</span></td>
    <td><span>40.40%</span></td>
  </tr>
  <tr>
    <td><span>GPTBot</span></td>
    <td><span>35.46%</span></td>
  </tr>
  <tr>
    <td><span>ClaudeBot</span></td>
    <td><span>11.17%</span></td>
  </tr>
  <tr>
    <td><span>ImagesiftBot</span></td>
    <td><span>8.75%</span></td>
  </tr>
  <tr>
    <td><span>CCBot</span></td>
    <td><span>2.14%</span></td>
  </tr>
  <tr>
    <td><span>ChatGPT-User</span></td>
    <td><span>1.84%</span></td>
  </tr>
  <tr>
    <td><span>omgili</span></td>
    <td><span>0.10%</span></td>
  </tr>
  <tr>
    <td><span>Diffbot</span></td>
    <td><span>0.08%</span></td>
  </tr>
  <tr>
    <td><span>Claude-Web</span></td>
    <td><span>0.04%</span></td>
  </tr>
  <tr>
    <td><span>PerplexityBot</span></td>
    <td><span>0.01%</span></td>
  </tr>
</tbody></table><p>While our analysis identified the most popular crawlers in terms of request volume and number of Internet properties accessed, many customers are likely not aware of the more popular AI crawlers actively crawling their sites. Our Radar team performed an analysis of the top robots.txt entries across the <a href="https://radar.cloudflare.com/domains">top 10,000 Internet domains</a> to identify the most commonly actioned AI bots, then looked at how frequently we saw these bots on sites protected by Cloudflare.</p><p>In the graph below, which looks at disallowed crawlers for these sites, we see that customers most often reference <i>GPTBot, CCBot</i>, and <i>Google</i> in robots.txt, but do not specifically disallow popular AI crawlers like <i>Bytespider</i> and <i>ClaudeBot</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6m4jV8g9sQ0BLR7OIonsoB/a4c3100a34160c96aea07c4ed4bc6a8d/image3.png" />
            
            </figure><p>With the Internet now flooded with these AI bots, we were curious to see how website operators have already responded. In June, AI bots accessed around 39% of the top one million Internet properties that use Cloudflare, but only 2.98% of these properties took measures to block or challenge those requests. Moreover, the higher-ranked (more popular) an Internet property is, the more likely it is to be targeted by AI bots, and correspondingly, the more likely it is to block such requests.</p>
<table><thead>
  <tr>
    <th><span>Top N Internet properties by number of visitors seen by Cloudflare</span></th>
    <th><span>% accessed by AI bots</span></th>
    <th><span>% blocking AI bots</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>10</span></td>
    <td><span>80.0%</span></td>
    <td><span>40.0%</span></td>
  </tr>
  <tr>
    <td><span>100</span></td>
    <td><span>63.0%</span></td>
    <td><span>16.0%</span></td>
  </tr>
  <tr>
    <td><span>1,000</span></td>
    <td><span>53.2%</span></td>
    <td><span>8.8%</span></td>
  </tr>
  <tr>
    <td><span>10,000</span></td>
    <td><span>47.99%</span></td>
    <td><span>8.92%</span></td>
  </tr>
  <tr>
    <td><span>100,000</span></td>
    <td><span>44.53%</span></td>
    <td><span>6.36%</span></td>
  </tr>
  <tr>
    <td><span>1,000,000</span></td>
    <td><span>38.73%</span></td>
    <td><span>2.98%</span></td>
  </tr>
</tbody></table>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gCWVJMv9GajRT3H8BQ5EM/effb5e2b52c0bdecb99f5f4e339c8d1d/image4.png" />
            
            </figure><p>We see website operators completely block access to these AI crawlers using robots.txt. However, these blocks rely on the bot operator honestly identifying itself when it visits an Internet property: it must respect robots.txt and adhere to <a href="https://www.rfc-editor.org/rfc/rfc9309.html#name-the-user-agent-line">RFC 9309</a> (ensuring that variations on the user agent all match the product token). User agents, though, are trivial for bot operators to change.</p>
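<p>As a quick sketch of how these directives are interpreted, Python’s standard-library <code>urllib.robotparser</code> can evaluate a hypothetical policy like the one below. The crawler names are real, but the file itself is invented for illustration:</p>

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt disallowing two AI crawlers by product token,
# while allowing everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/post"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/post"))  # True
```

<p>Of course, nothing here is enforced on the wire: a crawler that spoofs its user agent simply matches the permissive wildcard group instead.</p>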
    <div>
      <h3>How we find AI bots pretending to be real web browsers</h3>
      <a href="#how-we-find-ai-bots-pretending-to-be-real-web-browsers">
        
      </a>
    </div>
    <p>Sadly, we’ve observed bot operators attempting to appear as real browsers by using spoofed user agents. We’ve monitored this activity over time, and we’re proud to say that our global machine learning model has always recognized this activity as a bot, even when operators lie about their user agent.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JpBRAGuQ1DOCTSFu9yHbH/9c11b569a30f68ddb1b4c197054ed1c8/image1.png" />
            
            </figure><p>Take one example of a specific bot that <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">others</a> observed to be <a href="https://www.wired.com/story/perplexity-is-a-bullshit-machine/">hiding their activity</a>. We ran an analysis to see how our machine learning models scored traffic from this bot. In the diagram below, you can see that all <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot scores</a> are firmly below 30, indicating that our scoring thinks this activity is likely to be coming from a bot.</p><p>The diagram reflects scoring of the requests using <a href="/residential-proxy-bot-detection-using-machine-learning">our newest model</a>, where “hotter” colors indicate more requests falling in that band, and “cooler” colors meaning fewer requests did. We can see the vast majority of requests fell into the bottom two bands, showing that Cloudflare’s model gave the offending bot a score of 9 or less. The user agent changes have no effect on the score, because this is the very first thing we expect bot operators to do.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1y0G6D2b512V1UAgR6sooD/4cc9b659f091e84facbec66a30baafad/image5.png" />
            
            </figure><p>Any customer with an existing WAF rule set to challenge visitors with a bot score below 30 (our recommendation) automatically blocked all of this AI bot traffic with no new action on their part. The same will be true for future AI bots that use similar techniques to hide their activity.</p><p>We leverage Cloudflare global signals to calculate our Bot Score, which for AI bots like the one above, reflects that we correctly identify and score them as a “likely bot.”</p><p>When bad actors attempt to crawl websites at scale, they generally use tools and frameworks that we are able to fingerprint. For every fingerprint we see, we use Cloudflare’s network, which sees over 57 million requests per second on average, to understand how much we should trust this fingerprint. To power our models, we compute global aggregates across many signals. Based on these signals, our models were able to appropriately flag traffic from evasive AI bots, like the example mentioned above, as bots.</p><p>The upshot of this globally aggregated data is that we can immediately detect new scraping tools and their behavior without needing to manually fingerprint the bot, ensuring that customers stay protected from the newest waves of bot activity.</p><p>If you have a tip on an AI bot that’s not behaving, we’d love to investigate. There are two options you can use to report misbehaving AI crawlers:</p><p>1. Enterprise Bot Management customers can submit a False Negative <a href="https://developers.cloudflare.com/bots/concepts/feedback-loop/">Feedback Loop</a> report via Bot Analytics by simply selecting the segment of traffic where they noticed misbehavior:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iwX6mvdnqg3KGRN0dMHou/a7c0a39275680db58f49c9292ca180c7/image8.png" />
            
            </figure><p>2. We’ve also set up a <a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8/edit">reporting tool</a> where any Cloudflare customer can submit reports of an AI bot scraping their website without permission.</p><p>We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection. We will continue to keep watch, add more bot blocks to our AI Scrapers and Crawlers rule, and evolve our machine learning models to help keep the Internet a place where content creators can thrive and retain full control over which models their content is used to train or run inference on.</p> ]]></content:encoded>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">4iUvyS3jKebfV9pHwg7pol</guid>
            <dc:creator>Alex Bocharov</dc:creator>
            <dc:creator>Santiago Vargas</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Reid Tatoris</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
        </item>
        <item>
            <title><![CDATA[Using machine learning to detect bot attacks that leverage residential proxies]]></title>
            <link>https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/</link>
            <pubDate>Mon, 24 Jun 2024 13:00:17 GMT</pubDate>
            <description><![CDATA[ Cloudflare's Bot Management team has released a new Machine Learning model for bot detection (v8), focusing on bots and abuse from residential proxies ]]></description>
            <content:encoded><![CDATA[ <p>Bots using residential proxies are a major source of frustration for security engineers trying to fight online abuse. These engineers often see a similar pattern of abuse when well-funded, modern botnets target their applications. Advanced bots bypass country blocks, <a href="https://www.cloudflare.com/en-gb/learning/network-layer/what-is-an-autonomous-system/">ASN</a> blocks, and rate-limiting. Every time, the bot operator moves to a new IP address space until they blend in perfectly with the “good” traffic, mimicking real users’ behavior and request patterns. Our new Bot Management machine learning model (v8) identifies residential proxy abuse without resorting to IP blocking, which can cause false positives for legitimate users.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>One of the main sources of Cloudflare’s <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot score</a> is our bot detection machine learning model which analyzes, on average, over 46 million HTTP requests per second in real time. Since our first Bot Management ML model was released in 2019, we have continuously evolved and improved the model. Nowadays, our models leverage features based on request fingerprints, behavioral signals, and global statistics and trends that we see across our network.</p><p>Each iteration of the model focuses on certain areas of improvement. This process starts with a rigorous R&amp;D phase to identify the emerging patterns of <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot-attack/">bot attacks</a> by reviewing <a href="https://developers.cloudflare.com/bots/concepts/feedback-loop/">feedback from our customers</a> and reports of missed attacks. In v8, we mainly focused on two areas of abuse. First, we analyzed the campaigns that leverage residential IP proxies, which are proxies on residential networks commonly used to launch widely distributed attacks against high profile targets. In addition to that, we improved model accuracy for detecting attacks that originate from cloud providers.</p>
    <div>
      <h3>Residential IP proxies</h3>
      <a href="#residential-ip-proxies">
        
      </a>
    </div>
    <p>Proxies allow attackers to hide their identity and distribute their attack. Moreover, IP address rotation allows attackers to directly bypass traditional defenses such as IP reputation and IP rate limiting. Knowing this, defenders use a plethora of signals to identify malicious use of proxies. In their simplest form, IP reputation signals (e.g., data center IP addresses, known open proxies, etc.) can lead to the detection of such distributed attacks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Oob9FiycD6yIfYf8Xtdny/06e74f2dd0032ea4610dcab55b0ec38a/residential-proxy-architecture.jpg" />
            
            </figure><p>Figure 1: Architecture of a residential proxy network</p><p>Figure 1 depicts the architecture of a residential proxy. By subscribing to these services, attackers gain access to an authenticated proxy gateway address commonly using the HTTPS/<a href="https://datatracker.ietf.org/doc/html/rfc1928">SOCKS5</a> proxy protocol. Some residential proxy providers allow their users to select the country or region for the proxy exit nodes. Alternatively, users can choose to keep the same IP address throughout their session or rotate to a new one for each outgoing request. Residential proxy providers then identify active exit nodes on their network (on devices that they control within residential networks across the world) and route the proxied traffic through them.</p><p>The large pool of IP addresses and the diversity of networks poses a challenge to traditional bot defense mechanisms that rely on IP reputation and rate limiting. Moreover, the diversity of IPs enables the attackers to rotate through them indefinitely. This shrinks the window of opportunity for bot detection systems to effectively detect and stop the attacks. Effective defense against residential proxy attacks should be able to detect this type of bot traffic either based on single request features to stop the attack immediately, or identify unique fingerprints from the browsing agent to track and mitigate the bot traffic regardless of the IP source. Overly broad blocking actions, such as IP block-listing, by definition, would result in blocking legitimate traffic from residential networks where at least one device is acting as a residential proxy node.</p>
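<p>From the customer’s side of such a service, the gateway is just an authenticated HTTP proxy. A minimal sketch with Python’s standard library, using an entirely made-up gateway address and credentials, shows how little is involved before the provider starts routing requests through residential exit nodes:</p>

```python
import urllib.request

# Hypothetical gateway address and credentials, for illustration only;
# real residential proxy services hand out per-customer endpoints.
GATEWAY = "http://customer-123:secret@gateway.proxy-provider.example:8080"

# Route all HTTP(S) traffic through the proxy gateway. The provider's
# infrastructure then forwards each request via a residential exit node.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": GATEWAY, "https": GATEWAY})
)

# Depending on the rotation policy the customer selects, each call may
# leave the provider's network from a different residential IP.
# opener.open("https://target.example/login")  # not executed in this sketch
```

<p>Everything that makes the traffic hard to attribute (node selection, rotation, session stickiness) happens inside the provider’s network, invisible to the client.</p>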
    <div>
      <h3>ML model training</h3>
      <a href="#ml-model-training">
        
      </a>
    </div>
    <p>At its heart, our model is built using a chain of modules that work together. Initially, we fetch and prepare training and validation datasets from our Clickhouse data storage. We use datasets with high confidence labels as part of our training. For model validation, we use datasets consisting of missed attacks reported by our customers, known sources of bot traffic (e.g., <a href="https://developers.cloudflare.com/bots/reference/verified-bots-policy/">verified bots</a>), and high confidence detections from other bot management modules (e.g., heuristics engine). We orchestrate these steps using Apache Airflow, which enables us to customize each stage of the ML model training and define the interdependencies of our training, validation, and reporting modules in the form of directed acyclic graphs (DAGs).</p><p>The first step of training a new model is fetching labeled training data from our data store. Under the hood, our dataset definitions are SQL queries that will materialize by fetching data from our Clickhouse cluster where we store feature values and calculate aggregates from the traffic on our network. Figure 2 depicts these steps as train and validation dataset fetch operations. Introducing new datasets can be as straightforward as writing the SQL queries to filter the desired subset of requests.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xmgPc689UWAHZi29CfIrp/c0d5dca3b3a497423b00c9456ea5dc7c/airflow-dag.jpg" />
            
            </figure><p>Figure 2: Airflow DAG for model training and validation</p><p>After fetching the datasets, we train our <a href="https://catboost.ai/">Catboost model</a> and tune its <a href="https://catboost.ai/en/docs/references/training-parameters/">hyper parameters</a>. During evaluation, we compare the performance of the newly trained model against the current default version running for our customers. To capture the intricate patterns in subsets of our data, we split certain validation datasets into smaller slivers called specializations. For instance, we use the detections made by our heuristics engine and managed rulesets as ground truth for bot traffic. To ensure that larger sources of traffic (large <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">ASNs</a>, different HTTP versions, etc.) do not mask our visibility into patterns for the rest of the traffic, we define specializations for these sources of traffic. As a result, improvements in accuracy of the new model can be evaluated for common patterns (e.g., HTTP/1.1 and HTTP/2) as well as less common ones. Our model training DAG will provide a breakdown report for the accuracy, score distribution, feature importance, and <a href="https://shap.readthedocs.io/en/latest/generated/shap.Explainer.html">SHAP explainers</a> for each validation dataset and its specializations.</p><p>Once we are happy with the validation results and model accuracy, we evaluate our model against a checklist of steps to ensure the correctness and validity of our model. We start by ensuring that our results and observations are reproducible over multiple non-overlapping training and validation time ranges. 
Moreover, we check for the following factors:</p><ul><li><p>Check for the distribution of feature values to identify irregularities such as missing or skewed values.</p></li><li><p>Check for overlaps between training and validation datasets and feature values.</p></li><li><p>Verify the diversity of training data and the balance between labels and datasets.</p></li><li><p>Evaluate performance changes in the accuracy of the model on validation datasets based on their order of importance.</p></li><li><p>Check for model overfitting by evaluating the feature importance and SHAP explainers.</p></li></ul><p>After the model passes the readiness checks, we deploy it in shadow mode. We can observe the behavior of the model on live traffic in log-only mode (i.e., without affecting the <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot score</a>). After gaining confidence in the model's performance on live traffic, we start onboarding beta customers, and gradually switch the model to active mode all while closely <a href="/monitoring-machine-learning-models-for-bot-detection">monitoring the real-world performance of our new model</a>.</p>
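<p>The per-slice evaluation described above can be sketched as grouping a labeled validation set by some key (HTTP version, ASN, and so on) and scoring each slice separately. This is an illustrative reconstruction; the field names and data are invented, not Cloudflare’s internal schema:</p>

```python
from collections import defaultdict

def accuracy_by_specialization(examples, key):
    """Compute per-specialization accuracy over a validation set.

    `examples` is a list of dicts with `label` (ground truth), `pred`
    (model output), and whatever metadata `key` extracts.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for ex in examples:
        k = key(ex)
        totals[k] += 1
        hits[k] += int(ex["pred"] == ex["label"])
    return {k: hits[k] / totals[k] for k in totals}

validation = [
    {"label": "bot", "pred": "bot", "http_version": "HTTP/1.1"},
    {"label": "bot", "pred": "human", "http_version": "HTTP/1.1"},
    {"label": "bot", "pred": "bot", "http_version": "HTTP/2"},
    {"label": "human", "pred": "human", "http_version": "HTTP/2"},
]

# Per-protocol slices keep a regression on one protocol visible even
# when the other protocol dominates the aggregate accuracy number.
print(accuracy_by_specialization(validation, key=lambda ex: ex["http_version"]))
# {'HTTP/1.1': 0.5, 'HTTP/2': 1.0}
```
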
    <div>
      <h3>ML features for bot detection</h3>
      <a href="#ml-features-for-bot-detection">
        
      </a>
    </div>
    <p>Each of our models uses a set of features to make inferences about the incoming requests. We compute our features based on single request properties (single request features) and patterns from multiple requests (i.e., inter-request features). We can categorize these features into the following groups:</p><ul><li><p><b>Global features:</b> inter-request features that are computed based on global aggregates for different types of fingerprints and traffic sources (e.g., for an ASN) seen across our global network. Given the relatively lower cardinality of these features, we can scalably calculate global aggregates for each of them.</p></li><li><p><b>High cardinality features:</b> inter-request features focused on fine-grained aggregate data from local traffic patterns and behaviors (e.g., for an individual IP address)</p></li><li><p><b>Single request features:</b> features derived from each individual request (e.g., user agent).</p></li></ul><p>Our Bot Management system (named <a href="/scalable-machine-learning-at-cloudflare">BLISS</a>) is responsible for fetching and computing these feature values and making them available on our servers for inference by active versions of our ML models.</p>
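<p>To make the distinction concrete, here is a toy illustration of a global (low-cardinality) aggregate alongside a single-request feature. The request log and field names are invented for the example:</p>

```python
from collections import Counter

# Toy request log; the fields are illustrative, not the real schema.
requests = [
    {"asn": 13335, "fingerprint": "fp_a", "user_agent": "Mozilla/5.0"},
    {"asn": 13335, "fingerprint": "fp_a", "user_agent": "curl/8.5"},
    {"asn": 64496, "fingerprint": "fp_b", "user_agent": "Mozilla/5.0"},
]

# Global feature: request volume per fingerprint across all traffic
# (low cardinality, so it is cheap to aggregate network-wide).
global_fp_volume = Counter(r["fingerprint"] for r in requests)

# Single-request feature: derived from one request in isolation.
def ua_is_browser_like(request):
    return request["user_agent"].startswith("Mozilla/")

features = [
    {
        "fp_volume": global_fp_volume[r["fingerprint"]],
        "browser_like_ua": ua_is_browser_like(r),
    }
    for r in requests
]
print(features[0])  # {'fp_volume': 2, 'browser_like_ua': True}
```

<p>High-cardinality features would follow the same shape but be keyed on something like the individual IP address, which is why they need a separate, more local aggregation path.</p>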
    <div>
      <h2>Detecting residential proxies using network and behavioral signals</h2>
      <a href="#detecting-residential-proxies-using-network-and-behavioral-signals">
        
      </a>
    </div>
    <p>Attacks originating from residential IP addresses are commonly characterized by a spike in the overall traffic towards sensitive endpoints on the target websites from a large number of residential ASNs. Our approach for detecting residential IP proxies is twofold. First, we start by comparing direct vs proxied requests and looking for network level discrepancies. Revisiting Figure 1, we notice that a request routed through residential proxies (red dotted line) has to traverse through multiple hops before reaching the target, which affects the network latency of the request.</p><p>Based on this observation alone, we are able to characterize residential proxy traffic with a high true positive rate (i.e., all residential proxy requests have high network latency). While we were able to replicate this in our lab environment, we quickly realized that at the scale of the Internet, we run into numerous exceptions with false positive detections (i.e., non-residential proxy traffic with high latency). For instance, countries and regions that predominantly use satellite Internet would exhibit a high network latency for the majority of their requests due to the use of <a href="https://datatracker.ietf.org/doc/html/rfc3135">performance enhancing proxies</a>.</p><p>Realizing that relying solely on network characteristics of connections to detect residential proxies is inadequate given the diversity of the connections on the Internet, we switched our focus to the behavior of residential IPs. To that end, we observe that the IP addresses from residential proxies express a distinct behavior during periods of peak activity. While this observation singles out highly active IPs over their peak activity time, given the pool size of residential IPs, it is not uncommon to only observe a small number of requests from the majority of residential proxy IPs.</p><p>These periods of inactivity can be attributed to the temporary nature of residential proxy exit nodes. 
For instance, when the client software (i.e., browser or mobile application) that runs the exit nodes of these proxies is closed, the node leaves the residential proxy network. One way to filter out periods of inactivity is to increase the monitoring time and punish each IP address that exhibits residential proxy behavior for a period of time. This block-listing approach, however, has certain limitations. Most importantly, by relying only on IP-based behavioral signals, we would block traffic from legitimate users that may unknowingly run mobile applications or browser extensions that turn their devices into proxies. This is further detrimental for mobile networks where many users share their IPs behind <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT">CGNATs</a>. Figure 3 demonstrates this by comparing the share of direct vs proxied requests that we received from active residential proxy IPs over a 24-hour period. Overall, we see that 4 out of 5 requests from these networks belong to direct and benign connections from residential devices.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vdkbz94Am2EHlL7tu9t3t/3256ae94bfee66c60ad71c6a6dbc7fc0/mlv8-blog-proxy-vs-direct.jpg" />
            
            </figure><p>Figure 3: Percentage of direct vs proxied requests from residential proxy IPs.</p><p>Using this insight, we combined behavioral and latency-based features along with new datasets to train a new <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning model</a> that detects residential proxy traffic on a per-request basis. This scheme allows us to block residential proxy traffic while allowing benign residential users to visit Cloudflare-protected websites from the same residential network.</p>
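<p>A deliberately oversimplified sketch of that combination, scored per request rather than per IP, might look like the following. The thresholds and weights are invented for illustration and bear no relation to the production model:</p>

```python
# Combine a latency signal with a behavioral signal per request; both
# cutoffs below are made up for the sake of the example.
def proxy_likelihood(rtt_ms, ip_reqs_last_min, regional_median_rtt_ms):
    # Latency signal: compare against a regional baseline rather than a
    # fixed cutoff, so satellite-heavy regions don't all look "proxied".
    latency_signal = max(
        0.0, (rtt_ms - regional_median_rtt_ms) / regional_median_rtt_ms
    )

    # Behavioral signal: burstiness of the source IP at peak activity.
    behavior_signal = min(1.0, ip_reqs_last_min / 100.0)

    # Require both signals; either one alone produces false positives.
    return min(1.0, latency_signal) * behavior_signal

# Direct residential user: baseline latency, low volume -> near zero.
print(proxy_likelihood(rtt_ms=60, ip_reqs_last_min=2, regional_median_rtt_ms=55))
# Proxied burst: inflated latency and a high request rate -> high score.
print(proxy_likelihood(rtt_ms=180, ip_reqs_last_min=90, regional_median_rtt_ms=55))
```

<p>The point of multiplying the signals is that neither fires alone: a satellite user has high latency but no burst, and a busy CGNAT IP has bursts but baseline latency, so both score near zero.</p>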
    <div>
      <h2>Detection results and case studies</h2>
      <a href="#detection-results-and-case-studies">
        
      </a>
    </div>
    <p>We started testing v8 in shadow mode in March 2024. Every hour, v8 classifies more than 17 million unique IPs that participate in residential proxy attacks. Figure 4 shows the geographic distribution of IPs with residential proxy activity, belonging to more than 45 thousand ASNs in 237 countries/regions. Among the most commonly requested endpoints from residential proxies, we observe patterns of account takeover attempts, such as requests to /login, /auth/login, and /api/login.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gGGhqngSzEkThZUoV3Fiw/d807f5456e3277e4aa24972588c59cf7/mlv8-blog-map-1.jpg" />
            
            </figure><p>Figure 4: Countries and regions with residential network activity. Marker size is proportional to the number of IPs with residential proxy activity.</p><p>Furthermore, we see significant improvements when evaluating our new machine learning model on previously missed attacks reported by our customers. In one case, v8 correctly classified 95% of requests from distributed residential proxy attacks targeting the voucher redemption endpoint of the customer’s website. In another case, our new model successfully detected a previously missed <a href="https://www.cloudflare.com/learning/bots/what-is-content-scraping/">content scraping attack</a>, as evidenced by increased detection during the traffic spikes depicted in Figure 5. We are continuing to monitor the behavior of residential proxy attacks in the wild and to work with our customers to ensure that we can provide robust detection against these distributed attacks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1jstvGe6HudxtAS54UJPW6/9bde592cb0efc64fb7a86ba33bb25047/shadowmode-v8.jpg" />
            
            </figure><p>Figure 5: Spikes in bot requests from residential proxies detected by ML v8</p>
    <div>
      <h2>Improving detection for bots from cloud providers</h2>
      <a href="#improving-detection-for-bots-from-cloud-providers">
        
      </a>
    </div>
    <p>In addition to residential IP proxies, bot operators commonly use cloud providers to host and run bot scripts that attack our customers. To combat these attacks, we improved our ground truth labels for cloud provider attacks in our latest ML training datasets. Early results show that v8 detects 20% more bots from cloud providers, with up to 70% more bots detected on zones that are marked as <a href="https://developers.cloudflare.com/fundamentals/reference/under-attack-mode/">under attack</a>. We further plan to expand the list of cloud providers that v8 detects as part of our ongoing updates.</p>
    <div>
      <h2>Check out ML v8</h2>
      <a href="#check-out-ml-v8">
        
      </a>
    </div>
    <p>For existing Bot Management customers we recommend <a href="https://developers.cloudflare.com/bots/reference/machine-learning-models/#enable-auto-updates-to-the-machine-learning-models">toggling “Auto-update machine learning model”</a> to instantly gain the benefits of ML v8 and its residential proxy detection, and to stay up to date with our future ML model updates. If you’re not a Cloudflare Bot Management customer, <a href="https://www.cloudflare.com/application-services/products/bot-management/">contact our sales team</a> to try out <a href="https://www.cloudflare.com/application-services/products/bot-management/">Bot Management</a>.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">2EZrHNgKqLkaTGqoRM9pMS</guid>
            <dc:creator>Bob AminAzad</dc:creator>
            <dc:creator>Santiago Vargas</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
        </item>
    </channel>
</rss>