
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how Cloudflare products are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 21:43:16 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Announcing the public launch of Cloudflare's bug bounty program]]></title>
            <link>https://blog.cloudflare.com/cloudflare-bug-bounty-program/</link>
            <pubDate>Tue, 01 Feb 2022 17:28:25 GMT</pubDate>
            <description><![CDATA[ Today we are launching Cloudflare’s paid public bug bounty program. We believe bug bounties are a vital part of every security team’s toolbox. ]]></description>
            <content:encoded><![CDATA[ <p>Today we are launching Cloudflare’s paid public bug bounty program. We believe bug bounties are a vital part of every security team’s toolbox and have been working hard on improving and expanding our private bug bounty program over the last few years. The first iteration of our bug bounty was a pure vulnerability disclosure program without cash bounties. In 2018, we added a private bounty program and are now taking the next step to a public program.</p><p>Starting today, anyone can report vulnerabilities related to any Cloudflare product to our <a href="https://hackerone.com/cloudflare">public bug bounty program</a>, hosted on HackerOne’s platform.</p><p>Let's walk through our journey so far.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IXWfOzxYVXqsvYi4r1VVe/091be47eba4f4061af6731a968337140/image5.png" />
            
            </figure>
    <div>
      <h3>Step 1: starting a vulnerability disclosure program</h3>
      <a href="#step-1-starting-a-vulnerability-disclosure-program">
        
      </a>
    </div>
    <p>In 2014, when the company had fewer than 100 employees, we created a responsible disclosure policy to provide a safe place for security researchers to submit potential vulnerabilities to our security team, with some established rules of engagement. A vulnerability disclosure policy is an important first step for a company to take because it is an invitation to researchers to look at company assets without fear of repercussions, provided the researchers follow certain guidelines intended to protect everyone involved. We still stand by that policy and welcome reports related to all of our services through that <a href="https://hackerone.com/cloudflare?type=team">program</a>.</p><p>Over the years, we received many great reports through that program that led to improvements in our products. However, one early challenge we faced was that researchers struggled to understand our infrastructure and products. Unlike most of the public programs of that era, our services were not made up primarily of public-facing web applications or even mobile applications; our products were primarily <a href="https://www.cloudflare.com/learning/network-layer/network-security/">network security</a> and performance solutions that operated as a proxy layer in front of customer resources.</p><p>Understanding where Cloudflare fits into the HTTP request/response pipeline can get very challenging with multiple products enabled. And because we did not provide much supporting documentation about how our products worked, and had scoped the program to broadly encompass everything, we left researchers to figure out our complicated products on their own. As a result, most of the reports we received over those early years came from people who saw something that seemed atypical to them, but that, in our view, was not actually a vulnerability in need of repair. 
We dedicated a tremendous amount of time to triaging false positive reports and helping the researchers understand their errors.</p><p>Lesson #1 from that experience: we needed to provide much more detail about our products, so researchers could understand how to dig into our products and identify true vulnerabilities. For example, when a zone is being onboarded to Cloudflare, even before ownership is determined, Cloudflare will display the DNS records for the zone. These DNS records are public information, but researchers have filed reports claiming that this is an information leakage issue. The same results can be obtained with open-source tools. This does not affect existing Cloudflare zones since Cloudflare protects the actual origin IPs from being leaked.</p><p>We see the same types of issues come up regularly with the platforms that some companies use to assess the security of their vendors. Off-the-shelf scanners will inaccurately detect vulnerabilities in our platforms because our services not only sit in front of our environment, but the environments of many thousands of customers of all shapes and sizes. We encourage researchers not to use those tools and instead learn more about how our services work.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1WbpfWCedyyGZUIBsvEuDX/63fdbcd3f4a33ec7c5f4f6356f3daf0d/image3.png" />
            
            </figure><p>Leaving researchers to their own devices, and failing to invest in helping them understand how our products worked, led to a poor signal-to-noise ratio. At the time of writing, 1,197 reports have been submitted to our vulnerability disclosure program. Of those, only 158 resulted in points going to the researcher for a valid report. With only 13% of reports being valid, we needed to invest in improving the signal-to-noise ratio by helping our researchers before we could expand our program.</p>
    <div>
      <h3>Rewards: T-shirts?</h3>
      <a href="#rewards-t-shirts">
        
      </a>
    </div>
    <p>Early on, we rewarded researchers with a unique “Cloudflare bug hunter” T-shirt after we validated a vulnerability report. We thought it was a nice gesture at a low cost for people who found security bugs. In practice, when we factored in shipping issues, passing through customs, and wrong sizes, it was a nightmare. Shipping turned out to be such a challenge that we sometimes resorted to hand-delivering T-shirts to researchers when attending conferences. It was nice to meet the researchers in person, but not a scalable solution!</p>
    <div>
      <h3>Step 2: private bounty program</h3>
      <a href="#step-2-private-bounty-program">
        
      </a>
    </div>
    <p>We have always felt that rewarding good security research deserved more than a T-shirt. We also believed that financially supporting researchers would incentivize higher quality reports and deeper security research.</p><blockquote><p>Righhhhhhht. I think most of us would rather have the cash.</p><p>— Mrs. Y. (@MrsYisWhy) <a href="https://twitter.com/MrsYisWhy/status/835843779462639616?ref_src=twsrc%5Etfw">February 26, 2017</a></p></blockquote><p>In order to learn the ropes of operating a paid bounty, in 2018 we opened a private bug bounty program and spent the past few years optimizing it.</p><p>Our end goal has always been to reach a level of maturity that would allow us to operate a paid public program. To reach that goal we needed to learn how to best support the researchers and improve the signal-to-noise ratio of reports, while building our internal processes to track and remediate a stream of reported vulnerabilities with our engineering teams.</p><p>When we launched the private bug bounty, we included all Cloudflare products as eligible for rewards, and by mid-January 2022 we had paid out $211,512 in bounties. We started the program by inviting a few researchers and slowly added more over time. This helped us fine-tune our policies and documentation and create a more scalable vulnerability management process internally. Our most prolific participant has earned $54,800 in bounty rewards, and the signal-to-noise ratio has improved to 68%, with 292 of our 430 total reports receiving a reward.</p><p>The success of our private program has largely been the result of consistent effort from the members of our Product Security team, both to improve our internal handling of issues and to improve the researcher experience. All bug bounty reports are triaged and validated by members of the security team along with some initial support from HackerOne. 
Once triaged, the security issues flow through our vulnerability management program, where unique issues are tracked in a single ticket that can be shared outside the security team. To support a scaling program and company, automation was baked into this process and integrations were implemented with our ticketing system.</p><p>In this example of a valid report, a ticket was filed containing all the information the reporter provided to HackerOne. Once a report is acknowledged and a VULN ticket is filed, we pay out the security researcher to ensure they receive the reward in a timely fashion.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/EBzAyBVBpN4CDZ9TFiCE7/95e5401bb78da9be04dd264ffe909990/image4.png" />
            
            </figure><p>Each ticket is assigned to an Engineering Owner and Security Owner who share the responsibility for remediating the vulnerability. Early on in the process, a service-level agreement (SLA) and remediation timeline are determined by the severity of the issue. If the bug is determined to be of critical severity, it's all hands on deck to fix the issue.</p><p>After initial assignment and SLA determination, the open tickets are reviewed weekly by both Engineering and Security to ensure that problems are being addressed in line with the SLA.</p><p>We’ve been working hard to improve the researcher experience. We’ve seen that immediately paying researchers led to a huge improvement in satisfaction compared to waiting weeks or months for a T-shirt. Likewise, it was even more frustrating for security researchers to work hard on an issue only to find out it was out of scope. To address this, we constantly update our scope section as we get more out-of-scope reports. Our policy page is now much clearer. We also have treasure maps for some products pointing at major risk areas, and even put together a test site where researchers can test theories.</p><p>Ultimately, the success of our private bug bounty came down to the researchers who put in the effort to look for issues. Cloudflare thanks all 419 researchers who have participated in our bug bounty program so far, with a special shout-out to the top 10 researchers in the program:</p><ul><li><p>zeroxyele</p></li><li><p>esswhy</p></li><li><p>turla</p></li><li><p>ginkoid</p></li><li><p>albertspedersen</p></li><li><p>ryotak</p></li><li><p>base_64</p></li><li><p>dward84</p></li><li><p>ninetynine</p></li><li><p>albinowax</p></li></ul><p>Here’s how our total bounty amounts grew as we improved our program:</p><ul><li><p>2018 - $4,500</p></li><li><p>2019 - $25,425</p></li><li><p>2020 - $78,877</p></li><li><p>2021 - $101,075</p></li></ul><p>The current breakdown of bounty awards for primary targets based on issue severity is listed below. (All amounts are in USD.)</p><table><tr><td><p><b>Severity</b></p></td><td><p><b>Bounty</b></p></td></tr><tr><td><p>Critical</p></td><td><p>$3,000</p></td></tr><tr><td><p>High</p></td><td><p>$1,000</p></td></tr><tr><td><p>Medium</p></td><td><p>$500</p></td></tr><tr><td><p>Low</p></td><td><p>$250</p></td></tr></table>
    <div>
      <h3>Lesson learned: making it easier for researchers with Cloudflare’s testing sandbox</h3>
      <a href="#lesson-learned-making-it-easier-for-researchers-with-cloudflares-testing-sandbox">
        
      </a>
    </div>
    <p>Because of how our services work, you need to have our products deployed in a test environment in order to explore their capabilities and limitations. And many of those products offer features that are not free to use. So, to make vulnerability research more accessible, we created <a href="https://cumulusfire.net/">CumulusFire</a> to showcase Cloudflare features that typically require a paid level of service. We created this site for two reasons: to provide a standardized playground where researchers can test their exploits, and to make it easier for our teams to reproduce them while triaging.</p><p>CumulusFire has already helped us address the constant trickle of reports in which researchers would configure their origin server in an obviously insecure way, beyond default or expected settings, and then report that Cloudflare’s WAF does not block an attack. By policy, we will now only consider a WAF bypass a vulnerability if it is reproducible on CumulusFire.</p><p>As we expand our public program we will add additional services to the testing playground. Since we love dogfooding our own products, the entire sandbox is built on Cloudflare Workers.</p>
    <div>
      <h3>Next steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>Just as we grew our private program, we will continue to evolve our public bug bounty program to provide the best experience for researchers. We aim to add more documentation, testing platforms and a way to interact with our security teams so that researchers can be confident that their submissions represent valid security issues.</p><p>Look forward to us sharing more of our learnings as we grow the program.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5OHiIcLro7I0Iq7eqLlYxc/63991f09db8075a4c4c6437e18512a14/image1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <guid isPermaLink="false">7e4fyGKOxRTzl0tBCat0hg</guid>
            <dc:creator>Rushil Shah</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare security responded to Log4j 2 vulnerability]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-security-responded-to-log4j2-vulnerability/</link>
            <pubDate>Fri, 10 Dec 2021 23:39:00 GMT</pubDate>
            <description><![CDATA[ Yesterday, December 9, 2021, when a serious vulnerability in the popular Java-based logging package log4j was publicly disclosed, our security teams jumped into action to protect our customers’ infrastructure and to secure our own environment. This post explores the latter. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, when we learn about a new security vulnerability, we quickly bring together teams to answer two distinct questions: (1) what can we do to ensure our customers’ infrastructures are protected, and (2) what can we do to ensure that our own environment is secure. Yesterday, December 9, 2021, when a serious vulnerability in the popular Java-based logging package <a href="https://logging.apache.org/log4j/2.x/index.html">Log4j</a> was publicly disclosed, our security teams jumped into action on both questions. This post explores the second.</p><p>We cover the details of how this vulnerability works in a separate blog post: <a href="/inside-the-log4j2-vulnerability-cve-2021-44228/">Inside the Log4j2 vulnerability (CVE-2021-44228)</a>, but in summary, this vulnerability allows an attacker to execute code on a remote server. Because of the widespread use of Java and Log4j, this is likely one of the most serious vulnerabilities on the Internet since both <a href="/searching-for-the-prime-suspect-how-heartbleed-leaked-private-keys/">Heartbleed</a> and <a href="/inside-shellshock/">ShellShock</a>. The vulnerability is listed as <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44228">CVE-2021-44228</a>. The CVE description states that the vulnerability affects Log4j2 &lt;=2.14.1 and is patched in 2.15. The vulnerability additionally <a href="https://github.com/apache/logging-log4j2/pull/608#issuecomment-990494126">impacts all versions of log4j 1.x</a>; however, the 1.x branch is End of Life and has other security vulnerabilities that will not be fixed. Upgrading to 2.15 is the recommended action to take. You can also read about how we updated our WAF rules to help protect our customers in this post: <a href="/cve-2021-44228-log4j-rce-0-day-mitigation/">CVE-2021-44228 - Log4j RCE 0-day mitigation</a>.</p>
    <div>
      <h3>Timeline</h3>
      <a href="#timeline">
        
      </a>
    </div>
    <p>One of the first things we do whenever we respond to an incident is start drafting a timeline of events we need to review and understand within the context of the situation. Some examples from our timeline here include:</p><ul><li><p>2021-12-09 16:57 UTC - HackerOne report received regarding Log4j RCE on developers.cloudflare.com</p></li><li><p>2021-12-10 09:56 UTC - First WAF rule shipped to Cloudflare Specials ruleset</p></li><li><p>2021-12-10 10:00 UTC - Formal engineering INCIDENT is opened and work begins to identify areas we need to patch Log4j</p></li><li><p>2021-12-10 10:33 UTC - Logstash deployed with patch to mitigate vulnerability</p></li><li><p>2021-12-10 10:44 UTC - Second WAF rule is live as part of Cloudflare managed rules</p></li><li><p>2021-12-10 10:50 UTC - ElasticSearch restart begins with patch to mitigate vulnerability</p></li><li><p>2021-12-10 11:05 UTC - ElasticSearch restart concludes and is no longer vulnerable</p></li><li><p>2021-12-10 11:45 UTC - Bitbucket is patched and no longer vulnerable</p></li><li><p>2021-12-10 21:22 UTC - HackerOne report closed as Informative after it was unable to be reproduced</p></li></ul>
    <div>
      <h3>Addressing internal impact</h3>
      <a href="#addressing-internal-impact">
        
      </a>
    </div>
    <p>An important question when dealing with any software vulnerability, and perhaps the hardest question for every company to answer in this particular case, is: where are all the places the vulnerable software is actually running?</p><p>If the vulnerability is in a proprietary piece of software licensed by one company to the rest of the world, that is easy to answer: you find that one piece of software. But in this case that was much harder. Log4j is a widely used piece of software, but not one that people who are not Java developers are likely to be familiar with. Our first action was to refamiliarize ourselves with all places in our infrastructure where we were running software on the JVM, in order to determine which software components could be vulnerable to this issue.</p><p>We were able to create an inventory of all software we have running on the JVM using our centralized code repositories. We used this information to determine, for each individual Java application we had, whether it contained Log4j and which version of Log4j was compiled into it.</p><p>We discovered that our ElasticSearch, Logstash, and Bitbucket deployments contained instances of the vulnerable Log4j package at versions between 2.0 and 2.14.1. We were able to use the mitigation strategies described in the official Log4j security documentation to patch the issue. For each instance of Log4j, we either removed the JndiLookup class from the classpath:</p><p><code>zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class</code></p><p>Or we set the mitigating system property in the Log4j configuration:</p><p><code>log4j2.formatMsgNoLookups=true</code></p><p>These strategies allowed us to quickly mitigate the issue in these packages while waiting for new versions to be released.</p>
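To make the classpath check concrete, here is a small sketch (the directory path and output format are illustrative, not Cloudflare's actual tooling) that flags JARs still bundling the JndiLookup class targeted by the mitigation above:

```shell
# Sketch: report any JAR under a directory that still bundles the
# JndiLookup class removed by the mitigation above.
# The directory argument passed at the bottom is a hypothetical example.
scan_jars() {
  for jar in "$1"/*.jar; do
    # `unzip -l` lists the archive contents without extracting them.
    if unzip -l "$jar" 2>/dev/null \
        | grep -q 'org/apache/logging/log4j/core/lookup/JndiLookup.class'; then
      echo "still vulnerable: $jar"
    fi
  done
}

scan_jars /opt/app/lib
```

Note that this only finds directly bundled copies; shaded or nested ("fat JAR") packaging needs a recursive scan.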
    <div>
      <h3>Reviewing External Reports</h3>
      <a href="#reviewing-external-reports">
        
      </a>
    </div>
    <p>Even before we finished the list of internal places where the vulnerable software was running, we started looking at external reports: submissions to our HackerOne bug bounty program and a public post on GitHub suggesting that we might be at risk.</p><p>We identified at least two reports that seemed to indicate that Cloudflare was vulnerable and had been compromised. One of the reports included the following screenshot:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2qkLHVM69u2bAYEKAWf0UV/34d05486738526729369652f219e4aeb/65876613-4802-4014-B265-A28C1B807847.png" />
            
            </figure><p>This example is targeting our developer documentation hosted at <a href="https://developer.cloudflare.com/">https://developer.cloudflare.com</a>. On the right-hand side, the attacker demonstrates that a DNS query was received for the payload they sent to our server. However, the IP address flagged here is <code>173.194.95.134</code>, a member of a Google-owned IPv4 subnet (<code>173.194.94.0/23</code>).</p><p>Cloudflare’s developer documentation is hosted as a Cloudflare Worker and only serves static assets. The repository is <a href="https://github.com/cloudflare/cloudflare-docs">public</a>. The Worker relies on Google’s analytics library as seen <a href="https://github.com/cloudflare/cloudflare-docs/blob/production/developers.cloudflare.com/workers-site/index.js#L48">here</a>; we therefore hypothesized that the request the attacker observed came not from Cloudflare but from Google's servers.</p><p>Our backend servers receive logging from Workers, but exploitation was also not possible in this instance, as we leverage robust Kubernetes egress policies that prevent calling out to the Internet. The only communication allowed is to a curated set of internal services.</p><p>While we were gathering more information, we received a similar report in our vulnerability disclosure program, but the researcher was unable to reproduce the issue. This reinforced our hypothesis that third-party servers were responsible and that they may have since patched the issue.</p>
    <div>
      <h3>Was Cloudflare compromised?</h3>
      <a href="#was-cloudflare-compromised">
        
      </a>
    </div>
    <p>While we were running versions of the software as described above, thanks to our speed of response and defense-in-depth approach, we do not believe Cloudflare was compromised. We have invested significant effort in validating this, and we will continue that work until everything is known about this vulnerability. Here is a bit about that part of our efforts.</p><p>As we were working to evaluate and isolate all the contexts in which the vulnerable software might be running and remediate them, we started a separate workstream to analyze whether any of those instances had been exploited. Our detection and response methodology follows industry-standard Incident Response practices and was thoroughly deployed to validate whether any of our assets were indeed compromised. We followed the multi-pronged approach described next.</p>
    <div>
      <h3>Reviewing Internal Data</h3>
      <a href="#reviewing-internal-data">
        
      </a>
    </div>
    <p>Our asset inventory and code scanning tooling allowed us to identify all applications and services reliant on Apache Log4j. While these applications were being reviewed and upgraded if needed, we were performing a thorough scan of these services and hosts. Specifically, the exploit for CVE-2021-44228 relies on particular patterns in log messages and parameters, for example <code>\$\{jndi:(ldap[s]?|rmi|dns):/[^\n]+</code>. For each potentially impacted service, we performed a log analysis to expose any attempts at exploitation.</p>
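As a simplified illustration of that kind of log sweep (the log directory below is hypothetical, and real payloads are often obfuscated, so a single pattern is far from exhaustive), a recursive grep for the plain form of the payload might look like:

```shell
# Sketch: sweep a log directory for unobfuscated Log4Shell payloads
# using a pattern similar to the one described above.
# Obfuscated variants (nested lookups such as ${${lower:j}ndi:...})
# require richer detection rules than this single expression.
scan_logs() {
  grep -rEin '\$\{jndi:(ldap[s]?|rmi|dns):' "$1" 2>/dev/null \
    || echo "no matches found"
}

scan_logs /var/log/myservice
```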
    <div>
      <h3>Reviewing Network Analytics</h3>
      <a href="#reviewing-network-analytics">
        
      </a>
    </div>
    <p>Our network analytics allow us to identify suspicious network behavior that may be indicative of attempted or actual exploitation of our infrastructure. We scrutinized our network data to identify the following:</p><ol><li><p><b>Suspicious Inbound and Outbound Activity:</b> By analyzing suspicious inbound and outbound connections, we were able to sweep our environment and identify whether any of our systems were displaying signs of active compromise.</p></li><li><p><b>Targeted Systems &amp; Services:</b> By leveraging pattern analytics against our network data, we uncovered systems and services targeted by threat actors. This allowed us to perform correlative searches against our asset inventory, and drill down to each host to determine if any of those machines were exposed to the vulnerability or actively exploited.</p></li><li><p><b>Network Indicators:</b> From the aforementioned analysis, we gained insight into the infrastructure of various threat actors and identified network indicators being utilized in attempted exploitation of this vulnerability. Outbound activity to these indicators was blocked in Cloudflare Gateway.</p></li></ol>
    <div>
      <h3>Reviewing endpoints</h3>
      <a href="#reviewing-endpoints">
        
      </a>
    </div>
    <p>We were able to correlate our log analytics and network analytics workflows to supplement our endpoint analysis. Using the findings from both of those analyses, we crafted endpoint scanning criteria to identify any additional potentially impacted systems and analyze individual endpoints for signs of active compromise. We utilized the following techniques:</p>
    <div>
      <h5>Signature Based Scanning</h5>
      <a href="#signature-based-scanning">
        
      </a>
    </div>
    <p>We are in the process of deploying custom YARA detection rules to alert on exploitation of the vulnerability. These rules will be deployed in the Endpoint Detection and Response agent running on all of our infrastructure and in our centralized Security Information and Event Management (SIEM) tool.</p>
    <div>
      <h5>Anomalous Process Execution and Persistence Analysis</h5>
      <a href="#anomalous-process-execution-and-persistence-analysis">
        
      </a>
    </div>
    <p>Cloudflare continuously collects and analyzes endpoint process events from our infrastructure. We used these events to search for post-exploitation activity such as downloads of second-stage payloads and anomalous child processes.</p><p>Using all of these approaches, we have found no evidence of compromise.</p>
    <div>
      <h3>Third-Party risk</h3>
      <a href="#third-party-risk">
        
      </a>
    </div>
    <p>In the analysis above, we focused on reviewing code and data we generate ourselves. But like most companies, we also rely on software that we have licensed from third parties. When we started our investigation into this matter, we partnered with the company’s information technology team to pull together a list of every primary third-party provider and all sub-processors, to inquire about whether they were affected. We’re in the process of receiving and reviewing responses from the providers. Any provider that we deem critical and that is impacted by this vulnerability will be disabled and blocked until it is fully remediated.</p>
    <div>
      <h3>Validation that our defense-in-depth approach worked</h3>
      <a href="#validation-that-our-defense-in-depth-approach-worked">
        
      </a>
    </div>
    <p>As we responded to this incident, we found several places where our defense-in-depth approach worked.</p><ol><li><p>Restricting outbound traffic</p><p>Restricting the ability to <i>call home</i> breaks an essential link in the <i>kill chain</i> and makes exploitation of vulnerabilities much harder. As noted above, we leverage Kubernetes network policies to restrict egress to the Internet on our deployments. In this context, that prevents the next stage of the attack: the network connection to attacker-controlled resources is dropped.</p><p>All of our externally facing services are protected by Cloudflare. The origin servers for these services are set up via authenticated origin pulls. This means that none of the servers are exposed directly to the Internet.</p></li><li><p>Using Cloudflare to secure Cloudflare</p><p>All of our internal services are protected by our Zero Trust product, Cloudflare Access. Therefore, once we had patched the limited <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">attack surface</a> we had identified, any exploit attempts against Cloudflare’s systems or customers leveraging Access would have required the attacker to authenticate.</p><p>And because we have the Cloudflare WAF product deployed as part of our effort to secure Cloudflare using Cloudflare, we benefited from all the work being done to protect our customers. All new WAF rules written to protect against this vulnerability were updated with a default action of <code>BLOCK</code>. Like every other customer who has the WAF deployed, we are now receiving protection without any action required on our side.</p></li></ol>
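As an illustration of the egress restriction described above, a default-deny Kubernetes NetworkPolicy along these lines (the namespace, labels, and ports are hypothetical, not Cloudflare's actual configuration) only permits traffic to a curated set of in-cluster services:

```yaml
# Hypothetical sketch: pods in this namespace may only reach in-cluster
# DNS and one curated internal service; all other egress is dropped,
# including JNDI callbacks to attacker-controlled hosts on the Internet.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: workers-logging   # example namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:                      # allow DNS lookups inside the cluster
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
    - to:                      # allow one curated internal service
        - podSelector:
            matchLabels:
              app: internal-ingest
      ports:
        - protocol: TCP
          port: 443
```

Because NetworkPolicy egress rules are additive over an implicit default deny, any destination not matched above, such as an attacker's LDAP server, is unreachable.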
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>While our response to this challenging situation continues, we hope that this outline of our efforts helps others. We are grateful for all the support we have received from within and outside of Cloudflare.</p><p><i>Thank you to Evan Johnson, Anjum Ahuja, Sourov Zaman, David Haynes, and Jackie Keith who also contributed to this blog.</i></p> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Log4J]]></category>
            <category><![CDATA[Log4Shell]]></category>
            <guid isPermaLink="false">1O7bzj7EcacHO0pyRXeWVY</guid>
            <dc:creator>Rushil Shah</dc:creator>
            <dc:creator>Thomas Calderon</dc:creator>
        </item>
    </channel>
</rss>