
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, the technologies we use, and how to join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 18:04:05 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Our ongoing commitment to privacy for the 1.1.1.1 public DNS resolver]]></title>
            <link>https://blog.cloudflare.com/1111-privacy-examination-2026/</link>
            <pubDate>Wed, 01 Apr 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Eight years ago, we launched 1.1.1.1 to build a faster, more private Internet. Today, we’re sharing the results of our latest independent examination. The result: our privacy protections are working exactly as promised. ]]></description>
            <content:encoded><![CDATA[ <p>Exactly 8 years ago today, <a href="https://blog.cloudflare.com/announcing-1111/"><u>we launched the 1.1.1.1 public DNS resolver</u></a>, intending to build the world’s <a href="https://www.dnsperf.com/#!dns-resolvers"><u>fastest</u></a> resolver — and the most private one. We knew that trust is everything for a service that handles the "phonebook of the Internet." That’s why, at launch, we made a unique commitment to publicly confirm that we are doing what we said we would do with personal data. In 2020, we <a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/"><u>hired an independent firm to check our work</u></a>, instead of just asking you to take our word for it. We shared our intention to repeat such examinations in the future. We also called on other providers to do the same, but, as far as we are aware, no other major public resolver has had its DNS privacy practices independently examined.</p><p>At the time of the 2020 review, the 1.1.1.1 resolver was less than two years old, and the purpose of the examination was to verify that our systems made good on all the commitments we made about how our 1.1.1.1 resolver functioned, even commitments that did not impact personal data or user privacy. </p><p>Since then, Cloudflare’s technology stack has grown significantly in both scale and complexity. For example, we <a href="https://blog.cloudflare.com/big-pineapple-intro/"><u>built an entirely new platform</u></a> that powers our 1.1.1.1 resolver and other DNS systems. So we felt it was vital to review our systems, and our 1.1.1.1 resolver privacy commitments in particular, once again with a rigorous and independent examination. </p><p>Today, we are sharing the results of our most recent privacy examination by the same Big 4 accounting firm. 
Its independent examination is available on our <a href="https://www.cloudflare.com/trust-hub/compliance-resources/"><u>compliance page</u></a>.</p><p>Following the conclusion of the 2024 calendar year, we began our comprehensive process of collecting and preparing evidence for our independent auditors. The examination took several months and required many teams across Cloudflare to provide supporting evidence of our privacy controls in action. After the independent auditors completed the examination, we're pleased to share the final report, which provides assurance that our commitments were met: our systems are as private as promised. Most importantly, <b>our core privacy guarantees for the 1.1.1.1 resolver remain unchanged and confirmed by independent review:</b></p><ul><li><p><b>Cloudflare will not sell or share public resolver users’ personal data with third parties or use personal data from the public resolver to target any user with advertisements.</b></p></li><li><p><b>Cloudflare will only retain or use what is being asked, not information that will identify who is asking it.</b> </p></li><li><p><b>Source IP addresses are anonymized and deleted within 25 hours.</b></p></li></ul><p>We also want to be transparent about two points. First, as we explained in <a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/"><u>our 2020 blog announcing the results of our previous examination</u></a>, randomly sampled network packets (at most 0.05% of all traffic, including the querying IP address of 1.1.1.1 public resolver users) are used solely for network troubleshooting and attack mitigation.</p><p>Second, the scope of this examination focuses exclusively on our privacy commitments. 
Back in 2020, our first examination reviewed all of our representations: not only our privacy commitments, but also our description of how we would handle anonymized transaction and debug log data (“Public Resolver Logs”) for the legitimate operation of our Public Resolver and research purposes. Over time, our uses of this data, such as powering <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> (released after our initial 1.1.1.1 examination), have changed how we treat those logs, even though there is no impact on personal information or personal privacy. </p><p><a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination/"><u>As we noted with the first review 6 years ago</u></a>: we’ve never wanted to know what individuals do on the Internet, and we’ve taken technical steps to ensure we can’t. At Cloudflare, we believe privacy should be the default. By proactively undergoing these independent examinations, we hope to set a standard for the rest of the industry. We believe every user, whether they are browsing the web directly or deploying an AI agent on their behalf, deserves an Internet that doesn't track their movements. And further, Cloudflare steadfastly stands behind the commitment in our <a href="https://www.cloudflare.com/privacypolicy/"><u>Privacy Policy</u></a> that we will not combine any information collected from DNS queries to the 1.1.1.1 resolver with any other Cloudflare or third-party data in any way that can be used to identify individual end users.</p><p>As always, we thank you for trusting 1.1.1.1 to be your gateway to the Internet. Details of the 1.1.1.1 resolver privacy examination and our accountant’s report can be found on Cloudflare’s <a href="https://www.cloudflare.com/trust-hub/compliance-resources/"><u>Certifications and compliance resources page</u></a>. 
Visit <a href="https://developers.cloudflare.com/1.1.1.1/"><u>https://developers.cloudflare.com/1.1.1.1/</u></a> to learn more about how to get started with the Internet's fastest, privacy-first DNS resolver. </p> ]]></content:encoded>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">VOddnCi9jbM6zHOay1HCN</guid>
            <dc:creator>Rory Malone</dc:creator>
            <dc:creator>Hannes Gerhart</dc:creator>
            <dc:creator>Leah Romm</dc:creator>
        </item>
        <item>
            <title><![CDATA[Standing up for the open Internet: why we appealed Italy’s "Piracy Shield" fine]]></title>
            <link>https://blog.cloudflare.com/standing-up-for-the-open-internet/</link>
            <pubDate>Mon, 16 Mar 2026 19:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is appealing a €14 million fine from Italian regulators over "Piracy Shield," a system that forces providers to block content without oversight. We are challenging this framework to protect the Internet from disproportionate overblocking and lack of due process. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, our mission is to help build a better Internet. Usually, that means rolling out new services to our millions of users or defending the web against the world’s largest cyber attacks. But sometimes, building a better Internet requires us to stand up against laws or regulations that threaten its fundamental architecture.</p><p>Last week, Cloudflare continued its legal battle against "Piracy Shield," a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet. After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare a staggering €14 million (~$17 million). We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself.</p><p>While the fine is significant, the principles at stake are even larger. This case isn't just about a single penalty; it’s about whether a handful of private entities can prioritize their own economic interests over those of Internet users by forcing global infrastructure providers to block large swaths of the Internet without oversight, transparency, or due process.</p>
    <div>
      <h3>What is Piracy Shield?</h3>
      <a href="#what-is-piracy-shield">
        
      </a>
    </div>
    <p>To understand why we are fighting this, it’s necessary to take a step back and understand Piracy Shield. Marketed by AGCOM as an innovative tool to fight copyright infringement, the system is better understood as a blunt tool for rightsholders to control what is available on the Internet without any traditional legal safeguards.</p><p>Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes. Piracy Shield operates as a “black box” because there is:</p><ul><li><p><b>No judicial oversight:</b> Private companies, not judges or government officials, decide what gets blocked.</p></li><li><p><b>No transparency:</b> The public, and even the service providers themselves, are often left in the dark about who requested a block or why.</p></li><li><p><b>No due process:</b> There is no mechanism for a website owner to challenge a block before their site becomes unavailable on the Italian web.</p></li><li><p><b>No redress:</b> Along with a complete lack of transparency or due process, Piracy Shield offers no effective way for impacted parties to seek redress from erroneous blocking.</p></li></ul><p>It’s not entirely surprising that Piracy Shield so clearly prioritizes the economic interests of media companies over the rights of Italian Internet users. The system was “donated” to the Italian government by SP Tech, an arm of the law firm that represents several of Piracy Shield’s major direct beneficiaries, including Lega Nazionale Professionisti Serie A (Italy’s major soccer league).</p>
    <div>
      <h3>The high cost of Piracy Shield</h3>
      <a href="#the-high-cost-of-piracy-shield">
        
      </a>
    </div>
    <p>Almost immediately after Piracy Shield was rolled out, there were significant problems. In addition to the unworkable 30-minute deadline and the lack of safeguards described above, the scheme requires service providers to engage in IP address blocking. This creates an unavoidable risk of <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>overblocking innocent websites</u></a> because IP addresses are regularly and necessarily shared by thousands of websites. Not surprisingly, within a few months of its launch, Piracy Shield caused major outages for people and businesses who had done nothing wrong. </p><p>Notable failures include:</p><ul><li><p><b>Government and educational blackouts: </b>Tens of thousands of legitimate sites were rendered inaccessible from Italy, including Ukrainian government websites for schools and scientific research.</p></li><li><p><b>Small business &amp; NGO disruption:</b> A wide range of European small businesses and NGOs focused on social programs for women and children were inadvertently blocked.</p></li><li><p><b>Loss of essential services:</b> The system blocked access to Google Drive for over 12 hours, preventing thousands of Italian students and professionals from accessing critical files.</p></li><li><p><b>Persistent collateral blocking:</b> A September 2025 <a href="https://research.utwente.nl/en/publications/90th-minute-a-first-look-to-collateral-damages-and-efficacy-of-th/"><u>study</u></a> by the University of Twente confirmed that the system routinely blocks legitimate websites for months at a time.</p></li></ul><p>Even when faced with clear evidence that Piracy Shield has caused significant and repeated overblocking, AGCOM did not change course. Rather, it chose to <i>expand</i> Piracy Shield to apply to global DNS providers and VPNs, services that are closely associated with privacy and free expression. 
AGCOM also started taking increasingly aggressive steps to force global service providers, even ones with no legal or operational presence in Italy, to register with Piracy Shield.</p>
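The overblocking risk described above follows directly from address sharing. As a toy illustration (not Cloudflare code; the hostnames are hypothetical and the IP is drawn from the RFC 5737 documentation range):

```python
# Toy model: one IP address fronting many unrelated hostnames, as is
# routine behind CDNs and shared hosting. Null-routing the IP to stop
# one allegedly infringing site takes every co-hosted site down too.
SHARED_IP = "203.0.113.7"  # RFC 5737 documentation address (hypothetical)

sites_on_ip = {
    "allegedly-infringing.example": SHARED_IP,  # the intended target
    "school.example": SHARED_IP,                # collateral damage
    "ngo.example": SHARED_IP,                   # collateral damage
    "small-business.example": SHARED_IP,        # collateral damage
}

def reachable_after_ip_block(sites: dict, blocked_ip: str) -> list:
    """Hostnames still reachable once blocked_ip is blocked."""
    return [host for host, ip in sites.items() if ip != blocked_ip]

# Blocking the single shared IP leaves nothing reachable: the one
# targeted site and the three innocent ones all go dark together.
print(reachable_after_ip_block(sites_on_ip, SHARED_IP))  # → []
```

In practice a single shared IP can front thousands of hostnames, so the collateral ratio is far worse than in this four-site sketch.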
    <div>
      <h3>Cloudflare’s principled challenge</h3>
      <a href="#cloudflares-principled-challenge">
        
      </a>
    </div>
    <p>Cloudflare has been clear about the risks posed by Piracy Shield from the beginning. In 2024, we met with AGCOM to highlight the scheme’s structural flaws and <a href="https://labs.ripe.net/author/antonio-prado/live-event-blocking-at-scale-effectiveness-vs-collateral-damage-in-italys-piracy-shield/"><u>consequences</u></a> and proposed <a href="https://blog.cloudflare.com/h1-2025-transparency-report/"><u>more effective ways to collaborate</u></a> that wouldn't break the Internet’s core architecture.  </p><p>When these concerns were ignored, we moved on to legal action. We challenged AGCOM’s effort to force Cloudflare to join Piracy Shield in the Italian administrative courts and, along with the Computer &amp; Communications Industry Association (CCIA), we filed a complaint with the European Commission. More informally, we have continued to reach out to government officials both in Italy and at the EU level to explain our position and make our concerns known. Our position has been consistent and remains that Piracy Shield is incompatible with EU law, most notably the Digital Services Act (DSA), which requires that any content restriction be proportionate and subject to strict procedural safeguards.</p><p>The European Commission, following our complaint, expressed similar concerns, issuing a <a href="https://assets.ctfassets.net/zkvhlag99gkb/2GPYK05HVkVtsXNlZG4VsP/f4a0b571e8be3bb43e28b20973f0a1cb/2025-148-it-en-6852dc2dd741b167827775.pdf"><u>letter</u></a> on June 13, 2025, criticizing the lack of oversight inherent in the Piracy Shield framework. And on December 23, 2025, the Italian administrative court issued an encouraging ruling requiring AGCOM to share with Cloudflare all the records that purportedly support Piracy Shield blocking orders. While we have not yet received those records, we expect them to shed significant light on Piracy Shield’s operations. </p>
    <div>
      <h3>An excessive fine and still no transparency</h3>
      <a href="#an-excessive-fine-and-still-no-transparency">
        
      </a>
    </div>
    <p>Rather than awaiting the outcome of our legal challenges, and less than one week after being ordered to disclose Piracy Shield records to Cloudflare, AGCOM moved on December 29, 2025, to issue its fine. The fine’s timing was not the only eyebrow-raising thing about it. The math behind the penalty is as flawed as the system it is seeking to enforce.</p><p>Under Italian law, fines for non-compliance are capped at 2% of a company’s revenue <i>within the relevant jurisdiction</i>. Based on Cloudflare’s Italian earnings, that cap should have limited any fine to approximately €140,000. Instead, AGCOM calculated the fine based on our <i>global</i> revenue, resulting in a penalty nearly 100 times higher than the legal limit.</p><p>This disproportionate approach sends a chilling message to the global tech community: if you question a flawed regulatory system or defend the rights of your users and the global Internet, you risk facing punitive and excessive financial retaliation.</p><p>At the same time, AGCOM still has not shared with Cloudflare the Piracy Shield records that it was ordered to disclose. Instead, just four days before the deadline for disclosure, AGCOM informed us that it would make some of the records available for inspection at an AGCOM facility in Naples, subject to supervision by AGCOM officials. These limitations are not just unreasonably burdensome and contrary to the letter and spirit of the disclosure order; they raise real questions about why AGCOM is so intent on resisting transparency.</p>
    <div>
      <h3>Next steps: the path forward</h3>
      <a href="#next-steps-the-path-forward">
        
      </a>
    </div>
    <p>We are not backing down. Cloudflare is appealing the €14 million fine, pushing for full access to AGCOM’s Piracy Shield records, and will continue to challenge the underlying legality of the Piracy Shield blocking orders in the Italian administrative courts.</p><p>We recognize that rightsholders have a legitimate interest in protecting their content. In fact, we work with rightsholders every day to address infringement in ways that are precise and effective. But those interests cannot override the basic requirements of legal due process or the technical integrity of the global Internet and our network.</p><p>We will continue to pursue this challenge in the Italian courts and through the European Commission. Global connectivity is too important to be governed by "black boxes" with 30-minute deadlines that result in widespread overblocking with no means of redress. Cloudflare remains committed to building a better Internet: one where the rules are transparent, the regulators are accountable, and the infrastructure that connects the world remains free, open, and secure.</p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Internet Regulation]]></category>
            <category><![CDATA[Cybersecurity]]></category>
            <guid isPermaLink="false">6V4c3s6W2nqoSNaUeUpqWX</guid>
            <dc:creator>Patrick Nemeroff</dc:creator>
            <dc:creator>Emily Terrell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Innovating to address streaming abuse — and our latest transparency report]]></title>
            <link>https://blog.cloudflare.com/h1-2025-transparency-report/</link>
            <pubDate>Fri, 19 Dec 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's H1 2025 Transparency Report is here. We discuss our principles on content blocking and our innovative approach to combating unauthorized streaming and copyright abuse. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare's latest <a href="https://www.cloudflare.com/transparency/"><u>transparency report</u></a> — covering the first half of 2025 — is now live. As part of our commitment to transparency, Cloudflare publishes such reports twice a year, describing how we handle legal requests for customer information and reports of abuse of our services. Although we’ve been publishing these reports for over 10 years, we’ve continued to adapt our transparency reporting and our commitments to reflect Cloudflare’s growth and changes as a company. Most recently, we made <a href="https://blog.cloudflare.com/cloudflare-2024-transparency-reports-now-live-with-new-data-and-a-new-format/"><u>changes</u></a> to the format of our reports to make them even more comprehensive and understandable.</p><p>In general, we try to provide updates on our approach or the requests that we receive in the transparency report itself. To that end, we have some notable updates for the first half of 2025. But our transparency report can only go so far in explaining the numbers. </p><p>In this blog post, we’ll do a deeper dive on one topic: Cloudflare’s approach to streaming and claims of copyright violations. Given increased access to AI tools and other systems for abuse, bad actors have become increasingly sophisticated in their attempts to stream copyrighted content, often incorporating steps to hide their behavior. We’ve responded by experimenting with new ways to address allegations of streaming and copyright infringement: working closely with rightsholders to better identify domains and accounts that might be streaming, speeding up our processes so we can respond in real time, and improving how we identify possible risks. </p><p>This effort aligns with the interests of policymakers, rightsholders, and online service providers in preventing pirated streaming of sporting and other events over the Internet. 
Indeed, the same actors who infringe legitimate intellectual property rights with unauthorized streaming may seek to misuse Cloudflare’s services, impacting performance, costs, and reliability for other users. This shared interest in identifying and responding to unauthorized streaming has led to opportunities for partnerships and better information sharing. Preventing unauthorized streaming is a hard problem that requires those partnerships, with streamers constantly finding new ways to evade detection and preventive actions.</p>
    <div>
      <h3>Innovating to address abuse and identify new threats </h3>
      <a href="#innovating-to-address-abuse-and-identify-new-threats">
        
      </a>
    </div>
    <p>With approximately 20% of the web behind Cloudflare’s network, building smart and scalable abuse processes has never been optional. Even as a much smaller company with more limited services, we <a href="https://blog.cloudflare.com/out-of-the-clouds-and-into-the-weeds-cloudflares-approach-to-abuse-in-new-products/"><u>recognized</u></a> the importance of creating a system that efficiently got abuse reporting to those best positioned to action the reports, typically the website owner or hosting provider. Our view was that we could play an important role in ensuring that allegations of abuse reported to us went to those entities without compromising their security.</p><p>As we have developed new services, we have applied a service-specific <a href="https://www.cloudflare.com/trust-hub/abuse-approach/"><u>approach to abuse</u></a>, reflecting the nature of the services provided, legal requirements, and human rights considerations. This approach means that we treat hosted content differently than content on websites that use our security and CDN services, an approach reflected throughout our transparency report. </p><p>Beyond Cloudflare’s response to individual abuse reports, we also recognize the value of systems that learn from the abuse reports we receive. Not only do efforts to identify abuse patterns improve our ability to detect and mitigate abuse on our network, they enable us to better protect our customers from a wide range of cyber threats.</p><p>Rapid developments in AI and constantly improving technologies create new challenges and new opportunities. Bad actors have learned how to use AI to quickly stand up sophisticated phishing campaigns, or shift and divide unauthorized streaming traffic to evade detection. 
LLMs also enable misuse of abuse reporting systems, facilitating the creation of large volumes of low quality or even malicious abuse reports.</p><p>At the same time, the ability to apply machine learning and AI to the reams of traffic and information behind Cloudflare’s network has enabled the development of new tools to detect and mitigate abusive conduct. Cloudflare has created automated systems that can keep up with the scale of the issue, all while more accurately identifying genuine abuse. In 2024, as reflected in the temporary surge in phishing actions reported in our <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/7vust2n7oACblNR2Jk7jZx/5b84afdbb6fbdcc751d6a7ba9a7f938b/H2_2024_AbuseProcessesTransparencyReport_AQFinal.pdf?_gl=1*3escw2*_gcl_au*MjgzODYzMTA4LjE3NTY4NDEzMjg.*_ga*MmIwZjcyYmUtY2EzYi00ZDdlLWJhZWEtOTM5NDQ2MjFhZGEz*_ga_SQCRB0TXZW*czE3NjIxMTg3OTkkbzIxMiRnMSR0MTc2MjEyMTM3OCRqNTkkbDAkaDAkZHdnZlU5UHM2VU5YUUlhRVVlUkNKb1g0ck1kM3ZiR2xZM0E."><u>abuse transparency report</u></a>, Cloudflare expanded the use of automated systems to respond to reports of technical abuse like phishing. Behind the scenes, Cloudflare has taken similar steps to identify new patterns of abusive behavior, to help prevent bad actors from using our services in the first place.</p><p>Knowing that bad actors aren’t likely to give up, Cloudflare has continued innovating in 2025. We’re exploring new ways to learn about and respond to abuse, with the goal of identifying and pursuing the strategies with the most promise for long-term impact.</p>
    <div>
      <h3>Technical responses to streaming abuse</h3>
      <a href="#technical-responses-to-streaming-abuse">
        
      </a>
    </div>
    <p>Cloudflare has always believed that, regardless of their size, websites deserve a secure, fast, reliable web presence. And because we didn’t think you should have to pay for coming under cyberattack, we’ve offered a <a href="https://www.cloudflare.com/plans/free/"><u>free plan</u></a> for websites since Cloudflare launched in 2010. That system — which protects websites around the world from cyberattack for free — works because a typical website does not consume much bandwidth.</p><p>Streaming is different. Every second of a typical video requires as much bandwidth as loading a full webpage. To ensure that we can continue to provide free services, we’ve always restricted use of our free services to deliver streaming video. Although most of our customers respect these limitations and understand the role they play in enabling our ability to provide these services for free, we sometimes have users attempt to misuse our service to stream video.</p><p>In the first half of 2025, Cloudflare worked with several large rightsholders on efforts to address unauthorized streaming. This included providing rightsholders with an API for streamlined reporting, giving feedback on the quality of reports to ensure rightsholders are giving us actionable information, and, after verifying reports against our own internal metrics, taking steps to respond to streaming reports at scale.</p><p>Those efforts bore fruit, helping us better identify and action unauthorized streaming. The engagement resulted in a significant increase in DMCA reports that Cloudflare received for websites using our hosted services, from approximately 11,000 in the second half of 2024 to approximately 125,000 in the first half of 2025. It also enabled us to speed up our notice and takedown process as we took action in response to 54,000 reports, compared to 1,000 reports in the second half of 2024. 
Using information from these reports, we identified additional signs of abusive behavior, leading us to terminate hosting services to another 21,000 accounts.</p><p>Cloudflare also relied on information provided by rightsholders to bolster our technical tools for preventing unauthorized streaming over Cloudflare’s network by websites using our non-hosted services. To maintain the ability to provide free and low-cost services to static websites, we may take action on websites using those services if they appear to be streaming, regardless of whether that content infringes on copyright. Over the years, we have built a variety of tools to identify and restrict this type of streaming. While rightsholders’ streaming reports are focused on infringement, we can use these reports as signals to help inform our technical tools and improve our response. Working closely with rightsholders has improved our response time on their specific abuse reports and has also helped us prevent thousands of similar websites attempting to stream in an unauthorized manner over our network before they have ever been identified as infringing.</p><p>The information about streamer tactics and techniques gleaned from these efforts is useful in our broader cybersecurity efforts. Earlier this year, for example, we used information from our streaming program to help a smaller customer whose services were being abused to host streaming content without their knowledge. Understanding how illegal streamers were accessing and abusing their services enabled us to provide them with guidance and tools to prevent the behavior.</p><p>While we have made significant progress on this issue, we fully expect that streamers will adjust their behavior in response to the steps we’ve taken. Cloudflare’s work is not done, and we will continue to look for innovative ways to prevent and address this type of abuse. </p>
    <div>
      <h3>Addressing blocking demands</h3>
      <a href="#addressing-blocking-demands">
        
      </a>
    </div>
    <p>As Cloudflare has been collaborating with rightsholders on technical solutions to streaming that address the issue in real time, many regulators and rightsholders have taken a clunkier approach: pursuing legally-mandated blocking of the Internet. Lack of technical expertise or sheer indifference can lead to significant overblocking of innocent websites, often without transparency or accountability for those responsible. We share the view of civil society groups like the <a href="https://www.internetsociety.org/resources/policybriefs/2025/perspectives-on-internet-content-blocking/"><u>Internet Society</u></a> that the best and most effective approach remains removing illegal content at the source.</p><p>One of the most notorious examples of overblocking has been actions by Spanish football league LaLiga. Working through ISPs in Spain, they have engaged in widespread blocking of IP addresses shared by many thousands of websites during matches, without any government oversight. This has caused severe Internet outages across Spain during the time of matches. The disproportionate effect of IP address blocking is <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>well known</u></a>. LaLiga has nonetheless been unapologetic about causing the blocking of countless unrelated websites, suggesting that their commercial interests should trump the rights of Spanish Internet users to access the broader Internet during match times. Although this approach ignores well-established legal principles requiring that any blocking be proportionate to the problem, the Spanish government has not acted to protect the rights of Spanish Internet users. 
Balanced against these clear harms and lack of government willingness to provide sufficient oversight, we have seen no concrete evidence that such blunt force blocking efforts meaningfully solve the issue.</p><p>Cloudflare believes that regulators and rightsholders have a responsibility to seek out proportionate ways to prevent online infringement, and that working collaboratively with service providers offers the best way to effectively address abuse without fundamentally damaging the Internet. For reasons illustrated by the LaLiga example, blocking at the infrastructure layer is often overbroad, non-transparent, and ineffective.</p><p>Although we have real concerns about blocking, and particularly the way blocking has been co-opted by rightsholders to further their commercial interests over the rights of ordinary Internet users to access lawful content, Cloudflare has examined ways that blocking might be applied as a more targeted or proportionate response. In general, Cloudflare has found that blocking is of limited effectiveness, as determined users will find ways to circumvent restrictions. Nonetheless, Cloudflare has taken steps to comply with valid orders related to our CDN services that satisfy human rights principles relating to proportionality, due process, free expression, and transparency. In countries with laws that provide for blocking access to online content and provide appropriate oversight, Cloudflare may geoblock websites to limit access in the relevant jurisdiction to those websites through Cloudflare’s CDN services.</p><p>Cloudflare has never blocked through our public DNS resolver. As we have previously <a href="https://blog.cloudflare.com/latest-copyright-decision-in-germany-rejects-blocking-through-global-dns-resolvers/"><u>described</u></a>, we believe demands to block through public DNS are at odds with the desire for an open Internet and would require the creation of new tools that are contrary to the design of our resolver. 
We continue to litigate against efforts to require us to build such capabilities. Cloudflare has sometimes taken action to geoblock access to websites through Cloudflare’s CDN and security services, in response to DNS blocking orders.</p><p>In the first half of 2025, Cloudflare saw a marked increase in the number of blocking orders it received in Europe. Private rightsholders obtained multiple orders directing Cloudflare to block access to websites in Belgium, France, and Italy. While Cloudflare has challenged aspects of those orders, we have taken steps to comply with them by geoblocking access to the websites at issue in the relevant countries through Cloudflare’s CDN and security services. </p><p>Cloudflare also began giving effect to UK court orders directing other service providers to block websites identified as being dedicated to copyright infringement. Based on a voluntary agreement with rightsholders, Cloudflare is geoblocking websites subject to these orders through our pass-through CDN and security services. When we take action on domains pursuant to these orders, we post an interstitial page that returns a <a href="https://developers.cloudflare.com/support/troubleshooting/http-status-codes/4xx-client-error/error-451/"><u>451 status code</u></a> that directs the visitor to the specific order, which includes a process for affected parties to contest the blocking action.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74JGtoTseoNdxz0xLIW4AK/55545650ab85002692ca9bb07ba6a2a9/image3.png" />
          </figure><p><sup>Example of a 451 error page in the UK.</sup></p><p>Our efforts in the UK to block content based on a finding of infringement in an order directed to a third party reflect our desire to experiment with more targeted approaches than the overblocking we have seen in other countries in Europe, as well as our understanding that the UK’s regime includes important protections around proportionality, due process, and transparency, including an opportunity for affected parties to seek redress. We are currently monitoring the impact of this approach, and have taken these steps with the understanding that we can change course if we see the system being abused. </p><p>Finally, in the first half of 2025, we have seen an expansion of areas for which blocking has been demanded. We received official government notices in France and Belgium that websites using our hosted services were offering gambling services illegally in those jurisdictions. In both cases, we were able to share the notice with our customer, and they took action themselves to address it. This illustrates the benefit of connecting our customer directly with the government regulator so that they can address issues with their websites, rather than proceeding directly to a blocking demand. </p>
    <div>
      <h3>Looking forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>Cloudflare will continue to look for ways to work with rightsholders and regulators to find effective and proportionate ways to address online abuse. As a company that values transparency, we use our biannual transparency reports to describe the principles we apply in doing this work, and in responding to abuse reports or requests for customer information more generally. We invite you to dive into the numbers and <a href="https://www.cloudflare.com/transparency/"><u>learn more here</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">5mt8quFYw1l3UpRAh6JsHU</guid>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keeping the Internet fast and secure: introducing Merkle Tree Certificates]]></title>
            <link>https://blog.cloudflare.com/bootstrap-mtc/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is launching an experiment with Chrome to evaluate fast, scalable, and quantum-ready Merkle Tree Certificates, all without degrading performance or changing WebPKI trust relationships. ]]></description>
            <content:encoded><![CDATA[ <p>The world is in a race to build its first quantum computer capable of solving practical problems not feasible on even the largest conventional supercomputers. While the quantum computing paradigm promises many benefits, it also threatens the security of the Internet by breaking much of the cryptography we have come to rely on.</p><p>To mitigate this threat, Cloudflare is helping to migrate the Internet to Post-Quantum (PQ) cryptography. Today, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of traffic to Cloudflare's edge network is protected against the most urgent threat: an attacker who can intercept and store encrypted traffic today and then decrypt it in the future with the help of a quantum computer. This is referred to as the <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a><i> </i>threat.</p><p>However, this is just one of the threats we need to address. A quantum computer can also be used to crack a server's <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>, allowing an attacker to impersonate the server to unsuspecting clients. The good news is that we already have PQ algorithms we can use for quantum-safe authentication. The bad news is that adoption of these algorithms in TLS will require significant changes to one of the most complex and security-critical systems on the Internet: the Web Public-Key Infrastructure (WebPKI).</p><p>The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's a roughly 20-fold increase in size. 
Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to tens of kilobytes of overhead per handshake. This is enough to have a <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/#how-many-added-bytes-are-too-many-for-tls"><u>noticeable impact</u></a> on the performance of TLS.</p><p>That makes drop-in PQ certificates a tough sell to enable today: they don’t bring any security benefit before Q-day — the day a cryptographically relevant quantum computer arrives — but they do degrade performance. We could sit and wait until Q-day is a year away, but that’s playing with fire. Migrations always take longer than expected, and by waiting we risk the security and privacy of the Internet, which is <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/"><u>dear to us</u></a>.</p><p>It's clear that we must find a way to make post-quantum certificates cheap enough to deploy today by default for everyone — not just those that can afford it. In this post, we'll introduce you to the plan we’ve brought, together with industry partners, to the <a href="https://datatracker.ietf.org/group/plants/about/"><u>IETF</u></a> to redesign the WebPKI in order to allow a smooth transition to PQ authentication with no performance impact (and perhaps a performance improvement!). We'll provide an overview of one concrete proposal, called <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a>, whose goal is to whittle down the number of public keys and signatures in the TLS handshake to the bare minimum required.</p><p>But talk is cheap. 
We <a href="https://blog.cloudflare.com/experiment-with-pq/"><u>know</u></a> <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>from</u></a> <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>experience</u></a> that, as with any change to the Internet, it's crucial to test early and often. <b>Today we're announcing our intent to deploy MTCs on an experimental basis in collaboration with Chrome Security.</b> In this post, we'll describe the scope of this experiment, what we hope to learn from it, and how we'll make sure it's done safely.</p>
    <div>
      <h2>The WebPKI today — an old system with many patches</h2>
      <a href="#the-webpki-today-an-old-system-with-many-patches">
        
      </a>
    </div>
    <p>Why does the TLS handshake have so many public keys and signatures?</p><p>Let's start with Cryptography 101. When your browser connects to a website, it asks the server to <b>authenticate</b> itself to make sure it's talking to the real server and not an impersonator. This is usually achieved with a cryptographic primitive known as a digital signature scheme (e.g., ECDSA or ML-DSA). In TLS, the server signs the messages exchanged between the client and server using its <b>secret key</b>, and the client verifies the signature using the server's <b>public key</b>. In this way, the server confirms to the client that they've had the same conversation, since only the server could have produced a valid signature.</p><p>If the client already knows the server's public key, then only <b>1 signature</b> is required to authenticate the server. In practice, however, this is not really an option. The web today is made up of around a billion TLS servers, so it would be unrealistic to provision every client with the public key of every server. What's more, the set of public keys will change over time as new servers come online and existing ones rotate their keys, so we would need some way of pushing these changes to clients.</p><p>This scaling problem is at the heart of the design of all PKIs.</p>
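    <p>The sign-and-verify flow above can be sketched with a deliberately toy example. The sketch below uses textbook RSA with tiny primes purely to make the asymmetry concrete; the parameter values and helper names are ours, and real TLS uses ECDSA or ML-DSA from a vetted cryptography library, never anything like this:</p>

```python
import hashlib

# Toy "textbook RSA" keypair with tiny primes (p=61, q=53), purely to
# illustrate the sign/verify flow -- real TLS uses ECDSA or ML-DSA from
# a vetted library, never parameters like these.
n, e, d = 3233, 17, 2753  # public modulus, public exponent, secret exponent

def digest(message: bytes) -> int:
    # Hash the handshake transcript and reduce it into the toy key's range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Server side: sign the transcript with the secret key.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Client side: check the signature using only the public key (n, e).
    return pow(signature, e, n) == digest(message)

transcript = b"ClientHello || ServerHello || ..."
sig = sign(transcript)
assert verify(transcript, sig)  # only the secret-key holder could produce sig
```

    <p>The client never learns the secret key: it confirms the server's identity purely from the public half, which is exactly the property the rest of the PKI is built to distribute at scale.</p>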
    <div>
      <h3>Trust is transitive</h3>
      <a href="#trust-is-transitive">
        
      </a>
    </div>
    <p>Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a <b>certificate</b>.</p><p>A certificate binds a public key to the identity of the server — usually its DNS name, e.g., <code>cloudflareresearch.com</code>. The certificate is signed by a Certification Authority (CA) whose public key is known to the client. In addition to verifying the server's handshake signature, the client verifies the signature of this certificate. This establishes a chain of trust: by accepting the certificate, the client is trusting that the CA verified that the public key actually belongs to the server with that identity.</p><p>Clients are typically configured to trust many CAs and must be provisioned with a public key for each. This is much more manageable, however, since there are only hundreds of CAs rather than billions of servers. In addition, new certificates can be created without having to update clients.</p><p>These efficiencies come at a relatively low cost: for those counting at home, that's <b>+1</b> signature and <b>+1</b> public key, for a total of <b>2 signatures and 1 public key</b> per TLS handshake.</p><p>That's not the end of the story, however. As the WebPKI has evolved, these chains of trust have grown a bit longer. These days it's common for a chain to consist of two or more certificates rather than just one. This is because CAs sometimes need to rotate their keys, just as servers do. But before they can start using the new key, they must distribute the corresponding public key to clients. This takes time, since it requires billions of clients to update their trust stores. 
To bridge the gap, the CA will sometimes use the old key to issue a certificate for the new one and append this certificate to the end of the chain.</p><p>That's<b> +1</b> signature and<b> +1</b> public key, which brings us to<b> 3 signatures and 2 public keys</b>. And we still have a little ways to go.</p>
    <div>
      <h3>Trust but verify</h3>
      <a href="#trust-but-verify">
        
      </a>
    </div>
    <p>The main job of a CA is to verify that a server has control over the domain for which it’s requesting a certificate. This process has evolved over the years from a high-touch, CA-specific process to a standardized, <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>mostly automated process</u></a> used for issuing most certificates on the web. (Not all CAs fully support automation, however.) This evolution is marked by a number of security incidents in which a certificate was <b>mis-issued </b>to a party other than the server, allowing that party to impersonate the server to any client that trusts the CA.</p><p>Automation helps, but <a href="https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudulent_certificates"><u>attacks</u></a> are still possible, and mistakes are almost inevitable. <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>Earlier this year</u></a>, several certificates for Cloudflare's encrypted 1.1.1.1 resolver were issued without our involvement or authorization. This apparently occurred by accident, but it nonetheless put users of 1.1.1.1 at risk. (The mis-issued certificates have since been revoked.)</p><p>Ensuring mis-issuance is detectable is the job of the Certificate Transparency (CT) ecosystem. The basic idea is that each certificate issued by a CA gets added to a public <b>log</b>. Servers can audit these logs for certificates issued in their name. If a certificate is ever issued that they didn't request, the server operator can prove the issuance happened, and the PKI ecosystem can take action to prevent the certificate from being trusted by clients.</p><p>Major browsers, including Firefox, and Chrome and its derivatives, require certificates to be logged before they can be trusted. For example, Chrome, Safari, and Firefox will only accept the server's certificate if it appears in at least two logs the browser is configured to trust. 
This policy is easy to state, but tricky to implement in practice:</p><ol><li><p>Operating a CT log has historically been fairly expensive. Logs ingest billions of certificates over their lifetimes: when an incident happens, or even just under high load, it can take some time for a log to make a new entry available for auditors.</p></li><li><p>Clients can't really audit logs themselves, since this would expose their browsing history (i.e., the servers they wanted to connect to) to the log operators.</p></li></ol><p>The solution to both problems is to include a signature from the CT log along with the certificate. The signature is produced immediately in response to a request to log a certificate, and attests to the log's intent to include the certificate in the log within 24 hours.</p><p>Per browser policy, certificate transparency adds <b>+2</b> signatures to the TLS handshake, one for each log. This brings us to a total of <b>5 signatures and 2 public keys</b> in a typical handshake on the public web.</p>
    <div>
      <h3>The future WebPKI</h3>
      <a href="#the-future-webpki">
        
      </a>
    </div>
    <p>The WebPKI is a living, breathing, and highly distributed system. We've had to patch it a number of times over the years to keep it going, but on balance it has served our needs quite well — until now.</p><p>Previously, whenever we needed to update something in the WebPKI, we would tack on another signature. This strategy has worked because conventional cryptography is so cheap. But <b>5 signatures and 2 public keys</b> on average for each TLS handshake is simply too much once the much larger PQ signatures arrive.</p><p>The good news is that by moving what we already have around in clever ways, we can drastically reduce the number of signatures we need.</p>
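    <p>A quick back-of-the-envelope calculation, using the per-handshake counts tallied above and the algorithm sizes quoted at the start of this post, shows why that total stops being affordable:</p>

```python
# Per-handshake authentication overhead, using the counts from this post:
# 5 signatures and 2 public keys per typical handshake.
SIGS, KEYS = 5, 2

ecdsa_sig, ecdsa_key = 64, 64      # ECDSA-P256 sizes quoted above, in bytes
mldsa_sig, mldsa_key = 2420, 1312  # ML-DSA-44 sizes quoted above, in bytes

classical = SIGS * ecdsa_sig + KEYS * ecdsa_key
post_quantum = SIGS * mldsa_sig + KEYS * mldsa_key

print(classical)     # 448 bytes today
print(post_quantum)  # 14724 bytes: well into tens-of-kilobytes territory
```

    <p>Roughly half a kilobyte of authentication data today balloons to about 14 KB with a drop-in PQ swap, which is the overhead MTCs are designed to avoid.</p>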
    <div>
      <h3>Crash course on Merkle Tree Certificates</h3>
      <a href="#crash-course-on-merkle-tree-certificates">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a> is a proposal for the next generation of the WebPKI that we are implementing and plan to deploy on an experimental basis. Its key features are as follows:</p><ol><li><p>All the information a client needs to validate a Merkle Tree Certificate can be disseminated out-of-band. If the client is sufficiently up-to-date, then the TLS handshake needs just <b>1 signature, 1 public key, and 1 Merkle tree inclusion proof</b>. This is quite small, even if we use post-quantum algorithms.</p></li><li><p>The MTC specification makes certificate transparency a first class feature of the PKI by having each CA run its own log of exactly the certificates they issue.</p></li></ol><p>Let's poke our head under the hood a little. Below we have an MTC generated by one of our internal tests. This would be transmitted from the server to the client in the TLS handshake:</p>
            <pre><code>-----BEGIN CERTIFICATE-----
MIICSzCCAUGgAwIBAgICAhMwDAYKKwYBBAGC2ksvADAcMRowGAYKKwYBBAGC2ksv
AQwKNDQzNjMuNDguMzAeFw0yNTEwMjExNTMzMjZaFw0yNTEwMjgxNTMzMjZaMCEx
HzAdBgNVBAMTFmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wWTATBgcqhkjOPQIBBggq
hkjOPQMBBwNCAARw7eGWh7Qi7/vcqc2cXO8enqsbbdcRdHt2yDyhX5Q3RZnYgONc
JE8oRrW/hGDY/OuCWsROM5DHszZRDJJtv4gno2wwajAOBgNVHQ8BAf8EBAMCB4Aw
EwYDVR0lBAwwCgYIKwYBBQUHAwEwQwYDVR0RBDwwOoIWY2xvdWRmbGFyZXJlc2Vh
cmNoLmNvbYIgc3RhdGljLWN0LmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wDAYKKwYB
BAGC2ksvAAOB9QAAAAAAAAACAAAAAAAAAAJYAOBEvgOlvWq38p45d0wWTPgG5eFV
wJMhxnmDPN1b5leJwHWzTOx1igtToMocBwwakt3HfKIjXYMO5CNDOK9DIKhmRDSV
h+or8A8WUrvqZ2ceiTZPkNQFVYlG8be2aITTVzGuK8N5MYaFnSTtzyWkXP2P9nYU
Vd1nLt/WjCUNUkjI4/75fOalMFKltcc6iaXB9ktble9wuJH8YQ9tFt456aBZSSs0
cXwqFtrHr973AZQQxGLR9QCHveii9N87NXknDvzMQ+dgWt/fBujTfuuzv3slQw80
mibA021dDCi8h1hYFQAA
-----END CERTIFICATE-----</code></pre>
            <p>Looks like your average PEM encoded certificate. Let's decode it and look at the parameters:</p>
            <pre><code>$ openssl x509 -in merkle-tree-cert.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 531 (0x213)
        Signature Algorithm: 1.3.6.1.4.1.44363.47.0
        Issuer: 1.3.6.1.4.1.44363.47.1=44363.48.3
        Validity
            Not Before: Oct 21 15:33:26 2025 GMT
            Not After : Oct 28 15:33:26 2025 GMT
        Subject: CN=cloudflareresearch.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:70:ed:e1:96:87:b4:22:ef:fb:dc:a9:cd:9c:5c:
                    ef:1e:9e:ab:1b:6d:d7:11:74:7b:76:c8:3c:a1:5f:
                    94:37:45:99:d8:80:e3:5c:24:4f:28:46:b5:bf:84:
                    60:d8:fc:eb:82:5a:c4:4e:33:90:c7:b3:36:51:0c:
                    92:6d:bf:88:27
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:cloudflareresearch.com, DNS:static-ct.cloudflareresearch.com
    Signature Algorithm: 1.3.6.1.4.1.44363.47.0
    Signature Value:
        00:00:00:00:00:00:02:00:00:00:00:00:00:00:02:58:00:e0:
        44:be:03:a5:bd:6a:b7:f2:9e:39:77:4c:16:4c:f8:06:e5:e1:
        55:c0:93:21:c6:79:83:3c:dd:5b:e6:57:89:c0:75:b3:4c:ec:
        75:8a:0b:53:a0:ca:1c:07:0c:1a:92:dd:c7:7c:a2:23:5d:83:
        0e:e4:23:43:38:af:43:20:a8:66:44:34:95:87:ea:2b:f0:0f:
        16:52:bb:ea:67:67:1e:89:36:4f:90:d4:05:55:89:46:f1:b7:
        b6:68:84:d3:57:31:ae:2b:c3:79:31:86:85:9d:24:ed:cf:25:
        a4:5c:fd:8f:f6:76:14:55:dd:67:2e:df:d6:8c:25:0d:52:48:
        c8:e3:fe:f9:7c:e6:a5:30:52:a5:b5:c7:3a:89:a5:c1:f6:4b:
        5b:95:ef:70:b8:91:fc:61:0f:6d:16:de:39:e9:a0:59:49:2b:
        34:71:7c:2a:16:da:c7:af:de:f7:01:94:10:c4:62:d1:f5:00:
        87:bd:e8:a2:f4:df:3b:35:79:27:0e:fc:cc:43:e7:60:5a:df:
        df:06:e8:d3:7e:eb:b3:bf:7b:25:43:0f:34:9a:26:c0:d3:6d:
        5d:0c:28:bc:87:58:58:15:00:00</code></pre>
            <p>While some of the parameters probably look familiar, others will look unusual. On the familiar side, the subject and public key are exactly what we might expect: the DNS name is <code>cloudflareresearch.com</code> and the public key is for a familiar signature algorithm, ECDSA-P256. This algorithm is not PQ, of course — in the future we would put ML-DSA-44 there instead.</p><p>On the unusual side, OpenSSL appears to not recognize the signature algorithm of the issuer and just prints the raw OID and bytes of the signature. There's a good reason for this: the MTC does not have a signature in it at all! So what exactly are we looking at?</p><p>The trick to leave out signatures is that a Merkle Tree Certification Authority (MTCA) produces its <i>signatureless</i> certificates <i>in batches</i> rather than individually. In place of a signature, the certificate has an <b>inclusion proof</b> of the certificate in a batch of certificates signed by the MTCA.</p><p>To understand how inclusion proofs work, let's think about a slightly simplified version of the MTC specification. To issue a batch, the MTCA arranges the unsigned certificates into a data structure called a <b>Merkle tree</b> that looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LGhISsS07kbpSgDkqx8p2/68e3b36deeca7f97139654d2c769df68/image3.png" />
          </figure><p>Each leaf of the tree corresponds to a certificate, and each inner node is equal to the hash of its children. To sign the batch, the MTCA uses its secret key to sign the head of the tree. The structure of the tree guarantees that each certificate in the batch was signed by the MTCA: if we tried to tweak the bits of any one of the certificates, the treehead would end up having a different value, which would cause the signature check to fail.</p><p>An inclusion proof for a certificate consists of the hash of each sibling node along the path from the certificate to the treehead:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UZZHkRwsBLWXRYeop4rXv/8598cde48c27c112bc4992889f3d5799/image1.gif" />
          </figure><p>Given a validated treehead, this sequence of hashes is sufficient to prove inclusion of the certificate in the tree. This means that, in order to validate an MTC, the client also needs to obtain the signed treehead from the MTCA.</p><p>This is the key to MTC's efficiency:</p><ol><li><p>Signed treeheads can be disseminated to clients out-of-band and validated offline. Each validated treehead can then be used to validate any certificate in the corresponding batch, eliminating the need to obtain a signature for each server certificate.</p></li><li><p>During the TLS handshake, the client tells the server which treeheads it has. If the server has a signatureless certificate covered by one of those treeheads, then it can use that certificate to authenticate itself. That's <b>1 signature, 1 public key, and 1 inclusion proof</b> per handshake for the server being authenticated.</p></li></ol><p>Now, that's the simplified version. MTC proper has some more bells and whistles. To start, it doesn’t create a separate Merkle tree for each batch, but grows a single large tree, which improves transparency. As this tree grows, subtree heads are periodically selected to be shipped to browsers; we call these <b>landmarks</b>. In the common case browsers will be able to fetch the most recent landmarks, and servers can wait for batch issuance, but we need a fallback: MTC also supports certificates that can be issued immediately and don’t require landmarks to be validated, but these are not as small. A server would provision both types of Merkle tree certificates, so that the common case is fast and the exceptional case, while slower, still works.</p>
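    <p>The batch construction and inclusion-proof check described above can be sketched in a few lines of Python. This is a minimal sketch, assuming a power-of-two batch size and glossing over MTC's landmark machinery; the function names are ours, and the 0x00/0x01 prefixes domain-separate leaves from inner nodes as in RFC 6962:</p>

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(certs):
    """Arrange a batch of unsigned certificates into a Merkle tree.
    Assumes a power-of-two batch size for simplicity."""
    levels = [[h(b"\x00" + c) for c in certs]]  # leaf hashes
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(b"\x01" + prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels  # levels[-1][0] is the treehead the MTCA signs

def inclusion_proof(levels, index):
    """Hash of each sibling on the path from leaf `index` to the treehead."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return proof

def verify_inclusion(cert, index, proof, treehead):
    """Client side: recompute the path to the head using only the proof."""
    node = h(b"\x00" + cert)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == treehead

batch = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
tree = build_tree(batch)
treehead = tree[-1][0]  # shipped out-of-band with the MTCA's signature
proof = inclusion_proof(tree, 2)
assert verify_inclusion(b"cert-c", 2, proof, treehead)
assert not verify_inclusion(b"cert-x", 2, proof, treehead)  # any tweak changes the head
```

    <p>Note the proof size: for a batch of N certificates the proof is only log2(N) hashes, which is why even very large batches stay cheap to transmit in the handshake.</p>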
    <div>
      <h2>Experimental deployment</h2>
      <a href="#experimental-deployment">
        
      </a>
    </div>
    <p>Ever since early designs for MTCs emerged, we’ve been eager to experiment with the idea. In line with the IETF principle of “<a href="https://www.ietf.org/runningcode/"><u>running code</u></a>”, it often takes implementing a protocol to work out kinks in the design. At the same time, we cannot risk the security of users. In this section, we describe our approach to experimenting with aspects of the Merkle Tree Certificates design <i>without</i> changing any trust relationships.</p><p>Let’s start with what we hope to learn. We have lots of questions whose answers can help to either validate the approach, or uncover pitfalls that require reshaping the protocol — in fact, an implementation of an early MTC draft by <a href="https://www.cs.ru.nl/masters-theses/2025/M_Pohl___Implementation_and_Analysis_of_Merkle_Tree_Certificates_for_Post-Quantum_Secure_Authentication_in_TLS.pdf"><u>Maximilian Pohl</u></a> and <a href="https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-07.html#name-acknowledgements"><u>Mia Celeste</u></a> did exactly this. We’d like to know:</p><p><b>What breaks?</b> Protocol ossification (the tendency of implementation bugs to make it harder to change a protocol) is an ever-present issue with deploying protocol changes. For TLS in particular, despite having built-in flexibility, time after time we’ve found that if that flexibility is not regularly used, there will be buggy implementations and middleboxes that break when they see things they don’t recognize. TLS 1.3 deployment <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>took years longer</u></a> than we hoped for this very reason. 
And more recently, the rollout of PQ key exchange in TLS caused the Client Hello to be split over multiple TCP packets, something that many middleboxes <a href="https://tldr.fail/"><u>weren't ready for</u></a>.</p><p><b>What is the performance impact?</b> In fact, we expect MTCs to <i>reduce </i>the size of the handshake, even compared to today's non-PQ certificates. They will also reduce CPU cost: ML-DSA signature verification is about as fast as ECDSA, and there will be far fewer signatures to verify. We therefore expect to see a <i>reduction in latency</i>. We would like to see if there is a measurable performance improvement.</p><p><b>What fraction of clients will stay up to date? </b>Getting the performance benefit of MTCs requires the clients and servers to be roughly in sync with one another. We expect MTCs to have fairly short lifetimes, a week or so. This means that if the client's latest landmark is older than a week, the server would have to fall back to a larger certificate. Knowing how often this fallback happens will help us tune the parameters of the protocol to make fallbacks less likely.</p><p>In order to answer these questions, we are implementing MTC support in our TLS stack and in our certificate issuance infrastructure. For their part, Chrome is implementing MTC support in their own TLS stack and will stand up infrastructure to disseminate landmarks to their users.</p><p>As we've done in past experiments, we plan to enable MTCs for a subset of our free customers with enough traffic that we will be able to get useful measurements. Chrome will control the experimental rollout: they can ramp up slowly, measuring as they go and rolling back if and when bugs are found.</p><p>That leaves us with one last question: who will run the Merkle Tree CA?</p>
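    <p>The fallback behavior we want to measure can be sketched roughly as follows. This is a hypothetical illustration with names of our own choosing, not the actual negotiation logic, which lives inside the TLS stack:</p>

```python
def select_credential(client_landmarks, mtc_creds, fallback_cred):
    """Hypothetical sketch of the server's choice: prefer a signatureless
    MTC the client can already validate; otherwise fall back to the
    larger, always-valid certificate."""
    for cred in mtc_creds:
        if cred["landmark"] in client_landmarks:
            return cred  # fast path: 1 signature + 1 inclusion proof
    return fallback_cred  # slow path: bigger, but always works

server_creds = [{"kind": "mtc", "landmark": "landmark-42"}]
fallback = {"kind": "classic-chain"}

# An up-to-date client advertises a landmark covering the server's MTC.
assert select_credential({"landmark-41", "landmark-42"},
                         server_creds, fallback)["kind"] == "mtc"
# A client with only stale landmarks forces the fallback.
assert select_credential({"landmark-17"},
                         server_creds, fallback)["kind"] == "classic-chain"
```

    <p>Measuring how often the second branch is taken in practice is exactly the data needed to tune landmark lifetimes.</p>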
    <div>
      <h3>Bootstrapping trust from the existing WebPKI</h3>
      <a href="#bootstrapping-trust-from-the-existing-webpki">
        
      </a>
    </div>
    <p>Standing up a proper CA is no small task: it takes years to be trusted by major browsers. That’s why Cloudflare isn’t going to become a “real” CA for this experiment, and Chrome isn’t going to trust us directly.</p><p>Instead, to make progress in a reasonable timeframe without sacrificing due diligence, we plan to "mock" the role of the MTCA. We will run an MTCA (on <a href="https://github.com/cloudflare/azul/"><u>Workers</u></a> based on our <a href="https://blog.cloudflare.com/azul-certificate-transparency-log/"><u>StaticCT logs</u></a>), but for each MTC we issue, we also publish an existing certificate from a trusted CA that agrees with it. We call this the <b>bootstrap certificate</b>. When Chrome’s infrastructure pulls updates from our MTCA log, they will also pull these bootstrap certificates, and check whether they agree. Only if they do will they proceed to push the corresponding landmarks to Chrome clients. In other words, Cloudflare is effectively just “re-encoding” an existing certificate (with domain validation performed by a trusted CA) as an MTC, and Chrome is using certificate transparency to keep us honest.</p>
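    <p>Conceptually, the agreement check looks something like the sketch below. The field names and structure here are entirely ours, chosen for illustration; the real check operates on parsed certificates, not dictionaries:</p>

```python
def bootstrap_agrees(mtc_entry, bootstrap_cert):
    """Hypothetical sketch: accept a log entry only if the MTC claims
    nothing the CA-issued bootstrap certificate doesn't vouch for."""
    return (set(mtc_entry["dns_names"]) <= set(bootstrap_cert["dns_names"])
            and mtc_entry["public_key"] == bootstrap_cert["public_key"]
            # ISO dates compare correctly as strings
            and mtc_entry["not_after"] <= bootstrap_cert["not_after"])

mtc = {"dns_names": ["cloudflareresearch.com"],
       "public_key": "pk-1", "not_after": "2025-10-28"}
boot = {"dns_names": ["cloudflareresearch.com",
                      "static-ct.cloudflareresearch.com"],
        "public_key": "pk-1", "not_after": "2025-12-01"}

assert bootstrap_agrees(mtc, boot)  # agreement: landmark may be pushed
assert not bootstrap_agrees({**mtc, "public_key": "pk-2"}, boot)  # reject
```

    <p>The design choice worth noting is that the checker only needs public artifacts (the MTCA log and the bootstrap certificates), so anyone, not just Chrome, can audit that we are re-encoding faithfully.</p>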
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>With almost 50% of our traffic already protected by post-quantum encryption, we’re halfway to a fully post-quantum secure Internet. The second part of our journey, post-quantum certificates, is the hardest yet, though. A simple drop-in upgrade has a noticeable performance impact and no security benefit before Q-day. This means it’s a hard sell to enable today by default. But waiting is playing with fire: migrations always take longer than expected. If we want to keep a ubiquitously private and secure Internet, we need a post-quantum solution that’s performant enough to be enabled by default <b>today</b>.</p><p>Merkle Tree Certificates (MTCs) solve this problem by reducing the number of signatures and public keys to the bare minimum while maintaining the WebPKI's essential properties. We plan to roll out MTCs to a fraction of free accounts by early next year. This does not affect any visitors that are not part of the Chrome experiment. For those that are, thanks to the bootstrap certificates, there is no impact on security.</p><p>We’re excited to keep the Internet fast <i>and</i> secure, and will report back soon on the results of this experiment: watch this space! MTC is evolving as we speak; if you want to get involved, please join the IETF <a href="https://mailman3.ietf.org/mailman3/lists/plants@ietf.org/"><u>PLANTS mailing list</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4jURWdZzyjdrcurJ4LlJ1z</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[A next-generation Certificate Transparency log built on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/azul-certificate-transparency-log/</link>
            <pubDate>Fri, 11 Apr 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Learn about recent developments in Certificate Transparency (CT), and how we built a next-generation CT log on top of Cloudflare's Developer Platform. ]]></description>
            <content:encoded><![CDATA[ <p>Any public <a href="https://en.wikipedia.org/wiki/Certificate_authority"><u>certification authority (CA)</u></a> can issue a <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><u>certificate</u></a> for any website on the Internet to allow a webserver to authenticate itself to connecting clients. Take a moment to scroll through the list of trusted CAs for your web browser (e.g., <a href="https://chromium.googlesource.com/chromium/src/+/main/net/data/ssl/chrome_root_store/test_store.certs"><u>Chrome</u></a>). You may recognize (and even trust) some of the names on that list, but it should make you uncomfortable that <i>any</i> CA on that list could issue a certificate for any website, and your browser would trust it. It’s a castle with 150 doors.</p><p><a href="https://datatracker.ietf.org/doc/html/rfc6962"><u>Certificate Transparency (CT)</u></a> plays a vital role in the <a href="https://datatracker.ietf.org/wg/wpkops/about/"><u>Web Public Key Infrastructure (WebPKI)</u></a>, the set of systems, policies, and procedures that help to establish trust on the Internet. CT ensures that all website certificates are <a href="https://crt.sh"><u>publicly visible</u></a> and <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/certificate-transparency-monitoring/"><u>auditable</u></a>, helping to protect website operators from certificate mis-issuance by dishonest CAs, and helping honest CAs to detect key compromise and other failures.</p><p>In this post, we’ll discuss the history, evolution, and future of the CT ecosystem. We’ll cover some of the challenges we and others have faced in operating CT logs, and how the new <a href="https://c2sp.org/static-ct-api"><u>static CT API</u></a> log design lowers the bar for operators, helping to ensure that this critical infrastructure keeps up with the fast growth and changing landscape of the Internet and WebPKI. 
We’re excited to open source our <a href="https://github.com/cloudflare/azul"><u>Rust implementation</u></a> of the new log design, built for deployment on Cloudflare’s Developer Platform, and to announce <a href="https://github.com/cloudflare/azul/tree/main/crates/ct_worker#test-logs"><u>test logs</u></a> deployed using this infrastructure.</p>
    <div>
      <h2>What is Certificate Transparency?</h2>
      <a href="#what-is-certificate-transparency">
        
      </a>
    </div>
    <p>In 2011, the Dutch CA DigiNotar was <a href="https://threatpost.com/final-report-diginotar-hack-shows-total-compromise-ca-servers-103112/77170/"><u>hacked</u></a>, allowing attackers to forge a certificate for *.google.com and use it to impersonate Gmail to targeted Iranian users in an attempt to compromise personal information. Google caught this because they used <a href="https://developers.cloudflare.com/ssl/reference/certificate-pinning/"><u>certificate pinning</u></a>, but that technique <a href="https://blog.cloudflare.com/why-certificate-pinning-is-outdated/"><u>doesn’t scale well</u></a> for the web. This, among other similar attacks, led a team at Google in 2013 to develop Certificate Transparency (CT) as a mechanism to catch mis-issued certificates. CT creates a public audit trail of all certificates issued by public CAs, helping to protect users and website owners by holding <a href="https://sslmate.com/resources/certificate_authority_failures"><u>CAs accountable</u></a> for the certificates they issue (even unwittingly, in the event of key compromise or software bugs). CT has been a great success: since 2013, over <a href="https://crt.sh/cert-populations"><u>17 billion</u></a> certificates have been logged, and CT was awarded the prestigious <a href="https://blog.transparency.dev/certificate-transparency-wins-the-levchin-prize"><u>Levchin Prize</u></a> in 2024 for its role as a critical safety mechanism for the Internet.</p><p>Let’s take a brief look at the entities involved in the CT ecosystem. 
</p><p><i>Certification Authorities (CAs)</i> are organizations entrusted to issue certificates on behalf of website operators, who in turn use those certificates to authenticate themselves to connecting clients.</p><p><i>CT-enforcing clients</i> like the <a href="https://googlechrome.github.io/CertificateTransparency/ct_policy.html"><u>Chrome</u></a>, <a href="https://support.apple.com/en-us/103214"><u>Safari</u></a>, and <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Certificate_Transparency"><u>Firefox</u></a> browsers are web clients that only accept certificates compliant with their CT policies. For example, a policy might require that a certificate includes proof that it has been submitted to at least two independently-operated public CT logs.</p><p><i>Log operators</i> run CT logs, which are public, append-only lists of certificates. CAs and other clients can submit a certificate to a CT log to obtain a “promise” from the CT log that it will incorporate the entry into the append-only log within some grace period. CT logs periodically (every few seconds, typically) update their log state to incorporate batches of new entries, and publish a signed checkpoint that attests to the new state.</p><p><i>Monitors</i> are third parties that continuously crawl CT logs and check that their behavior is correct. For instance, they verify that a log is self-consistent and append-only by ensuring that when new entries are added to the log, no previous entries are deleted or modified. Monitors may also examine logged certificates to help website operators detect mis-issuance.</p><p>Cloudflare itself operates the <a href="https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/"><u>Nimbus CT logs</u></a> and the CT monitor powering the <a href="https://blog.cloudflare.com/a-tour-through-merkle-town-cloudflares-ct-ecosystem-dashboard/"><u>Merkle Town</u></a> <a href="https://ct.cloudflare.com"><u>dashboard</u></a>.</p>
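The append-only structure that monitors audit is a Merkle tree. As an illustration of the underlying hashing, here is a minimal Rust sketch of the tree-head recursion from RFC 6962, section 2.1; the `hash` parameter is a stand-in (a real log uses SHA-256), and the function name is ours:

```rust
/// Sketch of the Merkle Tree Head recursion from RFC 6962, section 2.1.
/// Leaf inputs are hashed with a 0x00 prefix, interior nodes with a 0x01
/// prefix, and a list of n > 1 entries splits at k, the largest power of
/// two strictly smaller than n. The `hash` parameter is a stand-in for
/// SHA-256 so the structure is the focus here.
fn merkle_tree_head(entries: &[&[u8]], hash: &dyn Fn(&[u8]) -> Vec<u8>) -> Vec<u8> {
    match entries.len() {
        // The hash of an empty list is the hash of the empty string.
        0 => hash(b""),
        1 => {
            let mut buf = vec![0x00u8];
            buf.extend_from_slice(entries[0]);
            hash(&buf)
        }
        n => {
            // Largest power of two strictly smaller than n.
            let k = 1usize << (usize::BITS - 1 - (n - 1).leading_zeros());
            let mut buf = vec![0x01u8];
            buf.extend(merkle_tree_head(&entries[..k], hash));
            buf.extend(merkle_tree_head(&entries[k..], hash));
            hash(&buf)
        }
    }
}
```

An inclusion or consistency proof is just the set of sibling subtree hashes a verifier needs to re-run this recursion up to a published checkpoint.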
    <div>
      <h2>Challenges in operating a CT log</h2>
      <a href="#challenges-in-operating-a-ct-log">
        
      </a>
    </div>
    <p>Despite the success of CT, it is a less than perfect system. Eric Rescorla has an <a href="https://educatedguesswork.org/posts/transparency-part-2/"><u>excellent writeup</u></a> on the many compromises made to make CT deployable on the Internet of 2013. We’ll focus on the operational complexities of running a CT log.</p><p>Let’s look at the requirements for running a CT log from <a href="https://googlechrome.github.io/CertificateTransparency/log_policy.html#ongoing-requirements-of-included-logs"><u>Chrome’s CT log policy</u></a> (which are more or less mirrored by those of <a href="https://support.apple.com/en-us/103703"><u>Safari</u></a> and <a href="https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/lypRGp4JGGE"><u>Firefox</u></a>), and what can go wrong. The requirements center around <b>integrity</b> and <b>availability</b>.</p><p>To be considered a trusted auditing source, CT logs necessarily have stringent <b>integrity</b> requirements. Anything the log produces must be correct and self-consistent, meaning that a CT log cannot present two different views of the log to different clients, and must present a consistent history for its entire lifetime. Similarly, when a CT log accepts a certificate and promises to incorporate it by returning a Signed Certificate Timestamp (SCT) to the client, it must eventually incorporate that certificate into its append-only log.</p><p>The integrity requirements are unforgiving. A single bit-flip due to a hardware failure or cosmic ray can cause (<a href="https://www.agwa.name/blog/post/how_ct_logs_fail"><u>and</u></a> <a href="https://groups.google.com/a/chromium.org/g/ct-policy/c/R27Zy9U5NjM"><u>has</u></a> caused) logs to produce incorrect results and thus be disqualified by CT programs. Even software updates to running logs can be fatal, as a change that causes a correctness violation cannot simply be rolled back. 
Perhaps the <a href="https://github.com/C2SP/C2SP/issues/79"><u>greatest risk</u></a> to an individual log’s integrity is <a href="https://groups.google.com/a/chromium.org/g/ct-policy/c/W1Ty2gO0JNA"><u>failing to incorporate certificates</u></a> for which it has issued SCTs, for example by failing to commit those pending certificates to durable storage. See Andrew Ayer’s <a href="https://www.agwa.name/blog/post/how_ct_logs_fail"><u>great synopsis</u></a> for more examples of CT log failures (up to 2021).</p><p>A CT log must also meet certain <b>availability</b> requirements to effectively provide its core functionality as a publicly auditable log. Clients must be able to reliably retrieve log data — Chrome’s policy requires a minimum of 99% average uptime over a 90-day rolling period for each API endpoint — and any entries for which an SCT has been issued must be incorporated into the log within the grace period, called the Maximum Merge Delay (MMD), which Chrome sets at 24 hours.</p><p>The design of the current CT log read APIs puts strain on the ability of log operators to meet uptime requirements. The API endpoints are <i>dynamic</i> and not easily cacheable without bespoke caching rules that are aware of the CT API. For instance, the <a href="https://datatracker.ietf.org/doc/html/rfc6962#section-4.6"><u>get-entries</u></a> endpoint allows a client to request arbitrary ranges of entries from a log, and the <a href="https://datatracker.ietf.org/doc/html/rfc6962#section-4.5"><u>get-proof-by-hash</u></a> endpoint requires the server to construct inclusion proofs for any certificate requested by the client. To serve these requests, CT log servers need to be backed by databases easily 5-10 TB in size, capable of serving tens of millions of requests per day. This increases operator complexity and expense, not to mention the high bandwidth cost of serving these requests.</p><p>MMD violations are unfortunately not uncommon. 
Cloudflare’s own Nimbus logs have experienced prolonged outages in the past, most recently in <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>November 2023</u></a> due to complete power loss in the datacenter running the logs. During normal log operation, if the log accepts entries more quickly than it incorporates them, the backlog can grow to exceed the MMD. Log operators can remedy this by rate-limiting or temporarily disabling the write APIs, but this can in turn contribute to violations of the uptime requirements.</p><p>The high bar for log operation has limited the organizations operating CT logs to only <a href="https://ct.cloudflare.com/logs"><u>Cloudflare and five others</u></a>! Losing one or two logs is enough to compromise the stability of the CT ecosystem. Clearly, a change is needed.</p>
    <div>
      <h2>A next-generation CT log design</h2>
      <a href="#a-next-generation-ct-log-design">
        
      </a>
    </div>
    <p>In March 2024, Let’s Encrypt <a href="https://letsencrypt.org/2024/03/14/introducing-sunlight/"><u>announced</u></a> <a href="https://github.com/FiloSottile/sunlight"><u>Sunlight</u></a>, an implementation of a next-generation CT log designed for the modern WebPKI, incorporating a decade of lessons learned from running CT and similar transparency systems. The new CT log design, called the <a href="https://c2sp.org/static-ct-api"><u>static CT API</u></a>, is partially based on the <a href="https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md"><u>Go checksum database</u></a>, and organizes log data as a series of <a href="https://research.swtch.com/tlog#tiling_a_log"><u>tiles</u></a> that are easy to cache and serve. The new design provides efficiency improvements that cut operation costs, help logs to meet availability requirements, and reduce the risk of integrity violations.</p><p>The static CT API is split into two parts: the <a href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md#monitoring-apis"><b><u>monitoring APIs</u></b></a> (so named because CT monitors are the primary clients), and the <a href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md#submission-apis"><b><u>submission APIs</u></b></a> for adding new certificates to the log.</p><p>The <b>monitoring APIs</b> replace the dynamic read APIs of <a href="https://datatracker.ietf.org/doc/html/rfc6962#section-4"><u>RFC 6962</u></a>, and organize log data into static, cacheable tiles. (See <a href="https://research.swtch.com/tlog#tiling_a_log"><u>Russ Cox’s blog post</u></a> for an in-depth explanation of tiled logs.) CT log operators can efficiently serve static tiles from <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible object storage buckets</a> and cache them using CDN infrastructure, without needing dedicated API servers. 
Clients can then download the necessary tiles to retrieve specific log entries or reconstruct arbitrary proofs.</p><p>The static CT API introduces another efficiency by deduplicating intermediate and root “issuer” certificates in a log entry’s certificate chain. The number of publicly-trusted issuer certificates is small (<a href="https://www.ccadb.org/"><u>in the low thousands</u></a>), so instead of storing them repeatedly for each log entry, only the issuer hash is stored. Clients can look up issuer certificates by hash from a <a href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md#issuers"><u>separate endpoint</u></a>.</p><p>The <b>submission APIs</b> remain backwards-compatible with <a href="https://datatracker.ietf.org/doc/html/rfc6962#section-4"><u>RFC 6962</u></a>, meaning that TLS clients and CAs can submit to them without any changes. However, there is one notable addition: the static CT specification requires a log to hold on to requests while it batches and sequences them, and to respond with an SCT only after entries have been incorporated into the log. The specification defines a <a href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md#sct-extension"><u>required SCT extension</u></a> indicating the entry’s index in the log. At the cost of slightly delayed SCT issuance (on the order of seconds), this change eliminates one of the major pain points of operating a CT log (the Merge Delay).</p><p>Having the log <i>index</i> of a certificate available in an SCT enables further efficiencies. <i>SCT auditing</i> refers to the process by which TLS clients or monitors can check if a log has fulfilled its promise to incorporate a certificate for which it has issued an SCT. 
In the RFC 6962 API, checking if a certificate is present in a log when you don’t already know the index requires using the <a href="https://datatracker.ietf.org/doc/html/rfc6962#section-4.5"><u>get-proof-by-hash</u></a> endpoint to look up the entry by the certificate hash (and the server needs to maintain a mapping from hash to index to efficiently serve these requests). Instead, with the index immediately available in the SCT, clients can directly retrieve the specific log data tile covering that index, even with <a href="https://transparency.dev/summit2024/sct-auditing.html"><u>efficient privacy-preserving techniques</u></a>.</p><p>Since it was announced, the static CT API has taken the CT ecosystem by storm. Aside from <a href="https://github.com/FiloSottile/sunlight"><u>Sunlight</u></a> and our brand new <a href="https://github.com/cloudflare/azul"><u>Azul</u></a> (discussed below), there are at least two other independent implementations, <a href="https://blog.transparency.dev/i-built-a-new-certificate-transparency-log-in-2024-heres-what-i-learned"><u>Itko</u></a> and <a href="https://blog.transparency.dev/introducing-trillian-tessera"><u>Trillian Tessera</u></a>. Several CT monitors (including <a href="https://crt.sh"><u>crt.sh</u></a>, <a href="https://sslmate.com/certspotter/"><u>certspotter</u></a>, <a href="https://censys.com/"><u>Censys</u></a>, and our own <a href="https://ct.cloudflare.com"><u>Merkle Town</u></a>) have added support for the new log format, and as of April 1, 2025, Chrome has begun accepting submissions for <a href="https://groups.google.com/a/chromium.org/g/ct-policy/c/HBFZHG0TCsY/m/HAaVRK6MAAAJ"><u>static CT API logs</u></a> into their CT log program.</p>
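For instance, because the SCT extension carries the entry’s index, a client can compute which level-0 tile to fetch with simple arithmetic. The sketch below assumes the tlog-tiles conventions the static CT API builds on (256 entries per tile; tile indices path-encoded in groups of three decimal digits, all but the last prefixed with `x`); consult the C2SP specifications for the authoritative encoding:

```rust
/// Map a log entry index to the path of the level-0 tile that covers it,
/// assuming 256-entry (height-8) tiles and base-1000 path encoding as in
/// the tlog-tiles design. Illustrative sketch, not Azul's actual API.
fn tile_path_for_entry(index: u64) -> String {
    const TILE_WIDTH: u64 = 256; // 2^8 leaves per level-0 tile
    let tile = index / TILE_WIDTH;
    let mut digits = tile.to_string();
    // Left-pad with zeros to a multiple of three digits.
    while digits.len() % 3 != 0 {
        digits.insert(0, '0');
    }
    // Split into three-digit groups; prefix all but the last with 'x'.
    let groups: Vec<&str> = digits
        .as_bytes()
        .chunks(3)
        .map(|c| std::str::from_utf8(c).unwrap())
        .collect();
    let last = groups.len() - 1;
    let encoded: Vec<String> = groups
        .iter()
        .enumerate()
        .map(|(i, g)| if i < last { format!("x{g}") } else { g.to_string() })
        .collect();
    format!("tile/0/{}", encoded.join("/"))
}
```

Because the mapping is pure arithmetic over static paths, any client (or cache) can compute it without a round trip to a dynamic API server.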
    <div>
      <h2>A static CT API implementation on Workers</h2>
      <a href="#a-static-ct-api-implementation-on-workers">
        
      </a>
    </div>
    <p>This section discusses how we designed and built our static CT log implementation, <a href="https://github.com/cloudflare/azul"><u>Azul</u></a> (short for <a href="https://en.wikipedia.org/wiki/Azulejo"><u>azulejos</u></a>, the colorful Portuguese and Spanish ceramic tiles). For curious readers and prospective CT log operators, we encourage you to follow the instructions in the repo to quickly set up your own static CT log. Questions and feedback in the form of GitHub issues are welcome!</p><p>Our two prototype logs, <a href="https://static-ct.cloudflareresearch.com/logs/cftest2025h1a/metadata"><u>Cloudflare Research 2025h1a</u></a> and <a href="https://static-ct.cloudflareresearch.com/logs/cftest2025h2a/metadata"><u>Cloudflare Research 2025h2a</u></a> (accepting certificates expiring in the first and second half of 2025, respectively), are available for testing.</p>
    <div>
      <h3>Design decisions and goals</h3>
      <a href="#design-decisions-and-goals">
        
      </a>
    </div>
    <p>The advent of the static CT API gave us the perfect opportunity to rethink how we run our CT logs. There were a few design decisions we made early on to shape the project.</p><p>First and foremost, we wanted to run our CT logs on our distributed global network. Especially after the <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>painful November 2023 control plane outage</u></a>, there’s been a push to deploy services on our highly available and resilient network instead of running in centralized datacenters.</p><p>Second, with Cloudflare’s deeply engrained culture of <a href="https://blog.cloudflare.com/tag/dogfooding/"><u>dogfooding</u></a> (building Cloudflare on top of Cloudflare), we decided to implement the CT log on top of Cloudflare’s Developer Platform and <a href="https://workers.cloudflare.com/"><u>Workers</u></a>. </p><p>Dogfooding gives us an opportunity to find pain points in our product offerings, and to provide feedback to our development teams to improve the developer experience for everyone. We restricted ourselves to only features and default limits generally available to customers, so that we could have the same experience as an external Cloudflare developer, and would produce an implementation that anyone could deploy.</p><p>Another major design decision was to implement the CT log in Rust, a modern systems programming language with static typing and built-in memory safety that is heavily used across Cloudflare, and which already has mature (if sometimes <a href="#developing-a-workers-application-in-rust"><u>lacking full feature parity</u></a>) <a href="https://github.com/cloudflare/workers-rs"><u>Workers bindings</u></a> that we have used to build <a href="https://blog.cloudflare.com/wasm-coredumps/"><u>several production services</u></a>. 
This also provided us with an opportunity to produce Rust crates porting <a href="https://pkg.go.dev/golang.org/x/mod/sumdb"><u>Go implementations</u></a> of various <a href="https://c2sp.org"><u>C2SP</u></a> specifications that can be reused across other projects.</p><p>For the new logs to be deployable, they needed to be at least as performant as existing CT logs. As a point of reference, the <a href="https://ct.cloudflare.com/logs/nimbus2025"><u>Nimbus2025</u></a> log currently handles just over 33 million requests per day (~380/s) across the read APIs, and about 6 million per day (~70/s) across the write APIs.</p>
    <div>
      <h3>Implementation </h3>
      <a href="#implementation">
        
      </a>
    </div>
    <p>We based Azul heavily on <a href="https://github.com/FiloSottile/sunlight"><u>Sunlight</u></a>, a Go application built for deployment as a standalone server. As such, this section serves as a reference for translating a traditional server to Cloudflare’s serverless platform.</p><p>To start, let’s briefly review the Sunlight architecture (described in more detail in the <a href="https://github.com/FiloSottile/sunlight/blob/main/README.md"><u>README</u></a> and <a href="https://filippo.io/a-different-CT-log"><u>original design doc</u></a>). A Sunlight instance is a single Go process, serving one or multiple CT logs. It is backed by three different storage locations with different properties:</p><ul><li><p>A “lock backend” which stores the current checkpoint for each log. This datastore needs to be strongly consistent, but only stores trivial amounts of data.</p></li><li><p>A per-log object storage bucket from which to serve tiles, checkpoints, and issuers to CT clients. This datastore needs to be strongly consistent, and to handle multiple terabytes of data.</p></li><li><p>A per-log deduplication cache, to return SCTs for previously-submitted (pre-)certificates. 
This datastore is best-effort (as duplicate entries are not fatal to log operation), and stores tens to hundreds of gigabytes of data.</p></li></ul><p>Two major components handle the bulk of the CT log application logic:</p><ul><li><p>A frontend HTTP server handles incoming requests to the submission APIs to add new certificates to the log, validates them, checks the deduplication cache, adds the certificate to a pool of entries to be sequenced, and waits for sequencing to complete before responding to the client.</p></li><li><p>The sequencer periodically (every 1s, by default) sequences the pool of pending entries, writes new tiles to the object backend, persists the latest checkpoint covering the new log state to the lock and object backends, and signals to waiting requests that the pool has been sequenced.</p></li></ul>
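As a toy model of this division of labor (the names and types are ours, not Sunlight’s actual API), the pooling and index-assignment logic looks roughly like:

```rust
/// Toy model of the frontend/sequencer split: the frontend adds pending
/// entries to a pool, and a timer-driven sequencer periodically drains
/// the pool, assigning each entry its index in the append-only log.
/// In the real system, sequencing also writes tiles and a new signed
/// checkpoint before waiting requests are released.
struct Sequencer {
    next_index: u64,
    pool: Vec<Vec<u8>>, // pending entries awaiting sequencing
}

impl Sequencer {
    fn new() -> Self {
        Sequencer { next_index: 0, pool: Vec::new() }
    }

    /// Called by the frontend for each validated submission.
    fn submit(&mut self, entry: Vec<u8>) {
        self.pool.push(entry);
    }

    /// Called on a timer (every second, by default): drains the pool and
    /// returns each entry paired with its newly assigned log index.
    fn sequence(&mut self) -> Vec<(u64, Vec<u8>)> {
        let base = self.next_index;
        let batch: Vec<(u64, Vec<u8>)> = self
            .pool
            .drain(..)
            .enumerate()
            .map(|(i, e)| (base + i as u64, e))
            .collect();
        self.next_index = base + batch.len() as u64;
        batch
    }
}
```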
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gLwzRo4Azbls2wvM12TJx/80d6f7aad1317f31dfe06a0c474ee93c/image5.png" />
          </figure><p><sup><i>A static CT API log running on a traditional server using the Sunlight implementation.</i></sup></p><p>Next, let’s look at how we can translate these components into ones suitable for deployment on Workers.</p>
    <div>
      <h4>Making it work</h4>
      <a href="#making-it-work">
        
      </a>
    </div>
    <p>Let’s start with the easy choices. The static CT <a href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md#monitoring-apis"><u>monitoring APIs</u></a> are designed to serve static, cacheable, compressible assets from object storage. The API should be highly available and have the capacity to serve any number of CT clients. The natural choice is <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2</u></a>, which provides globally consistent storage with capacity for <a href="https://developers.cloudflare.com/r2/platform/limits/"><u>large data volumes</u></a>, configurable caching and compression, and unbounded read operations.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qsC1dO8blS1eGOysu9WQa/75da37719be35824a7533dbbd62bede3/image4.png" />
          </figure><p><sup><i>A static CT API log running on Workers using a preliminary version of the Azul implementation which ran into performance limitations.</i></sup></p><p>The static CT <a href="https://github.com/C2SP/C2SP/blob/main/static-ct-api.md#submission-apis"><u>submission APIs</u></a> are where the real challenge lies. In particular, they allow CT clients to submit certificate chains to be incorporated into the append-only log. We used <a href="https://developers.cloudflare.com/learning-paths/workers/concepts/workers-concepts/"><u>Workers</u></a> as the frontend for the CT log application. Workers run in data centers close to the client, scaling on demand to handle request load, making them the ideal place to run the majority of the heavyweight request handling logic, including validating requests, checking the deduplication cache (discussed below), and submitting the entry to be sequenced.</p><p>The next question was where and how we’d run the backend to handle the CT log sequencing logic, which needs to be stateful and tightly coordinated. We chose <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects (DOs)</u></a>, a special type of stateful Cloudflare Worker where each instance has persistent storage and a unique name which can be used to route requests to it from anywhere in the world. DOs are designed to scale effortlessly for applications that can be easily broken up into self-contained units that do not need a lot of coordination across units. For example, a <a href="https://blog.cloudflare.com/introducing-workers-durable-objects/#demo-chat"><u>chat application</u></a> can use one DO to control each chat room. In our model, then, each CT log is controlled by a single DO. This architecture allows us to easily run multiple CT logs within a single Workers application, but as we’ll see, the limitations of <i>individual</i> single-threaded DOs can easily become a bottleneck. 
More on this later.</p><p>With the CT log backend as a Durable Object, several other components fell into place: Durable Objects’ <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>strongly-consistent transactional storage</u></a> neatly fit the requirements for the “lock backend” to persist the log’s latest checkpoint, and we can use an <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>alarm</u></a> to trigger the log sequencing every second. We can also use <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#provide-a-location-hint"><u>location hints</u></a> to place CT logs in locations geographically close to clients for reduced latency, similar to <a href="https://groups.google.com/g/certificate-transparency/c/I74Wp-KdWHc"><u>Google’s Argon and Xenon logs</u></a>.</p><p>The <a href="https://developers.cloudflare.com/workers/platform/storage-options/"><u>choice of datastore</u></a> for the deduplication cache proved to be non-obvious. The cache is best-effort, and intended to avoid re-sequencing entries that are already present in the log. The cache key is computed by hashing certain fields of the <code>add-[pre-]chain</code> request, and the cache value consists of the entry’s index in the log and the timestamp at which it was sequenced. At current log submission rates, the deduplication cache could grow in excess of <a href="https://github.com/FiloSottile/sunlight/tree/main?tab=readme-ov-file#operating-a-sunlight-log"><u>50 GB for 6 months of log data</u></a>. In the Sunlight implementation, the deduplication cache is implemented as a local SQLite database, where checks against it are tightly coupled with sequencing, which ensures that duplicates from in-flight requests are correctly accounted for. However, this design did not translate well to the Workers platform. 
The data size doesn’t comfortably fit within <a href="https://developers.cloudflare.com/durable-objects/platform/limits/"><u>Durable Object Storage</u></a> or <a href="https://developers.cloudflare.com/d1/platform/limits/"><u>single-database D1</u></a> limits, and it was too slow to directly read and write to remote storage from within the sequencing loop. Ultimately, we split the deduplication cache into two components: a local fixed-size in-memory cache for fast deduplication over short periods of time (on the order of minutes), and a long-term deduplication cache built on <a href="https://developers.cloudflare.com/kv/"><u>Cloudflare Workers KV</u></a>, a global, low-latency, <a href="https://developers.cloudflare.com/kv/reference/faq/#is-workers-kv-eventually-consistent-or-strongly-consistent"><u>eventually-consistent</u></a> key-value store <a href="https://developers.cloudflare.com/kv/platform/limits/"><u>without storage limitations</u></a>.</p><p>With this architecture, it was <a href="#developing-a-workers-application-in-rust"><u>relatively straightforward</u></a> to port the Go code to Rust, and to bring up a functional static CT log on Workers. We’re done then, right? Not quite. Performance tests showed that the log was only capable of sequencing 20-30 new entries per second, well under the 70 per second target of existing logs. We could work around this by simply <a href="https://letsencrypt.org/2024/03/14/introducing-sunlight/#running-more-logs"><u>running more logs</u></a>, but that puts strain on other parts of the CT ecosystem — namely on TLS clients and monitors, which need to keep state for each log. Additionally, the alarm used to trigger sequencing would often be delayed by multiple seconds, meaning that the log was failing to produce new tree heads at consistent intervals. Time to go back to the drawing board.</p>
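The short-term tier can be sketched as a bounded in-memory map with FIFO eviction; the fallback to the long-term Workers KV tier is elided, and all names and types here are illustrative rather than Azul’s actual API:

```rust
use std::collections::{HashMap, VecDeque};

/// Sketch of the fast in-memory deduplication tier: a small bounded map
/// in front of an eventually-consistent long-term store (standing in
/// for Workers KV). Names, types, and the FIFO policy are illustrative.
struct DedupCache {
    capacity: usize,
    recent: HashMap<[u8; 32], (u64, u64)>, // cache key -> (log index, timestamp)
    order: VecDeque<[u8; 32]>,             // insertion order, for FIFO eviction
}

impl DedupCache {
    fn new(capacity: usize) -> Self {
        DedupCache { capacity, recent: HashMap::new(), order: VecDeque::new() }
    }

    /// Record a sequenced entry. A real implementation would also write
    /// the mapping to the long-term store asynchronously.
    fn insert(&mut self, key: [u8; 32], index: u64, timestamp: u64) {
        if self.recent.len() >= self.capacity {
            if let Some(oldest) = self.order.pop_front() {
                self.recent.remove(&oldest);
            }
        }
        self.order.push_back(key);
        self.recent.insert(key, (index, timestamp));
    }

    /// Check the fast in-memory tier. On a miss, the caller would fall
    /// back to the long-term store (elided here).
    fn lookup(&self, key: &[u8; 32]) -> Option<(u64, u64)> {
        self.recent.get(key).copied()
    }
}
```

Because the cache is best-effort, an eviction or a stale read from the eventually-consistent long-term tier is harmless: the worst case is that a duplicate entry gets sequenced twice, never that an entry is lost.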
    <div>
      <h4>Making it fast</h4>
      <a href="#making-it-fast">
        
      </a>
    </div>
    <p>In the design thus far, we’re asking a single-threaded Durable Object instance to do a lot of multi-tasking. The DO processes incoming requests from the Frontend Worker to add entries to the sequencing pool, and must periodically sequence the pool and write state to the various storage backends. A log handling 100 requests per second needs to switch between 101 running tasks (the extra one for the sequencing), plus any async tasks like writing to remote storage — usually 10+ writes to object storage and one write to the long-term deduplication cache per sequenced entry. No wonder the sequencing task was getting delayed!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BCidjDyYw2YS1Ot84LHdk/240ce935eb4e36c82255d846d964fdff/image2.png" />
          </figure><p><sup><i>A static CT API log running on Workers using the Azul implementation with batching to improve performance.</i></sup></p><p>We were able to work around these issues by adding an additional layer of DOs between the Frontend Worker and the Sequencer, which we call Batchers. The Frontend Worker uses <a href="https://en.wikipedia.org/wiki/Consistent_hashing"><u>consistent hashing</u></a> on the cache key to determine which of several Batchers to submit the entry to, and the Batcher helps to reduce the number of requests to the Sequencer by buffering requests and sending them together in batches. When the batch is sequenced, the Batcher distributes the responses back to the Frontend Workers that submitted the request. The Batcher also handles writing updates to the deduplication cache, further freeing up resources for the Sequencer.</p><p>By limiting the scope of the critical block of code that needed to be run synchronously in a single DO, and leaning on the strengths of DOs by scaling horizontally where the workload allows it, we were able to drastically improve application performance. With this new architecture, the CT log application can handle upwards of 500 requests per second to the submission APIs to add new log entries, while maintaining a consistent sequencing tempo to keep per-request latency low (typically 1-2 seconds).</p>
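The routing step can be sketched as follows. With a fixed number of Batchers, a deterministic hash of the cache key (a simplification of the consistent hashing described above) is enough to guarantee that duplicate submissions reach the same Batcher, which can then coalesce them before they hit the Sequencer. The `batcher-N` naming is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Route a submission to one of `num_batchers` Batcher Durable Objects
/// by hashing the entry's deduplication cache key. Deterministic routing
/// means duplicate submissions land on the same Batcher, letting it
/// coalesce in-flight requests. Illustrative sketch, not Azul's API.
fn batcher_for(cache_key: &[u8], num_batchers: u64) -> String {
    let mut hasher = DefaultHasher::new();
    cache_key.hash(&mut hasher);
    format!("batcher-{}", hasher.finish() % num_batchers)
}
```

A Durable Object’s unique name is what routes requests to the same instance from anywhere in the world, so a stable name per hash bucket is all the coordination this step needs.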
    <div>
      <h3>Developing a Workers application in Rust</h3>
      <a href="#developing-a-workers-application-in-rust">
        
      </a>
    </div>
    <p>One of the reasons I was excited to work on this project is that it gave me an opportunity to implement a Workers application in Rust, which I’d never done from scratch before. Not everything was smooth, but overall I would recommend the experience.</p><p>The <a href="https://github.com/cloudflare/workers-rs"><u>Rust bindings to Cloudflare Workers</u></a> are an open source project that aims to bring support for all of the features you know and love from the <a href="https://developers.cloudflare.com/workers/languages/javascript/"><u>JavaScript APIs</u></a> to the Rust language. However, there is some lag in terms of feature parity. Often when working on this project, I’d read about a particular Workers feature in the <a href="https://developers.cloudflare.com"><u>developer docs</u></a>, only to find that support had <a href="https://github.com/cloudflare/workers-rs/issues/645"><u>not yet</u></a> <a href="https://github.com/cloudflare/workers-rs/issues/716"><u>been added</u></a>, or was only <a href="https://github.com/cloudflare/workers-rs?tab=readme-ov-file#rpc-support"><u>partially supported</u></a>, for the Rust bindings. I came across some <a href="https://github.com/cloudflare/workers-rs/issues/432"><u>surprising gotchas</u></a> (not all bad, like <a href="https://docs.rs/tokio/1.44.1/tokio/sync/watch/index.html"><u>tokio::sync::watch</u></a> channels <a href="https://github.com/cloudflare/workers-rs/pull/719"><u>working seamlessly</u></a>, despite <a href="https://github.com/cloudflare/workers-rs?tab=readme-ov-file#faq"><u>this warning</u></a>). 
Documentation about <a href="https://developers.cloudflare.com/workers/observability/dev-tools/breakpoints/"><u>debugging</u></a> and <a href="https://developers.cloudflare.com/workers/observability/dev-tools/cpu-usage/"><u>profiling</u></a> Rust Workers was also not clear (e.g., how to <a href="https://github.com/cloudflare/cloudflare-docs/pull/21347"><u>preserve debug symbols</u></a>), but it does in fact work!</p><p>To be clear, these rough edges are expected! The Workers platform is continuously gaining new features, and it’s natural that the Rust bindings would fall behind. As more developers rely on (and contribute to, <i>hint hint</i>) the Rust bindings, the developer experience will continue to improve.</p>
    <div>
      <h2>What is next for Certificate Transparency</h2>
      <a href="#what-is-next-for-certificate-transparency">
        
      </a>
    </div>
    <p>The WebPKI is constantly evolving and growing, and upcoming changes, in particular shorter certificate lifetimes and larger post-quantum certificates, are going to place significantly more load on the CT ecosystem.</p><p>The <a href="https://cabforum.org/"><u>CA/Browser Forum</u></a> defines a set of <a href="https://cabforum.org/working-groups/server/baseline-requirements/documents/TLSBRv2.0.4.pdf"><u>Baseline Requirements</u></a> for publicly-trusted TLS server certificates. As of 2020, the maximum certificate lifetime for publicly-trusted certificates is 398 days. However, there is a <a href="https://github.com/cabforum/servercert/pull/553"><u>ballot measure</u></a> to reduce that period to as low as 47 days by March 2029. Let’s Encrypt is going even further, and at the <a href="https://letsencrypt.org/2024/12/11/eoy-letter-2024/"><u>end of 2024 announced</u></a> that they will be offering short-lived certificates with a lifetime of only <a href="https://letsencrypt.org/2025/01/16/6-day-and-ip-certs/"><u>six days</u></a> by the end of 2025. Based on some back-of-the-envelope calculations using statistics from <a href="https://ct.cloudflare.com/"><u>Merkle Town</u></a>, these changes could increase the number of logged entries in the CT ecosystem by <b>16-20x</b>.</p><p>If you’ve been keeping up with this blog, you’ll also know that <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>post-quantum certificates</u></a> are on the horizon, bringing with them larger signature and public key sizes. Today, a <a href="https://crt.sh/?id=17119212878"><u>certificate</u></a> with a P-256 ECDSA public key and issuer signature can be less than 1kB. Dropping in an ML-DSA<sub>44</sub> public key and signature brings the same certificate size to 4.6 kB, assuming the SCTs use 96-byte <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>UOV</u><u><sub>ls-pkc</sub></u></a> signatures. 
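As a quick sanity check, the back-of-the-envelope arithmetic works out as below. This is an illustrative sketch only: the lifetime and size figures come from the sources cited above, and the assumption that log volume scales with renewal frequency (and that the ecosystem adopts a mix of lifetimes) is ours.

```rust
/// A certificate that lives 1/k as long must be reissued, and therefore
/// logged, k times as often, so log-entry volume scales with the ratio of
/// the old lifetime to the new one.
fn entry_multiplier(current_lifetime_days: f64, new_lifetime_days: f64) -> f64 {
    current_lifetime_days / new_lifetime_days
}

fn main() {
    // 398-day certs replaced by 47-day certs: ~8.5x more entries per site.
    println!("{:.1}x", entry_multiplier(398.0, 47.0));
    // 398-day certs replaced by 6-day certs: ~66x more entries per site.
    println!("{:.1}x", entry_multiplier(398.0, 6.0));
    // A mix of adoption across the ecosystem lands between these extremes,
    // consistent with the 16-20x estimate above.

    // Post-quantum sizes: 4.6 kB per ML-DSA44 certificate versus a little
    // under 1 kB today, i.e. on the order of 4x the data per log entry.
    println!("{:.1}x", 4.6 / 1.0);
}
```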
With these choices, post-quantum certificates could require CT logs to store <b>4x</b> the amount of data per log entry.</p><p>The static CT API design helps to ensure that CT logs are much better equipped to handle this increased load, especially if the load is distributed across <a href="https://letsencrypt.org/2024/03/14/introducing-sunlight/#running-more-logs"><u>multiple logs</u></a> per operator. Our <a href="https://github.com/cloudflare/azul"><u>new implementation</u></a> makes it easy for log operators to run CT logs on top of Cloudflare’s infrastructure, adding more operational diversity and robustness to the CT ecosystem. We welcome feedback on the design and implementation as <a href="https://github.com/cloudflare/azul/issues"><u>GitHub issues</u></a>, and encourage CAs and other interested parties to start submitting to and consuming from our <a href="https://github.com/cloudflare/azul/tree/main/crates/ct_worker#test-logs"><u>test logs</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Certificate Transparency]]></category>
            <guid isPermaLink="false">5n88kLCWbpk22AmRzMQN9g</guid>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2024 Transparency Reports - now live with new data and a new format]]></title>
            <link>https://blog.cloudflare.com/cloudflare-2024-transparency-reports-now-live-with-new-data-and-a-new-format/</link>
            <pubDate>Fri, 28 Feb 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s 2024 Transparency Reports are now live — with new topics, new data points, and a new format, consistent with the EU’s Digital Services Act ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s 2024 <a href="https://www.cloudflare.com/transparency/"><u>Transparency Reports</u></a> are now live — with new topics, new data points, and a new format. For <a href="https://www.cloudflare.com/transparency/archive/"><u>over 10 years</u></a>, Cloudflare has published transparency reports twice a year in order to provide information to our customers, policymakers, and the public about how we handle legal requests and abuse reports relating to the websites using our services. Such transparency reporting is now recognized as a <a href="https://www.accessnow.org/campaign/transparency-reporting-index/"><u>best practice</u></a> among companies offering online services, and has even been written into law with the European Union’s Digital Services Act (DSA).</p><p>While Cloudflare has been publishing transparency reports for a long time, this year we chose to revamp the report in light of new reporting obligations under the DSA, and our goal of making our reports both comprehensive and easy to understand. Before you dive into the reports, learn more about Cloudflare’s longstanding commitment to transparency reporting and the key updates we made in this year’s reports.</p>
    <div>
      <h3>Cloudflare’s approach to transparency reporting</h3>
      <a href="#cloudflares-approach-to-transparency-reporting">
        
      </a>
    </div>
    <p>Cloudflare started issuing transparency reports early on, because we have long believed that transparency is essential to earning trust. In addition to sharing data about the number and nature of requests we receive, our transparency reports have provided a forum for Cloudflare to articulate the principles we apply in approaching <a href="https://www.cloudflare.com/trust-hub/law-enforcement/"><u>legal requests for customer information</u></a> and how we <a href="https://www.cloudflare.com/trust-hub/abuse-approach/"><u>handle abuse</u></a>.</p><p>Grounded in Cloudflare’s principles, our transparency reports have necessarily evolved over time as the scale and complexity of our services have grown. While our initial reports were focused on governmental requests for customer information, our reports have expanded to cover a broader set of issues, including civil requests for customer information, legal requests to limit or terminate services, and our process for handling reports of abuse on websites using our services.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xcEb5PMZSvbk1Blkh7I1S/1694b584f1223a24d5aedde0065352ae/image2.png" />
          </figure>
    <div>
      <h3>The EU’s Digital Services Act</h3>
      <a href="#the-eus-digital-services-act">
        
      </a>
    </div>
    <p>A key driver of this year’s updates was the transparency reporting obligations in the <a href="https://blog.cloudflare.com/digital-services-act/"><u>EU’s Digital Services Act (DSA)</u></a>. As we have written about <a href="https://blog.cloudflare.com/digital-services-act/"><u>previously</u></a>, the DSA replaced a 20-year-old law called the e-Commerce Directive, providing an important framework for addressing the legal responsibilities of online service providers.</p><p>While the DSA addresses a number of topics, an important one is transparency. The DSA sets different transparency reporting obligations for different services, establishing baseline reporting requirements for all intermediary services, more detailed reporting for hosting services, and the most extensive reporting for online platforms like social media sites and search engines. Most of Cloudflare’s services are pass-through (intermediary) services related to security and performance with limited transparency reporting requirements under the DSA, while our hosting services have some additional requirements related to our abuse-related actions.</p><p>The DSA transparency obligations align with Cloudflare’s longstanding practices and company principles toward transparency. Because Cloudflare has always strived to provide meaningful transparency into its approach to these issues, we are well positioned to comply with the specific reporting obligations set forth in the DSA. That said, while we believe that our existing reports already satisfied much of the DSA, we identified changes we wanted to make to match specific types of data or formatting called for under the DSA. </p>
    <div>
      <h3>New data and a new format</h3>
      <a href="#new-data-and-a-new-format">
        
      </a>
    </div>
    <p>Our 2024 Transparency Reports include more information than ever before, all in a new format that we believe will make the information easier to understand.</p><p>Prompted by the DSA’s requirements and the continued expansion of services we offer, the 2024 reports include new information: additional categories of hosted content abuse, automated steps Cloudflare has taken to mitigate phishing and technical abuse, the mean time to take action on different types of abuse reports, and information about additional types of requests for customer information that we have received. You’ll find a machine-readable version of the data alongside our transparency reports, consistent with DSA requirements. We also introduced "additional context" boxes to call out trends or notable developments during the reporting period.</p><p>To try to make all of this information as digestible as possible, we divided our transparency report into two parts. Our report on Legal Requests for Information addresses the law enforcement, government, and civil requests for customer information that Cloudflare receives in the United States and around the world. Our report on Abuse Processes addresses Cloudflare’s processes for handling reports of abuse on websites using our services and our response to legal requests to terminate or restrict access to our users.</p><p>Because we divided the report into two parts, you’ll find our ‘<a href="https://blog.cloudflare.com/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/"><u>warrant canaries</u></a>’ on the <a href="https://www.cloudflare.com/transparency/"><u>transparency report landing page</u></a> of our <a href="https://www.cloudflare.com/trust-hub/"><u>Trust Hub</u></a> and no longer in the reports themselves. 
The warrant canary statements about things we have never done as a company are an essential part of our commitment to transparency in how we handle both legal requests for customer information and abuse reports. All of our warrant canaries remain intact, meaning we still haven't done any of these things.</p><p>We’ll continue to publish transparency reports twice a year, available on the <a href="https://www.cloudflare.com/transparency/"><u>Transparency page</u></a> of our website as well as through an <a href="https://www.cloudflare.com/transparency/rss.xml"><u>RSS feed</u></a>. Our approach to these reports will continue to evolve in order to provide meaningful transparency in line with our company principles, the growth of our product portfolio, and the new regulatory environment.</p> ]]></content:encoded>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">6r04i7Ke1lNGEWK4u3pRK1</guid>
            <dc:creator>Abby Vollmer</dc:creator>
            <dc:creator>Despina Papageorge</dc:creator>
        </item>
        <item>
            <title><![CDATA[Goodbye, section 2.8 and hello to Cloudflare’s new terms of service]]></title>
            <link>https://blog.cloudflare.com/updated-tos/</link>
            <pubDate>Tue, 16 May 2023 13:00:55 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce new updates that will modernize our terms of service and hopefully cut down on customer confusion and frustration. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74CZiPCyIk5BUptWYl34Iy/2f0149ccfaca2300310d7ebd55d0c0af/image1-37.png" />
            
            </figure><p>Earlier this year, we <a href="http://blog.cloudflare.com/how-cloudflare-erroneously-throttled-a-customers-web-traffic/">blogged</a> about an incident where we mistakenly throttled a customer due to internal confusion about a potential violation of our Terms of Service. That incident highlighted a growing point of confusion for many of our customers. Put simply, our terms had not kept pace with the rapid innovation here at Cloudflare, especially with respect to our <a href="https://www.cloudflare.com/developer-platform-hub/">Developer Platform</a>. We’re excited to announce new updates that will modernize our terms and cut down on customer confusion and frustration.</p>
    <div>
      <h3>A bit of background on our legal terms of service</h3>
      <a href="#a-bit-of-background-on-our-legal-terms-of-service">
        
      </a>
    </div>
    <p>We want our terms to set clear expectations about what we’ll deliver and what customers can do with our services. But drafting terms is often an iterative process, and iteration over a decade can lead to bloat, complexity, and vestigial branches in need of pruning. Now, time to break out the shears.</p>
    <div>
      <h3>Snip, snip</h3>
      <a href="#snip-snip">
        
      </a>
    </div>
    <p>To really nip this in the bud, we started at the source–the content-based restriction housed in Section 2.8 of our Self-Serve Subscription Agreement:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GWvC0oLUJpLVPRZX9kB5l/149dbed702d904c6ffdae45dc0d6827c/image6-7.png" />
            
            </figure><p>Cloudflare is much, much more than a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a>, but that wasn’t always the case. The CDN was one of our first services and originally designed to serve HTML content like webpages. User attempts to serve video and other large files hosted outside of Cloudflare were disruptive on many levels. So, years ago, we added Section 2.8 to give Cloudflare the means to preserve the original intent of the CDN: limiting use of the CDN to webpages.</p><p>Over time, Cloudflare’s network became larger and more robust and its portfolio broadened to include services like <a href="https://www.cloudflare.com/products/cloudflare-stream/">Stream</a>, <a href="https://www.cloudflare.com/products/cloudflare-images/">Images</a>, and <a href="https://www.cloudflare.com/products/r2/">R2</a>. These services are explicitly designed to allow customers to serve non-HTML content like video, images, and other large files hosted directly by Cloudflare. And yet, Section 2.8 persisted in our Self-Serve Subscription Agreement–the umbrella terms that apply to <i>all</i> services. We acknowledge that this didn’t make much sense.</p><p>To address the problem, we’ve done a few things. First, we moved the content-based restriction concept to a new <a href="https://www.cloudflare.com/service-specific-terms-application-services/">CDN-specific section</a> in our Service-Specific Terms. We want to be clear that this restriction only applies to use of our CDN. Next, we got rid of the antiquated HTML vs. non-HTML construct, which was far too broad. Finally, we made it clear that customers can serve video and other large files using the CDN so long as that content is hosted by a Cloudflare service like Stream, Images, or R2. This will allow customers to confidently innovate on our Developer Platform while leveraging the speed, security, and reliability of our CDN. 
Video and large files hosted outside of Cloudflare will still be restricted on our CDN, but we think that our service features, generous free tier, and competitive pricing (including <a href="/r2-ga/">zero egress fees on R2</a>) make for a compelling package for developers that want to access the reach and performance of our network.</p><p>Here are a few diagrams to help understand how our terms of service fit together for various use cases.</p><p><i>Customer A is on a free, pro, or business plan and wants to use the CDN service:</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6egDNDn9C9wcJGp22GEm7n/96ba1f278b6c538d698b0a5d40fd729d/Blog-1792---Customer-A.png" />
            
            </figure><p><i>Customer B is on a free, pro, or business plan and wants to use the Developer Platform and Zero Trust services:</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3nWwUG8zvmVvPmUDmrDnJL/e279ee5a2e17145f9ce43f4014c14212/Blog-1792---Customer-B.png" />
            
            </figure><p><i>Customer C is on a free, pro, or business plan and wants to use Stream with the CDN service and</i> Web Application Firewall <i>with the CDN service:</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EivnjP1JbfXnPg21BytB8/174010bfa39cabea1ff2d9b58653c804/Blog-1792---Customer-C.png" />
            
            </figure>
    <div>
      <h3>Quality of life upgrades</h3>
      <a href="#quality-of-life-upgrades">
        
      </a>
    </div>
    <p>We also took this opportunity to tune up other aspects of our Terms of Service to make for a more user-first experience. For example, we streamlined our Self-Serve Subscription Agreement to make it clearer and easier to understand from the start.</p><p>We also heard previous complaints and removed an old restriction on benchmarking–we’re confident in the performance of our network and services, unlike some of our competitors. Last but not least, we renamed the Supplemental Terms to the Service-Specific Terms and gave them a major facelift to improve clarity and usability.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BMaDAxwWnXOnIMkTDUOXk/48a2f1ef001ba2cec072c4b41d4f1835/image5-4.png" />
            
            </figure>
    <div>
      <h3>Users first</h3>
      <a href="#users-first">
        
      </a>
    </div>
    <p>We’ve learned a lot from our users throughout this process, and we are always grateful for your feedback. Our terms were never meant to act as a gating mechanism that stifled innovation. With these updates, we hope that customers will feel confident in building the next generation of apps and services on Cloudflare. And we’ll keep the shears handy as we continue to work to help build a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Customers]]></category>
            <guid isPermaLink="false">6LmTFvsRNlXdg0CMBsJzpr</guid>
            <dc:creator>Eugene Kim</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare erroneously throttled a customer’s web traffic]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-erroneously-throttled-a-customers-web-traffic/</link>
            <pubDate>Tue, 07 Feb 2023 18:20:49 GMT</pubDate>
            <description><![CDATA[ Today’s post is a little different. It’s about a single customer’s website not working correctly because of incorrect action taken by Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TlF3dN51Im2Zyzwy04oyr/8fbcd20d22abe9ba7b78dd673e9f04a1/BLOG-1707-header-1.png" />
            
            </figure><p>Over the years when Cloudflare has had an <a href="/tag/outage/">outage</a> that affected our customers we have very quickly blogged about what happened, why, and what we are doing to address the causes of the outage. Today’s post is a little different. It’s about a single customer’s website <a href="https://news.ycombinator.com/item?id=34639212">not working correctly</a> because of incorrect action taken by Cloudflare.</p><p>Although the customer was not in any way banned from Cloudflare, nor did they lose access to their account, their website didn’t work. And it didn’t work because Cloudflare applied a bandwidth throttle between us and their origin server. The effect was that the website was unusable.</p><p>Because of this unusual throttle there was some internal confusion for our customer support team about what had happened. They, incorrectly, believed that the customer had been limited because of a breach of section 2.8 of our <a href="https://www.cloudflare.com/terms/">Self-Serve Subscription Agreement</a> which prohibits use of our self-service CDN to serve excessive non-HTML content, such as images and video, without a paid plan that includes those services (this is, for example, designed to prevent someone building an image-hosting service on Cloudflare and consuming a huge amount of bandwidth; for that sort of use case we have paid <a href="https://www.cloudflare.com/products/cloudflare-images/">image</a> and <a href="https://www.cloudflare.com/products/cloudflare-stream/">video</a> plans).</p><p>However, this customer wasn’t breaking section 2.8, and they were a paying customer, including of Cloudflare Workers, through which the throttled traffic was passing. This throttle should not have happened. 
In addition, there is and was no need for the customer to upgrade to some other plan level.</p><p>This incident has set off a number of workstreams inside Cloudflare to ensure better communication between teams, prevent such an incident happening, and to ensure that communications between Cloudflare and our customers are much clearer.</p><p>Before we explain our own mistake and how it came to be, we’d like to apologize to the customer. We realize the serious impact this had, and how we fell short of expectations. In this blog post, we want to explain what happened, and more importantly what we’re going to change to make sure it does not happen again.</p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>On February 2, an on-call network engineer received an alert for a congesting interface with Equinix IX in our Ashburn data center. While this is not an unusual alert, this one stood out for two reasons. First, it was the second day in a row that it happened, and second, the congestion was due to a sudden and extreme spike of traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BHPQTMGbXizjZHfUEwf7S/7b9665371ed4e291c5df992066bc3c82/image2-1.png" />
            
            </figure><p>The engineer in charge identified the customer’s domain, tardis.dev, as being responsible for this sudden spike of traffic between Cloudflare and their origin network, a storage provider. Because this congestion happened on a physical interface connected to external peers, there was an immediate impact on many of our customers and peers. Port congestion like this typically causes packet loss, slow throughput, and higher-than-usual latency. While we have automatic mitigation in place for congesting interfaces, in this case the mitigation was unable to resolve the impact completely.</p><p>The traffic from this customer went suddenly from an average of 1,500 requests per second, and a 0.5 MB payload per request, to 3,000 requests per second (2x) and more than 12 MB payload per request (25x).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ROLTRXsuoeYEpw0ewWDFX/c4af0bfaf7ef4c3c3966058153416209/image1-4.png" />
            
            </figure><p>The congestion happened between Cloudflare and the origin network. Caching could not help: the requests were all for unique URLs, so every one had to be fetched from the origin.</p><p><b>A Cloudflare engineer decided to apply a throttling mechanism to prevent the zone from pulling so much traffic from their origin. Let's be very clear on this action: Cloudflare does not have an established process to throttle customers that consume large amounts of bandwidth, and does not intend to have one. This remediation was a mistake: it was not sanctioned, and we deeply regret it.</b></p><p>We lifted the throttle through internal escalation 12 hours and 53 minutes after having set it up.</p>
    <div>
      <h3>What's next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>To make sure a similar incident does not happen, we are establishing clear rules to mitigate issues like this one. Any action taken against a customer domain, paying or not, will require multiple levels of approval and clear communication to the customer. Our tooling will be improved to reflect this. We have many ways of shaping traffic when a huge spike affects a link, and we could have applied a different mitigation in this instance.</p><p>We are in the process of rewriting our terms of service to better reflect the type of services that our customers deliver on our platform today. We are also committed to explaining to our users in plain language what is permitted under self-service plans. As a developer-first company with transparency as one of its core principles, we know we can do better here. We will follow up with a blog post dedicated to these changes later.</p><p>Once again, we apologize to the customer for this action and for the confusion it created for other Cloudflare customers.</p> ]]></content:encoded>
            <category><![CDATA[Customers]]></category>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">5Ulx28kIpVehkdG8jDUoLB</guid>
            <dc:creator>Jeremy Hartman</dc:creator>
            <dc:creator>Jérôme Fleury</dc:creator>
        </item>
        <item>
            <title><![CDATA[First Half 2019 Transparency Report and an Update on a Warrant Canary]]></title>
            <link>https://blog.cloudflare.com/first-half-2019-transparency-report-and-an-update-on-a-warrant-canary/</link>
            <pubDate>Fri, 20 Dec 2019 21:49:36 GMT</pubDate>
            <description><![CDATA[ Today, we are releasing Cloudflare’s transparency report for the first half of 2019. We recognize the importance of keeping the reports current, but it’s taken us a little longer ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are releasing <a href="https://www.cloudflare.com/transparency/">Cloudflare’s transparency report</a> for the first half of 2019. We recognize the importance of keeping the reports current, but it’s taken us a little longer than usual to put it together. We have a few notable updates.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xY1LkLltSH3mLIdmrOzEJ/d090e2f5d85f1dadc2ddd868242a6d58/canary-1.png" />
            
            </figure>
    <div>
      <h3>Pulling a warrant canary</h3>
      <a href="#pulling-a-warrant-canary">
        
      </a>
    </div>
    <p>Since we issued our very first transparency report in 2014, we’ve maintained a number of commitments - known as warrant canaries - about what actions we will take and how we will respond to certain types of law enforcement requests. We supplemented those initial commitments <a href="/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/">earlier this year</a>, so that our current warrant canaries state that Cloudflare has never:</p><ol><li><p>Turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.</p></li><li><p>Installed any law enforcement software or equipment anywhere on our network.</p></li><li><p>Terminated a customer or taken down content due to political pressure*</p></li><li><p>Provided any law enforcement organization a feed of our customers' content transiting our network.</p></li><li><p>Modified customer content at the request of law enforcement or another third party.</p></li><li><p>Modified the intended destination of DNS responses at the request of law enforcement or another third party.</p></li><li><p>Weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.</p></li></ol><p>These commitments serve as a statement of values to remind us what is important to us as a company, to convey not only what we do, but what we believe we should do. For us to maintain these commitments, we have to believe not only that we’ve met them in the past, but that we can continue to meet them.</p><p>Unfortunately, there is one warrant canary that no longer meets the test for remaining on our website. After Cloudflare terminated the Daily Stormer’s service in 2017, Matthew <a href="/why-we-terminated-daily-stormer/">observed</a>:</p><p><i>"We're going to have a long debate internally about whether we need to remove the bullet about not terminating a customer due to political pressure. 
It's powerful to be able to say you've never done something. And, after today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don't like."</i></p><p>We addressed this issue in our subsequent transparency reports by retaining the statement, but adding an asterisk identifying the Daily Stormer debate and the criticism that we had received in the wake of our decision to terminate services. Our goal was to signal that we remained committed to the principle that we should not terminate a customer due to political pressure, while not ignoring the termination. We also sought to be public about the termination and our reasons for the decision, ensuring that it would not go unnoticed.</p><p>Although that termination sparked significant debate about whether infrastructure companies making decisions about what content should remain online, we haven’t yet seen politically accountable actors put forth real alternatives to address deeply troubling content and behavior online. Since that time, we’ve seen even more real world consequences from the vitriol and hateful content spread online, from the screeds posted in connection with the terror attacks in Christchurch, Poway and El Paso to the posting of video glorifying those attacks. Indeed, in the absence of true public policy initiatives to address those concerns, the pressure on tech companies -- even deep Internet infrastructure companies like Cloudflare --  to make judgments about what stays online has only increased.  </p><p>In August 2019, Cloudflare terminated service to 8chan based on their failure to moderate their hate-filled platform in a way that inspired murderous acts. 
Although we don’t think removing cybersecurity services to force a site offline is the right public policy approach to the hate festering online, a site’s failure to take responsibility to prevent or mitigate the harm caused by its platform leaves service providers like us with few choices. We’ve come to recognize that the prolonged and persistent lawlessness of others might require action by those further down the technical stack. Although we’d prefer that governments recognize that need, and build mechanisms for due process, if they fail to act, infrastructure companies may be required to take action to prevent harm.</p><p>And that brings us back to our warrant canary. If we believe we might have an obligation to terminate customers, even in a limited number of cases, retaining a commitment that we will never terminate a customer “due to political pressure” is untenable. We could, in theory, argue that terminating a lawless customer like 8chan was not a termination “due to political pressure.” But that seems wrong. We shouldn’t be parsing specific words of our commitments to explain to people why we don’t believe we’ve violated the standard.</p><p>We remain committed to the principle that providing cybersecurity services to everyone, regardless of content, makes the Internet a better place. Although we’re removing the warrant canary from our website, we believe that to earn and maintain our users’ trust, we must be transparent about the actions we take. We therefore commit to reporting on any action that we take to terminate a user that could be viewed as a termination “due to political pressure.”</p>
    <div>
      <h3>UK/US Cloud agreement</h3>
    </div>
    <p>As we’ve described <a href="/digital-evidence-across-borders-and-engagement-with-non-us-authorities/">previously</a>, governments have been working to find ways to improve law enforcement access to digital evidence across borders. Those efforts resulted in a new U.S. law, the Clarifying Lawful Overseas Use of Data (CLOUD) Act, premised on the idea that law enforcement around the world should be able to get access to electronic content related to their citizens when conducting law enforcement investigations, wherever that data is stored, as long as they are bound by sufficient procedural safeguards to ensure due process.</p><p>On October 3, 2019, the US and UK signed the first Executive Agreement under this law. According to the requirements of U.S. law, that Agreement will go into effect in 180 days, in March 2020, unless Congress takes action to block it. There is an ongoing debate as to whether the agreement includes sufficient due process and privacy protections. We’re going to take a wait-and-see approach and will closely monitor any requests we receive after the agreement goes into effect.</p><p>For the time being, Cloudflare intends to comply with appropriately scoped and targeted requests for data from UK law enforcement, provided that those requests are consistent with the law and international human rights standards. Information about the legal requests that Cloudflare receives from non-U.S. governments pursuant to the CLOUD Act will be included in future transparency reports.</p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">26p8e8McNC9PBOC8HjH5ql</guid>
            <dc:creator>Alissa Starzak</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Transparency Update: Joining Cloudflare’s Flock of (Warrant) Canaries]]></title>
            <link>https://blog.cloudflare.com/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/</link>
            <pubDate>Mon, 25 Feb 2019 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is releasing its transparency report for the second half of 2018. We have been publishing biannual Transparency Reports since 2013. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, Cloudflare is releasing its <a href="https://www.cloudflare.com/transparency/updates/">transparency report</a> for the second half of 2018. We have been <a href="https://www.cloudflare.com/transparency/">publishing</a> biannual Transparency Reports since 2013.</p><p>We believe an essential part of earning the trust of our customers is being transparent about our features and services, what we do – and do not do – with our users’ data, and generally how we conduct ourselves in our engagement with third parties such as law enforcement authorities. We also think that an important part of being fully transparent is being rigorously consistent and anticipating future circumstances, so our users not only know how we have behaved in the past, but are able to anticipate with reasonable certainty how we will act in the future, even in difficult cases.</p><p>As part of that effort, we have set forth certain ‘warrant canaries’ – statements of things we have never done as a company. As described in greater detail below, the report published today adds three new ‘warrant canaries’, which is the first time we’ve added to that list since 2013. This transparency report also adds new reporting on requests for user information from foreign law enforcement, and on requests for user information that we receive from government agencies that are not part of law enforcement.</p><p>This is the first in a series of blog posts this week that will describe our process and the commitments we make in relation to the handling of user data and abuse queries, our interactions with law enforcement and the security communities, and our essential red lines when it comes to how we operate as a company. 
The specific updates will include:</p><ul><li><p>Monday: This blog post on the updated transparency report and new warrant canaries.</p></li><li><p>Tuesday: An updated discussion about how we address requests for content moderation.</p></li><li><p>Wednesday: How we plan to deal with abuse of new products.</p></li><li><p>Thursday: Dealing with requests from non-US law enforcement.</p></li></ul><p>This is an exciting time of growth for Cloudflare and we are only just getting started, so we do expect more complexity over the years. However, the fundamentals remain constant for us: transparency, due process, openness, integrity, and a commitment to improving the Internet for all. We are excited to share more with you this week!</p>
    <div>
      <h3>New Warrant Canaries</h3>
    </div>
    <p>From the beginning, and consistent with our mission of “helping build a better Internet,” Cloudflare has relied on a set of values that inform how we work with our customers, with law enforcement, and with other third parties. Maintaining the privacy and trust of our users and supporting a secure, well-functioning, and content-neutral Internet is essential to us.</p><p>It’s not enough for us to be transparent about the things we do willingly. Tech companies are pressured every day to take the easy way out, avoiding controversy or conflict by quietly doing seemingly small things that are corrosive to these values. So, for many years, we have published a list of “things we have never done” in our transparency report to demonstrate our commitment to these values.</p><p>The rationale behind including “warrant canaries” in our transparency report is twofold. On one hand, if Cloudflare is asked by law enforcement or a third party to act against one of the warrant canaries and not disclose it publicly, we will still have to remove it from our list. The removal of the warrant canary, like the silence of a canary in the coal mine, will signal to our customers that something is not right. On the other hand, these statements serve as a signal to groups which may ask us to take actions contravening our values that such actions are not so easy for us to take. We have said before and re-commit here: if Cloudflare were asked to take an action violating one of the warrant canaries, we would pursue legal remedies challenging the request in order to protect our customers from what we believe are improper, illegal, or unconstitutional requests.</p>
            <figure>
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2xOkIYGjQYv3DaGruYxMAS/17c2644547861ee34c7a4840c1514f68/canary-1.png" />
            </figure>
    <div>
      <h3>Why add new warrant canaries?</h3>
    </div>
    <p>We have not added warrant canaries since we put out our first transparency report in 2013. The original canaries are as follows:</p><ul><li><p>Cloudflare has never turned over our SSL keys or our customers' SSL keys to anyone.</p></li><li><p>Cloudflare has never installed any law enforcement software or equipment anywhere on our network.</p></li><li><p>Cloudflare has never terminated a customer or taken down content due to political pressure.</p></li><li><p>Cloudflare has never provided any law enforcement organization a feed of our customers' content transiting our network.</p></li></ul><p>So, why change that this year? Though the company develops new products each year, the addition of new types of services in 2018, notably Cloudflare Workers and DNS Resolver 1.1.1.1, expanded our capabilities in a way that we believe is worth addressing. Similarly, regulation of technology has been changing globally, and we feel it is pertinent to respond to these developments.</p><p>The new canaries, and the issues they are intended to address, are outlined below. To be clear, we haven’t necessarily received law enforcement requests to do any of these things at this point. We just want to make sure we lay out our commitments as clearly as possible before we get a request.</p>
    <div>
      <h3>The new canaries</h3>
    </div>
    <p><b>Cloudflare has never modified customer content at the request of law enforcement or another third party.</b></p><p>The Internet has come a long way since the early days when every visitor to a website saw precisely the same content. Cookies and other techniques allow developers to customize the user experience. In the last year and a half, Cloudflare launched Workers, which allows website developers to customize their websites using edge-side code. Using Workers, our customers can serve different versions of their website to different types of visitors, or to visitors in different locations. Although being able to alter the version of a website particular visitors see, or what application runs for different visitors, is a powerful new tool for our customers, we recognize that it also holds the potential for mischief and abuse. Governments or malicious actors could in theory use edge-side code to modify the content of a website, make changes only for particular viewers, or collect information about the visitors to a site.</p><p>We believe that only those who are empowered to change the site itself should be empowered to make changes by running code at the edge. We will therefore fight requests to make modifications, either by adding apps or modifying content, at the request of a third party without the customer’s consent.</p><p><b>Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.</b></p><p>The privacy and security of DNS Resolver 1.1.1.1 are very important to us, and were front of mind when designing the service, as described <a href="/announcing-1111/">here</a>. At Cloudflare we believe that part of helping to build a better Internet is to ensure that users are routed to the website they intend to visit.</p><p>DNS spoofing, or cache poisoning, exploits the functioning of DNS resolvers in order to route unsuspecting visitors incorrectly. 
If we think of DNS as the phonebook of the Internet, DNS spoofing is similar to someone taking new phonebooks from people’s doors and replacing them with fakes. In this new copy, the attacker has changed ordinary people’s numbers to the numbers of phone scammers. When a user with one of the affected books looks up and calls the number of, say, a landscaping service, or even a friend, they end up dialing a scammer instead. In DNS spoofing, a person looking up an affected website would be directed to a fake website, or somewhere else entirely, rather than the intended destination.</p><p>We saw a concrete example of this type of DNS spoofing earlier this month. On February 10, 2019, Venezuelan opposition leader Juan Guaido asked Venezuelans to volunteer to help international humanitarian organizations deliver aid into the country. A day after this public announcement, however, a similarly named website was set up, and users in Venezuela trying to visit the original and official website were redirected, using DNS spoofing, to the fake website. The fake website had a form to register personal data, such as name, email and cell phone number.</p><p>According to <a href="https://motherboard.vice.com/en_us/article/d3mdxm/venezuela-government-hack-activists-phishing">Motherboard</a>:</p><blockquote><p>While studying the fake website, researchers found phishing sites hosted on the same IP address. 
And there’s evidence that the people behind the second, apparently fake and malicious, website were working for the <a href="https://www.nytimes.com/2019/01/23/world/americas/venezuela-protests-guaido-maduro.html"><b>government</b></a> of Maduro, according to security firm CrowdStrike and independent researchers.</p></blockquote><blockquote><p>“It’s clearly the work of the Venezuelan government trying to identify the people working against them, so that they can put a stop to it,” Adam Meyers, the vice president of intelligence at CrowdStrike, a firm that’s analyzed the attacks, told Motherboard in a phone call.</p></blockquote><p>This type of DNS spoofing can be done for any number of purposes, from gaining sensitive information to preventing access to websites with controversial content. Making a commitment not to modify the intended destination of DNS responses at the request of law enforcement or a third party is an affirmation of our desire to ensure the reliability of 1.1.1.1 and do our best to maintain confidence in the DNS and Internet infrastructure more generally.</p><p>Occasionally, law enforcement uses Cloudflare for domains they have seized from <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain registrars</a> using legal process. Because law enforcement has obtained legal control of the website in those circumstances (through seizure), that service does not involve modification of DNS responses.</p><p><b>Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.</b></p><p>We believe encryption is critical to a trustworthy and secure Internet. 
Encryption prevents the theft of private data, making it safer to bank, shop, and communicate online.</p><p>Because of the importance of encryption to the Internet ecosystem, we have a team constantly working on new ways to increase encryption on the Internet, whether that means providing <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificates for free</a> to all our users, <a href="/esni/">pioneering eSNI</a> or supporting <a href="/dns-resolver-1-1-1-1/">DNS over TLS and DNS over HTTPS</a> on 1.1.1.1.</p><p>Because encryption can complicate efforts to obtain access to digital evidence, however, law enforcement agencies have pushed for tools to gain access to encrypted material. These efforts range from the FBI’s attempt to get a court order to require Apple to assist them in obtaining encrypted data from an iPhone in February 2016, to Australia’s new Assistance and Access law, passed last fall. We’re concerned that these types of efforts will raise questions about the security of encryption products. As one Cloudflare employee put it after Australia’s law passed, “tech companies now have to do code reviews of everything coming out of Australia” to ensure there are no vulnerabilities.</p><p>We added the new commitment to address this uncertainty. Our intent is to continue focusing on ways to improve current encryption methods and deployment of these methods, not weaken them.</p><p><b>Cloudflare has never turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.</b></p><p>This is a slight modification to a previous commitment. 
The wording previously referred to “SSL keys” rather than “encryption and authentication keys.” Given the deprecation of SSL, we wanted to be absolutely clear that we were referring to all encryption and authentication keys, not just those from a deprecated security protocol.</p><p>Our goal in modifying this canary is to provide additional clarity for our customers. We therefore believe it makes sense to distill the language to encompass the crux of what we will not do, which is provide our customers’ keys to third parties.</p> ]]></content:encoded>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1fwUBKWTTfPKSqz9W3e3kR</guid>
            <dc:creator>Alissa Starzak</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
            <dc:creator>Erin Walk</dc:creator>
        </item>
    </channel>
</rss>