
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 19:18:11 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Measuring characteristics of TCP connections at Internet scale]]></title>
            <link>https://blog.cloudflare.com/measuring-network-connections-at-scale/</link>
            <pubDate>Wed, 29 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Researchers and practitioners have been studying connections almost as long as the Internet that supports them. Today, Cloudflare’s global network receives millions of connections per second. We explore various characteristics of TCP connections, including lifetimes, sizes, and more. ]]></description>
            <content:encoded><![CDATA[ <p>Every interaction on the Internet—including loading a web page, streaming a video, or making an API call—starts with a connection. These fundamental logical connections consist of a stream of packets flowing back and forth between devices.</p><p>Various aspects of these network connections have captured the attention of researchers and practitioners for as long as the Internet has existed. The interest in connections even predates the label, as can be seen in the seminal 1991 paper, “<a href="https://dl.acm.org/doi/10.1145/115994.116003"><u>Characteristics of wide-area TCP/IP conversations</u></a>.” By any name, the Internet measurement community has been steeped in characterizations of Internet communication for <i>decades</i>, asking everything from “how long?” and “how big?” to “how often?” – and those are just to start.</p><p>Surprisingly, connection characteristics on the wider Internet are largely unavailable. While anyone can use tools (e.g., <a href="https://www.wireshark.org/"><u>Wireshark</u></a>) to capture data locally, it’s virtually impossible to measure connections globally because of access and scale. Moreover, network operators generally do not share the characteristics they observe — assuming they invest the non-trivial time and energy needed to observe them at all.</p><p>In this blog post, we move in another direction by sharing aggregate insights about connections established through our global CDN. We present characteristics of <a href="https://developers.cloudflare.com/fundamentals/reference/tcp-connections/"><u>TCP</u></a> connections—which account for about <a href="https://radar.cloudflare.com/adoption-and-usage"><u>70% of HTTP requests</u></a> to Cloudflare—providing empirical insights that are difficult to obtain from client-side measurements alone.</p>
    <div>
      <h2>Why connection characteristics matter</h2>
      <a href="#why-connection-characteristics-matter">
        
      </a>
    </div>
    <p>Characterizing system behavior helps us predict the impact of changes. In the context of networks, consider a new routing algorithm or transport protocol: how can you measure its effects? One option is to deploy the change directly on live networks, but this is risky. Unexpected consequences could disrupt users or other parts of the network, making a “deploy-first” approach potentially unsafe or ethically questionable.</p><p>A safer first step is simulation. Using simulation, a designer can get important insights about their scheme without having to build a full version. But simulating the whole Internet is challenging, as described by another seminal work, “<a href="https://dl.acm.org/doi/10.1145/268437.268737"><u>Why we don't know how to simulate the Internet</u></a>”.</p><p>To run a useful simulation, we need it to behave like the real system we’re studying. That means generating synthetic data that mimics real-world behavior. Often, we do this by using statistical distributions — mathematical descriptions of how the real data behaves. But before we can create those distributions, we first need to characterize the data — to measure and understand its key properties. Only then can our simulation produce realistic results.</p>
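To make that last step concrete, here is a minimal Python sketch of characterizing a measured dataset and then generating synthetic values from it by inverse-transform sampling. The "observed" data here is made up (a heavy-tailed Pareto draw standing in for real measurements); real simulators would typically fit a parametric distribution instead.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical "observed" connection sizes in packets: heavy-tailed,
# standing in for real measurements.
observed = rng.pareto(a=1.5, size=10_000) * 10 + 1

def sample_synthetic(observed, n, rng):
    """Inverse-transform sampling: draw uniform quantiles and map them
    through the observed data's empirical quantile function, so the
    synthetic data mimics the measured distribution."""
    u = rng.uniform(0.0, 1.0, size=n)
    return np.quantile(observed, u)

synthetic = sample_synthetic(observed, 5_000, rng)
```

Even this nonparametric resampling preserves the medians and tails that make a simulated workload behave like the measured one.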
    <div>
      <h2>Unpacking the dataset</h2>
      <a href="#unpacking-the-dataset">
        
      </a>
    </div>
    <p>The value of any data depends on its collection mechanism. Every dataset has blind spots, biases, and limitations, and ignoring these can lead to misleading conclusions. By examining the finer details — how the data was gathered, what it represents, and what it excludes — we can better understand its reliability and make informed decisions about how to use it. Let’s take a closer look at our collected telemetry.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ksUQ7xlzXPWp2hH7eX4dG/124456d20c6fd5e7e185d68865aee6fa/image5.png" />
          </figure><p><b>Dataset Overview</b>. The data describes TCP connections, labeled <i>Visitor to Cloudflare</i> in the above diagram, which serve requests via HTTP 1.0, 1.1, and 2.0 that make up <a href="https://radar.cloudflare.com/adoption-and-usage">about 70%</a> of all 84 million HTTP requests per second, on average, received at our global CDN servers.</p><p><b>Sampling.</b> The passively collected snapshot of data is drawn from a uniformly sampled 1% of all TCP connections to Cloudflare between October 7 and October 15, 2025. Sampling takes place at each individual client-facing server to mitigate biases that may appear by sampling at the datacenter level.</p><p><b>Diversity.</b> Unlike many large operators, whose traffic is primarily their own and dominated by a few services such as search, social media, or streaming video, the vast majority of Cloudflare’s workload comes from our customers, who choose to put Cloudflare in front of their websites to help protect, improve performance, and reduce costs. This diversity of customers brings a wide variety of web applications, services, and users from around the world. As a result, the connections we observe are shaped by a broad range of client devices and application-specific behaviors that are constantly evolving.</p><p><b>What we log.</b> Each entry in the log consists of socket-level metadata captured via the Linux kernel’s <a href="https://man7.org/linux/man-pages/man7/tcp.7.html"><u>TCP_INFO</u></a> struct, alongside the SNI and the number of requests made during the connection. The logs exclude individual HTTP requests, transactions, and details. 
We restrict our use of the logs to connection metadata statistics such as duration and number of packets transmitted, as well as the number of HTTP requests processed.</p><p><b>Data capture.</b> We represent only ‘useful’, fully processed connections in our dataset by characterizing those that close gracefully with <a href="https://blog.cloudflare.com/tcp-resets-timeouts/#tcp-connections-from-establishment-to-close"><u>a FIN packet</u></a>. This excludes connections intercepted by attack mitigations, or that time out, or that abort because of an RST packet.</p><p>Since a graceful close does not in itself indicate a ‘useful’ connection, <b>we additionally require at least one successful HTTP request</b> during the connection to filter out idle or non-HTTP connections from this analysis — interestingly, these make up 11% of all TCP connections to Cloudflare that close with a FIN packet.</p><p>If you’re curious, we’ve also previously blogged about the details of Cloudflare’s <a href="https://blog.cloudflare.com/how-we-make-sense-of-too-much-data/"><u>overall logging mechanism</u></a> and <a href="https://blog.cloudflare.com/http-analytics-for-6m-requests-per-second-using-clickhouse/"><u>post-processing pipeline</u></a>.</p>
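The same TCP_INFO interface is available to any program on Linux. Below is a small, illustrative Python sketch — not Cloudflare's actual pipeline — that opens a throwaway loopback connection and unpacks the first few byte-sized fields of the kernel's struct tcp_info:

```python
import socket
import struct

def tcp_info_snapshot(sock):
    """Fetch the kernel's struct tcp_info for a connected TCP socket
    (Linux only) and unpack its first few byte-sized fields."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 256)
    # struct tcp_info begins with: __u8 tcpi_state, tcpi_ca_state,
    # tcpi_retransmits, tcpi_probes, tcpi_backoff, tcpi_options, ...
    state, ca_state, retransmits = struct.unpack_from("BBB", raw)
    return {"state": state, "ca_state": ca_state, "retransmits": retransmits}

# Usage: snapshot a fresh loopback connection.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
info = tcp_info_snapshot(cli)  # state 1 == TCP_ESTABLISHED
cli.close(); conn.close(); srv.close()
```

Production collectors read the full struct (durations, byte and packet counters, RTT estimates, and more) rather than just these leading fields.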
    <div>
      <h2>Visualizing connection characteristics</h2>
      <a href="#visualizing-connection-characteristics">
        
      </a>
    </div>
    <p>Although networks are inherently dynamic and trends can change over time, the large-scale patterns we observe across our global infrastructure remain remarkably consistent. While our data offers a global view of connection characteristics, distributions can still vary according to regional traffic patterns.</p><p>In our visualizations we represent characteristics with <a href="https://en.wikipedia.org/wiki/Cumulative_distribution_function"><u>cumulative distribution function (CDF)</u></a> graphs, specifically their <a href="https://en.wikipedia.org/wiki/Empirical_distribution_function"><u>empirical equivalents</u></a>. CDFs are particularly useful for gaining a macroscopic view of a distribution, giving a clear picture of both common and extreme cases in a single view. We use them in the illustrations below to make sense of large-scale patterns. To better interpret the distributions, we also employ log-scaled axes to account for the presence of extreme values common to networking data.</p><p>A long-standing question about Internet connections relates to “<a href="https://en.wikipedia.org/wiki/Elephant_flow"><u>Elephants and Mice</u></a>”; practitioners and researchers are well aware that most flows are small and some are huge, yet little data exists to inform the lines that divide them. This is where our presentation begins.</p>
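For readers unfamiliar with the tool, an empirical CDF is simple to compute. A minimal Python sketch, using toy packet counts rather than our telemetry:

```python
import numpy as np

def ecdf(samples):
    """Empirical CDF: sorted sample values paired with the cumulative
    fraction of observations at or below each value."""
    x = np.sort(np.asarray(samples, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Toy packet counts with one "elephant" among the "mice".
data = [12, 3, 7, 3, 100, 5, 9, 1, 2, 4]
x, y = ecdf(data)

# Read off the median: the smallest value whose cumulative
# fraction reaches 0.5.
median = x[np.searchsorted(y, 0.5)]  # -> 4.0
```

Plotting x against y, with a log-scaled x-axis, yields exactly the kind of curves shown throughout this post.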
    <div>
      <h3>Packet Counts</h3>
      <a href="#packet-counts">
        
      </a>
    </div>
    <p>Let’s start by taking a look at the distribution of the number of <i>response</i> packets sent in connections by Cloudflare servers back to the clients.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qaPCul0l7bdOQfaxL1Wbn/d0ef9cc108ba35d49593029baed7cb86/image12.png" />
          </figure><p>On the graph, the x-axis represents the number of response packets sent in log-scale, while the y-axis shows the cumulative fraction of connections below each packet count. The average response consists of roughly 240 packets, but the distribution is highly skewed. The median is 12 packets, which indicates that 50% of Internet connections consist of <i>very few packets</i>. Extending further to the 90th percentile, connections carry only 107 packets.</p><p>This stark contrast highlights the heavy-tailed nature of Internet traffic: while a few connections transport massive amounts of data—like video streams or large file transfers—most interactions are tiny, delivering small web objects, microservice traffic, or API responses.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Mf6VwD2Xq8aBwQP1V9aX5/1a20d6fa2caab5c719591db8b232f6a1/image11.png" />
          </figure><p>The above plot breaks down the packet count distribution by HTTP protocol version. For HTTP/1.X (both HTTP 1.0 and 1.1 combined) connections, the median response consists of just 10 packets, and 90% of connections carry fewer than 63 response packets. In contrast, HTTP/2 connections show larger responses, with a median of 16 packets and a 90th percentile of 170 packets. This difference likely reflects how HTTP/2 multiplexes multiple streams over a single connection, often consolidating more requests and responses into fewer connections, which increases the total number of packets exchanged per connection. HTTP/2 connections also have additional control-plane frames and flow-control messages that increase response packet counts.</p><p>Despite these differences, the combined view displays the same heavy-tailed pattern: a small fraction of connections carry enormous volumes of data (<a href="https://en.wikipedia.org/wiki/Elephant_flow"><u>elephant flows</u></a>), extending to millions of packets, while most remain lightweight (<a href="https://en.wikipedia.org/wiki/Mouse_flow"><u>mice flows</u></a>).</p><p>So far, we’ve focused on the total number of packets sent from our servers to clients, but another important dimension of connection behavior is the balance between packets sent and received, illustrated below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5VZeU0d2EYLxPl3SaTPJBb/6b46a793d6eea178838c4f5b2572caf1/image2.png" />
          </figure><p>The x-axis shows the ratio of packets sent by our servers to packets received from clients, visualized as a CDF. Across all connections, the median ratio is 0.91, meaning that in half of connections, clients send slightly more packets than the server responds with. This excess of client-side packets primarily reflects <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a> handshake initiation (ClientHello), HTTP request headers, and data acknowledgements (ACKs), causing the client to transmit more packets than the server returns with the content payload — particularly for the low-volume connections that dominate the distribution.</p><p>The mean ratio is higher, at 1.28, due to a long tail of download-heavy connections typical of CDN workloads. Most connections fall within a relatively narrow range: 10% of connections have a ratio below 0.67, and 90% are below 1.85. However, the long-tailed behavior highlights the diversity of Internet traffic: extreme values arise from both upload-heavy and download-heavy connections. The variance of 3.71 reflects these asymmetric flows, while the bulk of connections maintain a roughly balanced upload-to-download exchange.</p>
    <div>
      <h3>Bytes sent</h3>
      <a href="#bytes-sent">
        
      </a>
    </div>
    <p>Another dimension to look at the data is using bytes sent by our servers to clients, which captures the actual volume of data delivered over each connection. This metric is derived from tcpi_bytes_sent, also covering (re)transmitted segment payloads while excluding the TCP header, as defined in <a href="https://github.com/torvalds/linux/blob/v6.14/include/uapi/linux/tcp.h#L222-L312"><u>linux/tcp.h</u></a> and aligned with <a href="https://www.rfc-editor.org/rfc/rfc4898.html"><u>RFC 4898</u></a> (TCP Extended Statistics MIB).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VZs6F65RQjyyEUUxZSP2L/b0edd986738e9128c16dcbecb7d83761/image3.png" />
          </figure><p>The plots above break down bytes sent by HTTP protocol version. The x-axis represents the total bytes sent by our servers over each connection. The patterns are generally consistent with what we observed in the packet count distributions.</p><p>For HTTP/1.X, the median response delivers 4.8 KB, and 90% of connections send fewer than 51 KB. In contrast, HTTP/2 connections show slightly larger responses, with a median of 6 KB and a 90th percentile of 146 KB. The mean is much higher—224 KB for HTTP/1.X and 390 KB for HTTP/2—reflecting a small number of very large transfers. These long-tailed extreme flows can reach tens of gigabytes per connection, while some very lightweight connections carry minimal payloads: the minimum for HTTP/1.X is 115 bytes and for HTTP/2 it is 202 bytes.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2xRYaXYQbte6MszIT92uky/837ebdc842c9784a9c413ad886f7a5d6/image6.png" />
          </figure><p>By making use of the tcpi_bytes_received metric, we can now look at the ratio of bytes sent to bytes received per connection to better understand the balance of data exchange. This ratio captures how asymmetric each connection is — essentially, how much data our servers send compared to what they receive from clients. Across all connections, the median ratio is 3.78, meaning that in half of all cases, servers send nearly four times more data than they receive. The average is far higher at 81.06, showing a strong long tail driven by download-heavy flows. Again we see a heavy, long-tailed distribution: a small fraction of extreme cases push the ratio into the millions, reflecting very large data transfers towards clients.</p>
    <div>
      <h3>Connection duration</h3>
      <a href="#connection-duration">
        
      </a>
    </div>
    <p>While packet and byte counts capture how much data is exchanged, connection duration provides insight into how that exchange unfolds over time.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5noP7Acqu2Ky4hCGtETH1F/92c7bd220d57232fb40440624d227a78/image8.png" />
          </figure><p>The CDF above shows the distribution of connection durations (lifetimes) in seconds. A reminder that the x-axis is log-scale. Across all connections, the median duration is just 4.7 seconds, meaning half of connections complete in under five seconds. The mean is much higher at 96 seconds, reflecting a small number of long-lived connections that skew the average. Most connections fall within a window of 0.1 seconds (10th percentile) to 300 seconds (90th percentile). We also observe some extremely long-lived connections lasting multiple days, possibly maintained via <a href="https://developers.cloudflare.com/fundamentals/reference/tcp-connections/#tcp-connections-and-keep-alives"><u>keep-alives</u></a> for connection reuse without hitting <a href="https://developers.cloudflare.com/fundamentals/reference/connection-limits/"><u>our default idle timeout limits</u></a>. These long-lived connections typically represent persistent sessions or multimedia traffic, while the majority of web traffic remains short, bursty, and transient.</p>
    <div>
      <h3>Request counts</h3>
      <a href="#request-counts">
        
      </a>
    </div>
    <p>For web traffic, a single connection can carry multiple HTTP requests, revealing patterns of connection reuse and multiplexing.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hsoigL4rFtIyRJpdSUXwh/5ef82b3c0cf5b25b8dc13ed38761f895/image7.png" />
          </figure><p>The above shows the number of HTTP requests (in log-scale) that we see on a single connection, broken down by HTTP protocol version. Right away, we can see that for both HTTP/1.X (mean 3 requests) and HTTP/2 (mean 8 requests) connections, the median number of requests is just 1, reinforcing the prevalence of limited connection reuse. However, because HTTP/2 supports multiplexing multiple streams over a single connection, the 90th percentile rises to 10 requests, with occasional extreme cases carrying thousands of requests, which can be amplified due to <a href="https://blog.cloudflare.com/connection-coalescing-experiments/"><u>connection coalescing</u></a>. In contrast, HTTP/1.X connections have much lower request counts. This aligns with protocol design: HTTP/1.0 followed a “one request per connection” philosophy, while HTTP/1.1 introduced persistent connections — even combining both versions, it’s rare to see HTTP/1.X connections carrying more than two requests at the 90th percentile.</p><p>The prevalence of short-lived connections can be partly explained by automated clients or scripts that tend to open new connections rather than maintaining long-lived sessions. To explore this intuition, we split the data between traffic originating from data centers (likely automated) and typical user traffic (user-driven), using client ASNs as a proxy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1DhUbNv8cjQVGOqKUai7KU/fecc8eaa488ec216bfb14084a518501b/image9.png" />
          </figure><p>The plot above shows that non-DC (user-driven) traffic has slightly higher request counts per connection, consistent with browsers or apps fetching multiple resources over a single persistent connection, with a mean of 5 requests and a 90th percentile of 5 requests per connection. In contrast, DC-originated traffic has a mean of roughly 3 requests and a 90th percentile of 2, validating our expectation. Despite these differences, the median number of requests remains 1 for both groups, highlighting that, regardless of a connection’s origin, most are genuinely brief.</p>
    <div>
      <h2>Inferring path characteristics from connection-level data</h2>
      <a href="#inferring-path-characteristics-from-connection-level-data">
        
      </a>
    </div>
    <p>Connection-level measurements can also provide insights into underlying path characteristics. Let’s examine this in more detail.</p>
    <div>
      <h3>Path MTU</h3>
      <a href="#path-mtu">
        
      </a>
    </div>
    <p>The maximum transmission unit (<a href="https://www.cloudflare.com/learning/network-layer/what-is-mtu/"><u>MTU</u></a>) along the network path is often referred to as the Path MTU (PMTU). PMTU determines the largest packet size that can traverse a connection without fragmentation or packet drop, affecting throughput, efficiency, and latency. The Linux TCP stack on our servers tracks the largest segment size that can be sent without fragmentation along the path for a connection, as part of <a href="https://blog.cloudflare.com/path-mtu-discovery-in-practice/"><u>Path MTU discovery.</u></a></p><p>From that data we saw that the median (and the 90th percentile!) PMTU was 1,500 bytes, which aligns with the typical Ethernet MTU and is <a href="https://en.wikipedia.org/wiki/Maximum_transmission_unit"><u>considered standard</u></a> for most Internet paths. Interestingly, the 10th percentile sits at 1,420 bytes, reflecting cases where paths include network links with slightly smaller MTUs—common in some <a href="https://blog.cloudflare.com/migrating-from-vpn-to-access/"><u>VPNs</u></a>, <a href="https://blog.cloudflare.com/increasing-ipv6-mtu/"><u>IPv6-to-IPv4 tunnels</u></a>, or older networking equipment that imposes stricter limits to avoid fragmentation. At the extreme, we have seen PMTUs as small as 552 bytes for IPv4 connections, which corresponds to the minimum PMTU value allowed <a href="https://www.kernel.org/doc/html/v6.5/networking/ip-sysctl.html#:~:text=Default%3A%20FALSE-,min_pmtu,-%2D%20INTEGER"><u>by the Linux kernel</u></a>.</p>
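You can ask the kernel for its current PMTU estimate toward a destination yourself. A hedged Python sketch (Linux-only; IP_MTU is a Linux socket option, so we fall back to its numeric value in case the socket module on a given build does not export it):

```python
import socket

# Linux-specific option; fall back to its numeric value (14) if the
# socket module on this build does not export it.
IP_MTU = getattr(socket, "IP_MTU", 14)

def path_mtu(host, port):
    """Connect to (host, port) and ask the kernel for its current
    Path MTU estimate toward that destination (Linux only)."""
    with socket.create_connection((host, port)) as s:
        return s.getsockopt(socket.IPPROTO_IP, IP_MTU)

# Usage against a throwaway loopback listener; the loopback MTU is
# large (commonly 65536), so this is an upper-bound illustration.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
mtu = path_mtu(*srv.getsockname())
srv.close()
```

Against a real Internet destination you would typically see 1,500 or slightly less, for the reasons described above.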
    <div>
      <h3>Initial congestion window</h3>
      <a href="#initial-congestion-window">
        
      </a>
    </div>
    <p>A key parameter in transport protocols is the congestion window (CWND), which is the number of packets that can be transmitted without waiting for an acknowledgement from the receiver. We call these packets or bytes “in-flight.” The congestion window evolves dynamically throughout a connection.</p><p>However, the initial congestion window (ICWND) at the start of a data transfer can have an outsized impact, especially for short-lived connections, which dominate Internet traffic as we’ve seen above. If the ICWND is set too low, small and medium transfers take additional round-trip times to reach bottleneck bandwidth, slowing delivery. Conversely, if it’s too high, the sender risks overwhelming the network, causing unnecessary packet loss and retransmissions — potentially for all connections that share the bottleneck link.</p><p>A reasonable estimate of the ICWND can be taken as the congestion window size at the instant the TCP sender transitions out of <a href="https://www.rfc-editor.org/rfc/rfc5681#section-3.1"><u>slow start</u></a>. This transition marks the point at which the sender shifts from exponential growth to congestion avoidance, having inferred that further growth may risk congestion. The figure below shows the distribution of congestion window sizes at the moment slow start exits — as calculated by <a href="https://blog.cloudflare.com/http-2-prioritization-with-nginx/#bbr-congestion-control"><u>BBR</u></a>. The median is roughly 464 KB, which corresponds to about 310 packets per connection with a typical 1,500-byte MTU, while extreme flows carry tens of megabytes in flight. This variance reflects the diversity of TCP connections and the dynamically evolving nature of the networks carrying traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BzqE6HSQgkriWisqS3Yx3/de4dc12a453d162884e9a015ccb40348/image4.png" />
          </figure><p>It’s important to emphasize that these values reflect a mix of network paths, including not only paths between Cloudflare and end users, but also between Cloudflare and neighboring datacenters, which are typically well provisioned and offer higher bandwidth.</p><p>Our initial inspection of the above distribution left us doubtful, because the values seem very high. We then realized the numbers are an artifact of behaviour specific to BBR, which sets the congestion window higher than its estimate of the path’s available capacity, the <a href="https://en.wikipedia.org/wiki/Bandwidth-delay_product"><u>bandwidth-delay product (BDP)</u></a>. The inflated value is <a href="https://www.ietf.org/archive/id/draft-cardwell-iccrg-bbr-congestion-control-01.html#name-state-machine-operation"><u>by design</u></a>. To test the hypothesis, we re-plot the distribution from above in the figure below alongside BBR’s estimate of BDP. The difference is clear between BBR’s congestion window of unacknowledged packets and its BDP estimate.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34YFSv4Zdp82qszNM79XsH/3c147dfd5c5006fe55abb53dab47bef1/image10.png" />
          </figure><p>The above plot adds the computed BDP values in context with connection telemetry. The median BDP comes out to roughly 77 KB, or about 50 packets. If we compare this to the congestion window distribution above, we see that BDP estimates from recently closed connections are much more stable.</p><p>We are using these insights to help identify reasonable initial congestion window sizes and the circumstances that warrant them. Our own internal experiments make clear that ICWND sizes can affect performance by as much as 30-40% for smaller connections. Such insights will potentially help to revisit efforts to find better initial congestion window values, which have defaulted to <a href="https://datatracker.ietf.org/doc/html/rfc6928"><u>10 packets</u></a> for more than a decade.</p>
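The arithmetic linking a BDP in bytes to a count of in-flight packets is simple. A small Python sketch with illustrative figures (the bandwidth and RTT below are made-up example values, not measurements from this post):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: the number of bytes that can be
    'in flight' on a path before the first acknowledgement returns."""
    return bandwidth_bps / 8 * rtt_seconds

# Illustrative path: 50 Mbit/s of bottleneck bandwidth, 12 ms RTT.
bdp = bdp_bytes(50e6, 0.012)  # 75,000 bytes

# With a 1,500-byte MTU, each full-size segment carries roughly
# 1,448 bytes of payload after IP/TCP headers and timestamps.
segments = bdp / 1448  # roughly 50 full-size segments in flight
```

A path like this lands very close to the median BDP observed in our telemetry, which is why ~50 packets is a useful mental benchmark when reasoning about window sizes.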
    <div>
      <h3>Deeper understanding, better performance</h3>
      <a href="#deeper-understanding-better-performance">
        
      </a>
    </div>
    <p>We observed that Internet connections are highly heterogeneous, confirming decades of observations of strong heavy-tail characteristics consistent with the “<a href="https://en.wikipedia.org/wiki/Elephant_flow"><u>elephants and mice</u></a>” phenomenon. Ratios of upload to download bytes are unsurprising for larger flows, but surprisingly small for short flows, highlighting the asymmetric nature of Internet traffic. Understanding these connection characteristics continues to inform ways to improve connection performance, reliability, and user experience.</p><p>We will continue to build on this work, and plan to publish connection-level statistics on <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> so that others can similarly benefit.</p><p>Our work on improving our network is ongoing, and we welcome researchers, academics, <a href="https://blog.cloudflare.com/cloudflare-1111-intern-program/"><u>interns</u></a>, and anyone interested in this space to reach out at <a href="mailto:ask-research@cloudflare.com"><u>ask-research@cloudflare.com</u></a>. By sharing knowledge and working together, we all can continue to make the Internet faster, safer, and more reliable for everyone.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Insights]]></category>
            <category><![CDATA[TCP]]></category>
            <guid isPermaLink="false">5jyi6dhHiLQu3BVMVGKrVG</guid>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Peter Wu</dc:creator>
        </item>
        <item>
            <title><![CDATA[One IP address, many users: detecting CGNAT to reduce collateral effects]]></title>
            <link>https://blog.cloudflare.com/detecting-cgn-to-reduce-collateral-damage/</link>
            <pubDate>Wed, 29 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ IPv4 scarcity drives widespread use of Carrier-Grade Network Address Translation, a practice in ISPs and mobile networks that places many users behind each IP address, along with their collected activity and volumes of traffic. We introduce the method we’ve developed to detect large-scale IP sharing globally and mitigate the issues that result.  ]]></description>
            <content:encoded><![CDATA[ <p>IP addresses have historically been treated as stable identifiers for non-routing purposes such as geolocation and security operations. Many operational and security mechanisms, such as blocklists, rate-limiting, and anomaly detection, rely on the assumption that a single IP address represents a cohesive, accountable entity or even, possibly, a specific user or device.</p><p>But the structure of the Internet has changed, and those assumptions can no longer be made. Today, a single IPv4 address may represent hundreds or even thousands of users due to widespread use of <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT"><u>Carrier-Grade Network Address Translation (CGNAT)</u></a>, VPNs, and proxy middleboxes. This concentration of traffic can result in <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>significant collateral damage</u></a> – especially to users in developing regions of the world – when security mechanisms are applied without taking into account the multi-user nature of IPs.</p><p>This blog post presents our approach to detecting large-scale IP sharing globally. We describe how we <a href="https://www.cloudflare.com/learning/ai/how-to-secure-training-data-against-ai-data-leaks/">build reliable training data</a>, and how detection can help avoid unintentional bias affecting users in regions where IP sharing is most prevalent. Arguably, it's those regional variations that motivate our efforts more than anything else.</p>
    <div>
      <h2>Why this matters: Potential socioeconomic bias</h2>
      <a href="#why-this-matters-potential-socioeconomic-bias">
        
      </a>
    </div>
    <p>Our work was initially motivated by a simple observation: CGNAT is a likely unseen source of bias on the Internet. Those biases would be more pronounced wherever there are more users and fewer addresses, such as in developing regions. And these biases can have profound implications for user experience, network operations, and digital equity.</p><p>This is understandable for many reasons, not least necessity. Countries in the developing world often have significantly fewer available IPs, and more users. The disparity is a historical artifact of how the Internet grew: the largest blocks of IPv4 addresses were allocated decades ago, primarily to organizations in North America and Europe, leaving a much smaller pool for regions where Internet adoption expanded later. </p><p>To visualize the IPv4 allocation gap, we plot country-level ratios of users to IP addresses in the figure below. We take online user estimates from the <a href="https://data.worldbank.org/indicator/IT.NET.USER.ZS"><u>World Bank Group</u></a> and the number of IP addresses in a country from Regional Internet Registry (RIR) records. The colour-coded map that emerges shows that the usage of each IP address is more concentrated in regions that generally have poor Internet penetration. For example, large portions of Africa and South Asia appear with the highest user-to-IP ratios. Conversely, the lowest user-to-IP ratios appear in Australia, Canada, Europe, and the USA — the very countries that otherwise have the highest Internet user penetration numbers.</p>
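The map's underlying computation is straightforward. A toy Python sketch with hypothetical country figures (illustrative values only, not the World Bank or RIR numbers used for the real map):

```python
# Hypothetical country-level figures: estimated online users versus
# allocated IPv4 addresses (illustrative values only).
countries = {
    "country_a": {"users": 600_000_000, "ipv4_addrs": 40_000_000},
    "country_b": {"users": 300_000_000, "ipv4_addrs": 1_600_000_000},
}

# Users per IPv4 address: values well above 1 imply heavy address
# sharing (e.g. CGNAT); values below 1 mean addresses outnumber users.
ratios = {
    name: d["users"] / d["ipv4_addrs"] for name, d in countries.items()
}
# country_a: 15.0 users per address; country_b: 0.1875
```

In the real data, the high-ratio end of this computation is exactly where CGNAT deployment, and therefore collateral damage from IP-based decisions, concentrates.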
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YBdqPx0ALt7pY7rmQZyLQ/049922bae657a715728700c764c4af16/BLOG-3046_2.png" />
          </figure><p>The scarcity of IPv4 address space means that regional differences can only worsen as Internet penetration rates increase. A natural consequence of increased demand in developing regions is that ISPs will rely even more heavily on CGNAT, which is compounded by the fact that CGNAT is common in the mobile networks that users in developing regions depend on so heavily. All of this means that <a href="https://datatracker.ietf.org/doc/html/rfc7021"><u>actions known to be based</u></a> on IP reputation or behaviour would disproportionately affect developing economies. </p><p>Cloudflare is a global network in a global Internet. We are sharing our methodology so that others might benefit from our experience and help to mitigate unintended effects. First, let’s better understand CGNAT.</p>
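<p>As a minimal sketch of the ratio behind the map above (in Python, with made-up figures standing in for the World Bank and RIR inputs):</p>

```python
# Hypothetical country-level user-to-IP ratios. The country codes and numbers
# below are invented for illustration; real inputs would come from World Bank
# online user estimates and RIR delegation records.
online_users = {"AA": 50_000_000, "BB": 60_000_000}
allocated_ips = {"AA": 2_000_000, "BB": 120_000_000}

def users_per_ip(country: str) -> float:
    """Higher values suggest heavier sharing of each address (e.g. CGNAT)."""
    return online_users[country] / allocated_ips[country]

# Country "AA" has 25 users per address; "BB" has addresses to spare.
ratios = {c: users_per_ip(c) for c in online_users}
```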
    <div>
      <h3>When one IP address serves multiple users</h3>
      <a href="#when-one-ip-address-serves-multiple-users">
        
      </a>
    </div>
    <p>Large-scale IP address sharing is primarily achieved through two distinct methods. The first, and more familiar, involves services like VPNs and proxies. These tools emerge from a need to secure corporate networks or improve users' privacy, but can be used to circumvent censorship or even improve performance. Their deployment also tends to concentrate traffic from many users onto a small set of exit IPs. Typically, individuals are aware they are using such a service, whether for personal use or as part of a corporate network.</p><p>Separately, another form of large-scale IP sharing often goes unnoticed by users: <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT"><u>Carrier-Grade NAT (CGNAT)</u></a>. One way to explain CGNAT is to start with a much smaller version of network address translation (NAT) that very likely exists in your home broadband router, formally called Customer Premises Equipment (CPE), which translates unseen private addresses in the home to visible and routable addresses in the ISP. Once traffic leaves the home, an ISP may add an additional, carrier-level address translation that causes many households or unrelated devices to appear behind a single IP address.</p><p>The crucial difference between these two forms of large-scale IP sharing is user choice: carrier-grade address sharing is not a user choice, but is configured directly by Internet Service Providers (ISPs) within their access networks. Users are not aware that CGNATs are in use. </p><p>The primary driver for this technology, understandably, is the exhaustion of the IPv4 address space. IPv4's 32-bit architecture supports only 4.3 billion unique addresses — a capacity that, while once seemingly vast, has been completely outpaced by the Internet's explosive growth. By the early 2010s, Regional Internet Registries (RIRs) had depleted their pools of unallocated IPv4 addresses. 
This left ISPs unable to easily acquire new address blocks, forcing them to maximize the use of their existing allocations.</p><p>While the long-term solution is the transition to IPv6, CGNAT emerged as the immediate, practical workaround. Instead of assigning a unique public IP address to each customer, ISPs use CGNAT to place multiple subscribers behind a single, shared IP address. This practice solves the problem of IP address scarcity. Since translated addresses are not publicly routable, CGNATs have also had the positive side effect of protecting many home devices that might be vulnerable to compromise. </p><p>CGNATs also create significant operational fallout stemming from the fact that hundreds or even thousands of clients can appear to originate from a single IP address. <b>This means an IP-based security system may inadvertently block or throttle large groups of users as a result of a single user behind the CGNAT engaging in malicious activity.</b></p><p>This isn't a new or niche issue. It has been recognized for years by the Internet Engineering Task Force (IETF), the organization that develops the core technical standards for the Internet. These standards, known as Requests for Comments (RFCs), act as the official blueprints for how the Internet should operate. <a href="https://www.rfc-editor.org/rfc/rfc6269.html"><u>RFC 6269</u></a>, for example, discusses the challenges of IP address sharing, while <a href="https://datatracker.ietf.org/doc/html/rfc7021"><u>RFC 7021</u></a> examines the impact of CGNAT on network applications. 
Both explain that traditional abuse-mitigation techniques, such as blocklisting or rate-limiting, assume a one-to-one relationship between IP addresses and users: when malicious activity is detected, the offending IP address can be blocked to prevent further abuse.</p><p>In shared IPv4 environments, such as those using CGNAT or other address-sharing techniques, this assumption breaks down because multiple subscribers can appear under the same public IP. Blocking the shared IP therefore penalizes many innocent users along with the abuser. In 2015, Ofcom, the UK's telecommunications regulator, reiterated these concerns in a <a href="https://oxil.uk/research/mc159-report-on-the-implications-of-carrier-grade-network-address-translators-final-report"><u>report</u></a> on the implications of CGNAT where they noted that, “In the event that an IPv4 address is blocked or blacklisted as a source of spam, the impact on a CGNAT would be greater, potentially affecting an entire subscriber base.” </p><p>While the hope was that CGNAT would be only a temporary solution until the eventual switch to IPv6, as the old proverb says, nothing is more permanent than a temporary solution. As IPv6 deployment continues to lag, <a href="https://blog.apnic.net/2022/01/19/ip-addressing-in-2021/"><u>CGNAT deployments have become increasingly common</u></a>, and so have the related problems. </p>
    <div>
      <h2>CGNAT detection at Cloudflare</h2>
      <a href="#cgnat-detection-at-cloudflare">
        
      </a>
    </div>
    <p>To enable a fairer treatment of users behind CGNAT IPs by security techniques that rely on IP reputation, our goal is to identify large-scale IP sharing. This allows traffic filtering to be better calibrated and collateral damage minimized. Additionally, we want to distinguish CGNAT IPs from IPs shared by other large-scale sharing (LSS) technologies, such as VPNs and proxies, because we may need to take different approaches to different kinds of IP sharing.</p><p>To do this, we decided to take advantage of Cloudflare’s extensive view of active client IPs, and build a supervised learning classifier that would distinguish CGNAT and VPN/proxy IPs from IPs allocated to a single subscriber (non-LSS IPs), based on behavioural characteristics. The figure below shows an overview of our supervised classifier: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tFXZByKRCYxVaAFDG0Xda/d81e7f09b5d12e03e39c266696df9cc3/BLOG-3046_3.png" />
          </figure><p>While our classification approach is straightforward, a significant challenge is the lack of a reliable, comprehensive, and labeled dataset of CGNAT IPs for our training dataset.</p>
    <div>
      <h3>Detecting CGNAT using public data sources </h3>
      <a href="#detecting-cgnat-using-public-data-sources">
        
      </a>
    </div>
    <p>Detection begins by building an initial dataset of IPs believed to be associated with CGNAT. Cloudflare has vast HTTP and traffic logs, but unfortunately no request carries a signal or label to indicate whether or not its source IP is a CGNAT address. </p><p>To build an extensive labelled dataset to train our ML classifier, we employ a combination of network measurement techniques, as described below. We rely on public data sources to help disambiguate an initial set of large-scale shared IP addresses from others in Cloudflare’s logs.   </p>
    <div>
      <h4>Distributed Traceroutes</h4>
      <a href="#distributed-traceroutes">
        
      </a>
    </div>
    <p>The presence of a client behind CGNAT can often be inferred through traceroute analysis. CGNAT requires ISPs to insert a NAT step that typically uses the Shared Address Space (<a href="https://datatracker.ietf.org/doc/html/rfc6598"><u>RFC 6598</u></a>) after the customer premises equipment (CPE). By running a traceroute from the client to its own public IP and examining the hop sequence, the appearance of an address within 100.64.0.0/10 between the first private hop (e.g., 192.168.1.1) and the public IP is a strong indicator of CGNAT.</p><p>Traceroute can also reveal multi-level NAT, which CGNAT requires, as shown in the diagram below. If the ISP assigns the CPE a private <a href="https://datatracker.ietf.org/doc/html/rfc1918"><u>RFC 1918</u></a> address that appears right after the local hop, this indicates at least two NAT layers. While ISPs sometimes use private addresses internally without CGNAT, observing private or shared ranges immediately downstream combined with multiple hops before the public IP strongly suggests CGNAT or equivalent multi-layer NAT.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57k4gwGCHcPggIWtSy36HU/6cf8173c1a4c568caa25a1344a516e9e/BLOG-3046_4.png" />
          </figure><p>Although traceroute accuracy depends on router configurations, detecting private and shared IP ranges is a reliable way to identify large-scale IP sharing. We apply this method to distributed traceroutes from over 9,000 RIPE Atlas probes to classify hosts as behind CGNAT, single-layer NAT, or no NAT.</p>
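<p>A simplified sketch of this hop-sequence check, assuming we already have the hop IPs from a traceroute toward the client's own public address (the classification rules here are illustrative, not our production logic):</p>

```python
import ipaddress

# Address blocks used to classify traceroute hops: the RFC 1918 private
# ranges (home/CPE NAT) and the RFC 6598 shared address space
# (100.64.0.0/10) reserved for carrier-grade NAT.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
RFC6598 = ipaddress.ip_network("100.64.0.0/10")

def classify_path(hops: list[str]) -> str:
    """Rough NAT classification from the hop IPs of a traceroute toward the
    client's own public IP. A hop in 100.64.0.0/10 is a strong CGNAT
    indicator; two or more private hops suggest multi-layer NAT."""
    addrs = [ipaddress.ip_address(h) for h in hops]
    if any(a in RFC6598 for a in addrs):
        return "cgnat"
    private = sum(1 for a in addrs if any(a in n for n in RFC1918))
    if private >= 2:
        return "multi-layer-nat"
    if private == 1:
        return "single-nat"
    return "no-nat"
```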
    <div>
      <h4>Scraping WHOIS and PTR records</h4>
      <a href="#scraping-whois-and-ptr-records">
        
      </a>
    </div>
    <p>Many operators encode metadata about their IPs in the corresponding reverse DNS pointer (PTR) record that can signal administrative attributes and geographic information. We first query the DNS for PTR records for the full IPv4 space and then filter for a set of known keywords from the responses that indicate a CGNAT deployment. For example, each of the following three records matches a keyword (<code>cgnat</code>, <code>cgn</code> or <code>lsn</code>) used to detect CGNAT address space:</p><p><code>node-lsn.pool-1-0.dynamic.totinternet.net
103-246-52-9.gw1-cgnat.mobile.ufone.nz
cgn.gsw2.as64098.net</code></p><p>WHOIS and Internet Routing Registry (IRR) records may also contain organizational names, remarks, or allocation details that reveal whether a block is used for CGNAT pools or residential assignments. </p><p>Given that both PTR and WHOIS records may be manually maintained, and therefore stale, we sanitize the extracted data by validating against customer and market reports that the corresponding ISPs indeed use CGNAT. </p>
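<p>The keyword filter can be sketched as follows; the keyword list is the illustrative subset named above, and matching on hostname tokens (split at dots and dashes) avoids false hits inside unrelated words:</p>

```python
import re

# Keywords from PTR records that indicate CGNAT address space; this is an
# illustrative subset of the full keyword list used in practice.
CGNAT_KEYWORDS = ("cgnat", "cgn", "lsn")
TOKEN = re.compile(r"[a-z0-9]+")

def looks_like_cgnat(ptr: str) -> bool:
    """True if any hostname token of the PTR record matches a keyword."""
    tokens = TOKEN.findall(ptr.lower())
    return any(k in tokens for k in CGNAT_KEYWORDS)
```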
    <div>
      <h4>Collecting VPN and proxy IPs </h4>
      <a href="#collecting-vpn-and-proxy-ips">
        
      </a>
    </div>
    <p>Compiling a list of VPN and proxy IPs is more straightforward, as we can directly find such IPs in public service directories for anonymizers. We also subscribe to multiple VPN providers and collect the exit IPs they assign to our clients by connecting to a unique HTTP endpoint under our control. </p>
    <div>
      <h2>Modeling CGNAT with machine learning</h2>
      <a href="#modeling-cgnat-with-machine-learning">
        
      </a>
    </div>
    <p>By combining the above techniques, we accumulated a labeled dataset of more than 200K CGNAT IPs, 180K VPN &amp; proxy IPs, and close to 900K IPs that are not LSS IPs. These were the entry points to modeling with machine learning.</p>
    <div>
      <h3>Feature selection</h3>
      <a href="#feature-selection">
        
      </a>
    </div>
    <p>Our hypothesis was that aggregated activity from CGNAT IPs is distinguishable from activity generated from other non-CGNAT IP addresses. Our feature extraction is an evaluation of that hypothesis: since networks do not disclose CGNAT and other uses of IPs, the quality of our inference depends strictly on our confidence in the training data. We claim the key discriminator is diversity, not just volume. For example, VM-hosted scanners may generate high numbers of requests, but with low information diversity. Similarly, globally routable CPEs may have individually unique characteristics, but with volumes that are less likely to be caught at lower sampling rates.</p><p>In our feature extraction, we parse a 1% sample of HTTP request logs for distinguishing features of IPs compiled in our reference set, and the same features for the corresponding /24 prefix (namely, IPs sharing the same first 24 bits). We analyse these features for each VPN, proxy, CGNAT, and non-LSS IP. We find that features from the following broad categories are key discriminators for the different types of IPs in our training dataset:</p><ul><li><p><b>Client-side signals:</b> We analyze the aggregate properties of clients connecting from an IP. A large, diverse user base (like on a CGNAT) naturally presents a much wider statistical variety of client behaviors and connection parameters than a single-tenant server or a small business proxy.</p></li><li><p><b>Network and transport-level behaviors:</b> We examine traffic at the network and transport layers. The way a large-scale network appliance (like a CGNAT) manages and routes connections often leaves subtle, measurable artifacts in its traffic patterns, such as in port allocation and observed network timing.</p></li><li><p><b>Traffic volume and destination diversity:</b> We also model the volume and "shape" of the traffic. 
An IP representing thousands of independent users will, on average, generate a higher volume of requests and target a much wider, less correlated set of destinations than an IP representing a single user.</p></li></ul><p>Crucially, to distinguish CGNAT from VPNs and proxies (which is absolutely necessary for calibrated security filtering), we had to aggregate these features at two different scopes: per IP and per /24 prefix. CGNAT deployments are typically allocated large blocks of IPs, whereas VPN IPs are more scattered across different prefixes. </p>
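<p>A minimal sketch of the two aggregation scopes, using destination diversity as a stand-in for the richer feature set described above (the input format is hypothetical):</p>

```python
import ipaddress
from collections import defaultdict

# Compute the same diversity feature (distinct destinations seen in sampled
# requests) at two scopes: per client IP and per covering /24 prefix.
def diversity_features(requests: list[tuple[str, str]]):
    """requests: (client_ip, destination_host) pairs from a sampled log."""
    per_ip = defaultdict(set)
    per_prefix = defaultdict(set)
    for ip, dest in requests:
        per_ip[ip].add(dest)
        # strict=False masks the host bits to get the covering /24.
        prefix = str(ipaddress.ip_network(f"{ip}/24", strict=False))
        per_prefix[prefix].add(dest)
    return ({ip: len(d) for ip, d in per_ip.items()},
            {p: len(d) for p, d in per_prefix.items()})
```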
    <div>
      <h3>Classification results</h3>
      <a href="#classification-results">
        
      </a>
    </div>
    <p>We compute the above features from HTTP logs over 24-hour intervals to increase data volume and reduce noise due to DHCP IP reallocation. The dataset is split into 70% training and 30% testing sets with disjoint /24 prefixes, and VPN and proxy labels are merged due to their similarity and lower operational importance compared to CGNAT detection.</p><p>Then we train a multi-class <a href="https://xgboost.readthedocs.io/en/stable/"><u>XGBoost</u></a> model with class weighting to address imbalance, assigning each IP to the class with the highest predicted probability. XGBoost is well-suited for this task because it efficiently handles large feature sets, offers strong regularization to prevent overfitting, and delivers high accuracy with limited parameter tuning. The classifier achieves 0.98 accuracy, 0.97 weighted F1, and 0.04 log loss. The figure below shows the confusion matrix of the classification.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26i81Pe0yjlftHfIDrjB5X/45d001447fc52001a25176c8036a92cb/BLOG-3046_5.png" />
          </figure><p>Our model is accurate for all three labels. The errors observed are mainly misclassifications of VPN/proxy IPs as CGNATs, mostly for VPN/proxy IPs within a /24 prefix that is also shared by broadband users outside of the proxy service. We also evaluate prediction accuracy using <a href="https://scikit-learn.org/stable/modules/cross_validation.html"><u>k-fold cross validation</u></a>, which provides a more reliable estimate of performance by training and validating on multiple data splits, reducing variance and overfitting compared to a single train–test split. We use 10 folds and evaluate the <a href="https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc"><u>Area Under the ROC Curve</u></a> (AUC) and the multi-class log loss. We achieve a macro-average AUC of 0.9946 (σ=0.0069) and log loss of 0.0429 (σ=0.0115). Prefix-level features are the most important contributors to classification performance.</p>
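<p>The /24-disjoint train–test split described above can be sketched as follows; this is an illustrative reconstruction, not our production pipeline:</p>

```python
import ipaddress
import random

# Split labeled IPs so that no /24 prefix contributes to both the training
# and test sets, preventing leakage of prefix-level features across the split.
def split_by_prefix(labeled_ips: dict[str, str], train_frac: float = 0.7,
                    seed: int = 0):
    """labeled_ips maps IP -> class label (e.g. 'cgnat', 'vpn', 'non-lss')."""
    by_prefix = {}
    for ip in labeled_ips:
        pfx = str(ipaddress.ip_network(f"{ip}/24", strict=False))
        by_prefix.setdefault(pfx, []).append(ip)
    prefixes = sorted(by_prefix)
    random.Random(seed).shuffle(prefixes)
    cut = int(len(prefixes) * train_frac)
    train = [(ip, labeled_ips[ip]) for p in prefixes[:cut] for ip in by_prefix[p]]
    test = [(ip, labeled_ips[ip]) for p in prefixes[cut:] for ip in by_prefix[p]]
    return train, test
```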
    <div>
      <h3>Users behind CGNAT are more likely to be rate limited</h3>
      <a href="#users-behind-cgnat-are-more-likely-to-be-rate-limited">
        
      </a>
    </div>
    <p>The figure below shows the daily number of CGNAT IP inferences generated by our CDN-deployed detection service between December 17, 2024 and January 9, 2025. The number of inferences remains largely stable, with noticeable dips during weekends and holidays such as Christmas and New Year’s Day. This pattern reflects expected seasonal variations, as lower traffic volumes during these periods lead to fewer active IP ranges and reduced request activity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hiYstptHAK6tFQrM2kEsf/7f8192051156fc6eaecdf26a829ef11c/BLOG-3046_6.png" />
          </figure><p>Next, recall that actions that rely on IP reputation or behaviour may be unduly influenced by CGNATs. One such example is bot detection. In an evaluation of our systems, we find that bot detection is resilient to those biases. However, we also learned that customers are more likely to rate limit IPs that we find are CGNATs.</p><p>We analyze bot labels by measuring how often requests from CGNAT and non-CGNAT IPs are labeled as bots. <a href="https://www.cloudflare.com/resources/assets/slt3lc6tev37/JYknFdAeCVBBWWgQUtNZr/61844a850c5bba6b647d65e962c31c9c/BDES-863_Bot_Management_re_edit-_How_it_Works_r3.pdf"><u>Cloudflare assigns a bot score</u></a> to each HTTP request using CatBoost models trained on various request features, and these scores are then exposed through the Web Application Firewall (WAF), allowing customers to apply filtering rules. The median bot rate is nearly identical for CGNAT (4.8%) and non-CGNAT (4.7%) IPs. However, the mean bot rate is notably lower for CGNATs (7%) than for non-CGNATs (13.1%), indicating different underlying distributions. Non-CGNAT IPs show a much wider spread, with some reaching 100% bot rates, while CGNAT IPs cluster mostly below 15%. This suggests that non-CGNAT IPs tend to be dominated by either human or bot activity, whereas CGNAT IPs reflect mixed behavior from many end users, with human traffic prevailing.</p><p>Interestingly, despite bot scores that indicate traffic is more likely to be from human users, CGNAT IPs are subject to rate limiting three times more often than non-CGNAT IPs. 
This is likely because multiple users share the same public IP, increasing the chances that legitimate traffic gets caught by customers’ bot mitigation and firewall rules.</p><p>This tells us that users behind CGNAT IPs are indeed susceptible to collateral effects, and identifying those IPs allows us to tune mitigation strategies to disrupt malicious traffic quickly while reducing collateral impact on benign users behind the same address.</p>
    <div>
      <h2>A global view of the CGNAT ecosystem</h2>
      <a href="#a-global-view-of-the-cgnat-ecosystem">
        
      </a>
    </div>
    <p>One of the early motivations of this work was to understand whether our knowledge about IP addresses might hide a bias along socio-economic boundaries—and in particular whether an action on an IP address may disproportionately affect populations in developing nations, often referred to as the Global South. Identifying where different IPs exist is a necessary first step.</p><p>The map below shows the fraction of a country’s inferred CGNAT IPs over all IPs observed in the country. Regions with a greater reliance on CGNAT appear darker on the map. This view highlights where reliance on CGNAT is geographically concentrated; for example, much of Africa and Central and Southeast Asia rely on CGNATs. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P2XcuEebKfcYdCgykMWuP/4a0aa86bd619ba24533de6862175e919/BLOG-3046_7.png" />
          </figure><p>As further evidence of continental differences, the boxplot below shows the distribution of distinct user agents per IP across /24 prefixes inferred to be part of a CGNAT deployment in each continent. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bqJSHexFuXFs4A8am1ibQ/591be6880e8f58c9d61b147aaf0487f5/BLOG-3046_8.png" />
          </figure><p>Notably, Africa has a much higher ratio of user agents to IP addresses than other regions, suggesting more clients share the same IP in African <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a>. So, not only do African ISPs rely more extensively on CGNAT, but the number of clients behind each CGNAT IP is higher. </p><p>While the per-country CGNAT deployment rate is consistent with the per-country users-per-IP ratio, the ratio alone is not sufficient to confirm deployment. The scatterplot below shows the number of users (according to <a href="https://stats.labs.apnic.net/aspop/"><u>APNIC user estimates</u></a>) and the number of IPs per ASN for ASNs where we detect CGNAT. ASNs that have fewer available IP addresses than their user base appear below the diagonal. Interestingly, the scatterplot indicates that many ASNs with more addresses than users still choose to deploy CGNAT. Presumably, these ASNs provide additional services beyond broadband, preventing them from dedicating their entire address pool to subscribers. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34GKPlJWvkwudU5MbOtots/c883760a7c448b12995997e3e6e51979/BLOG-3046_9.png" />
          </figure>
    <div>
      <h3>What this means for everyday Internet users</h3>
      <a href="#what-this-means-for-everyday-internet-users">
        
      </a>
    </div>
    <p>Accurate detection of CGNAT IPs is crucial for minimizing collateral effects in network operations and for ensuring fair and effective application of security measures. Our findings underscore the potential socio-economic and geographical variations in the use of CGNATs, revealing significant disparities in how IP addresses are shared across different regions. </p><p>At Cloudflare we are going beyond just using these insights to evaluate policies and practices. We are using the detection systems to improve services across our application security suite, and working with customers to understand how they might use these insights to improve the protections they configure.</p><p>Our work is ongoing and we’ll share details as we go. In the meantime, if you’re an ISP or network operator running CGNAT and want to help, get in touch at <a href="#"><u>ask-research@cloudflare.com</u></a>. Sharing knowledge and working together helps create a better and more equitable user experience for subscribers, while preserving web service safety and security.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Web Application Firewall]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[Network Services]]></category>
            <guid isPermaLink="false">9cTCNUkDdgVjdBN6M6JLv</guid>
            <dc:creator>Vasilis Giotsas</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[A framework for measuring Internet resilience]]></title>
            <link>https://blog.cloudflare.com/a-framework-for-measuring-internet-resilience/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We present a data-driven framework to quantify cross-layer Internet resilience. We also share a list of measurements with which to quantify facets of Internet resilience for geographical areas. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On July 8, 2022, a massive outage at Rogers, one of Canada's largest telecom providers, knocked out Internet and mobile services for over 12 million users. Why did this single event have such a catastrophic impact? And more importantly, why do some networks crumble in the face of disruption while others barely stumble?</p><p>The answer lies in a concept we call <b>Internet resilience</b>: a network's ability not just to stay online, but to withstand, adapt to, and rapidly recover from failures.</p><p>It’s a quality that goes far beyond simple "uptime." True resilience is a multi-layered capability, built on everything from the diversity of physical subsea cables to the security of BGP routing and the health of a competitive market. It's an emergent property much like <a href="https://en.wikipedia.org/wiki/Psychological_resilience"><u>psychological resilience</u></a>: while each individual network must be robust, true resilience only arises from the collective, interoperable actions of the entire ecosystem. In this post, we'll introduce a data-driven framework to move beyond abstract definitions and start quantifying what makes a network resilient. All of our work is based on public data sources, and we're sharing our metrics to help the entire community build a more reliable and secure Internet for everyone.</p>
    <div>
      <h2>What is Internet resilience?</h2>
      <a href="#what-is-internet-resilience">
        
      </a>
    </div>
    <p>In networking, we often talk about "reliability" (does it work under normal conditions?) and "robustness" (can it handle a sudden traffic surge?). But resilience is more dynamic. It's the ability to gracefully degrade, adapt, and most importantly, recover. For our work, we've adopted a pragmatic definition:</p><p><b><i>Internet resilience</i></b><i> is the measurable capability of a national or regional network ecosystem to maintain diverse and secure routing paths in the face of challenges, and to rapidly restore connectivity following a disruption.</i></p><p>This definition links the abstract goal of resilience to the concrete, measurable metrics that form the basis of our analysis.</p>
    <div>
      <h3>Local decisions have global impact</h3>
      <a href="#local-decisions-have-global-impact">
        
      </a>
    </div>
    <p>The Internet is a global system but is built out of thousands of local pieces. Every country depends on the global Internet for economic activity, communication, and critical services, yet most of the decisions that shape how traffic flows are made locally by individual networks.</p><p>In most national infrastructures like water or power grids, a central authority can plan, monitor, and coordinate how the system behaves. The Internet works very differently. Its core building blocks are Autonomous Systems (ASes), which are networks like ISPs, universities, cloud providers or enterprises. Each AS controls autonomously how it connects to the rest of the Internet, which routes it accepts or rejects, how it prefers to forward traffic, and with whom it interconnects. That’s why they’re called Autonomous Systems in the first place! There’s no global controller. Instead, the Internet’s routing fabric emerges from the collective interaction of thousands of independent networks, each optimizing for its own goals.</p><p>This decentralized structure is one of the Internet’s greatest strengths: no single failure can bring the whole system down. But it also makes measuring resilience at a country level tricky. National statistics can hide local structures that are crucial to global connectivity. For example, a country might appear to have many international connections overall, but those connections could be concentrated in just a handful of networks. If one of those fails, the whole country could be affected.</p><p>For resilience, the goal isn’t to isolate national infrastructure from the global Internet. In fact, the opposite is true: healthy integration with diverse partners is what makes both local and global connectivity stronger. 
When local networks invest in secure, redundant, and diverse interconnections, they improve their own resilience and contribute to the stability of the Internet as a whole.</p><p>This perspective shapes how we design and interpret resilience metrics. Rather than treating countries as isolated units, we look at how well their networks are woven into the global fabric: the number and diversity of upstream providers, the extent of international peering, and the richness of local interconnections. These are the building blocks of a resilient Internet.</p>
    <div>
      <h3>Route hygiene: Keeping the Internet healthy</h3>
      <a href="#route-hygiene-keeping-the-internet-healthy">
        
      </a>
    </div>
    <p>The Internet is constructed according to a <i>layered</i> model, by design, so that different Internet components and features can evolve independent of the others. The Physical layer stores, carries, and forwards, all the bits and bytes transmitted in packets between devices. It consists of cables, routers and switches, but also buildings that house interconnection facilities. The Application layer sits above all others and has virtually no information about the network so that applications can communicate without having to worry about the underlying details, for example, if a network is ethernet or Wi-Fi. The application layer includes web browsers, web servers, as well as caching, security, and other features provided by Content Distribution Networks (CDNs). Between the physical and application layers is the Network layer responsible for Internet routing. It is ‘logical’, consisting of software that learns about interconnection and routes, and makes (local) forwarding decisions that deliver packets to their destinations. </p><p>Good route hygiene works like personal hygiene: it prevents problems before they spread. The Internet relies on the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol</u></a> (BGP) to exchange routes between networks, but BGP wasn’t built with security in mind. A single bad route announcement, whether by mistake or attack, can send traffic the wrong way or cause widespread outages.</p><p>Two practices help stop this: The <b>RPKI</b> (Resource Public Key Infrastructure) lets networks publish cryptographic proof that they’re allowed to announce specific IP prefixes. 
<b>ROV </b>(Route Origin Validation) checks those proofs before accepting routes.</p><p>Together, they act like passports and border checks for Internet routes, helping filter out hijacks and leaks early.</p><p>Hygiene doesn’t just happen in the routing table – it spans multiple layers of the Internet’s architecture, and weaknesses in one layer can ripple through the rest. At the physical layer, having multiple, geographically diverse cable routes ensures that a single cut or disaster doesn’t isolate an entire region. For example, distributing submarine landing stations along different coastlines can protect international connectivity when one corridor fails. At the network layer, practices like multi-homing and participation in Internet Exchange Points (IXPs) give operators more options to reroute traffic during incidents, reducing reliance on any single upstream provider. At the application layer, Content Delivery Networks (CDNs) and caching keep popular content close to users, so even if upstream routes are disrupted, many services remain accessible. Finally, policy and market structure also play a role: open peering policies and competitive markets foster diversity, while dependence on a single ISP or cable system creates fragility.</p><p>Resilience emerges when these layers work together. If one layer is weak, the whole system becomes more vulnerable to disruption.</p><p>The more networks adopt these practices, the stronger and more resilient the Internet becomes. We actively support the deployment of RPKI, ROV, and diverse routing to keep the global Internet healthy.</p>
    <div>
      <h2>Measuring resilience is harder than it sounds</h2>
      <a href="#measuring-resilience-is-harder-than-it-sounds">
        
      </a>
    </div>
    <p>The biggest hurdle in measuring resilience is data access. The most valuable information, like internal network topologies, the physical paths of fiber cables, or specific peering agreements, is held by private network operators. This is the ground truth of the network.</p><p>However, operators view this information as a highly sensitive competitive asset. Revealing detailed network maps could expose strategic vulnerabilities or undermine business negotiations. Without access to this ground truth data, we're forced to rely on inference, approximation, and the clever use of publicly available data sources. Our framework is built entirely on these public sources to ensure anyone can reproduce and build upon our findings.</p><p>Projects like RouteViews and RIPE RIS collect BGP routing data that shows how networks connect. <a href="https://www.cloudflare.com/en-in/learning/network-layer/what-is-mtr/"><u>Traceroute</u></a> measurements reveal paths at the router level. IXP and submarine cable maps give partial views of the physical layer. But each of these sources has blind spots: peering links often don’t appear in BGP data, backup paths may remain hidden, and physical routes are hard to map precisely. This lack of a single, complete dataset means that resilience measurement relies on combining many partial perspectives, a bit like reconstructing a city map from scattered satellite images, traffic reports, and public utility filings. It’s challenging, but it’s also what makes this field so interesting.</p>
    <div>
      <h3>Translating resilience into quantifiable metrics</h3>
      <a href="#translating-resilience-into-quantifiable-metrics">
        
      </a>
    </div>
    <p>Once we understand why resilience matters and what makes it hard to measure, the next step is to translate these ideas into concrete metrics. These metrics give us a way to evaluate how well different parts of the Internet can withstand disruptions and to identify where the weak points are. No single metric can capture Internet resilience on its own. Instead, we look at it from multiple angles: physical infrastructure, network topology, interconnection patterns, and routing behavior. Below are some of the key dimensions we use. Some of these metrics are inspired by existing research, like the <a href="https://pulse.internetsociety.org/en/resilience/"><u>ISOC Pulse</u></a> framework. All described methods rely on public data sources and are fully reproducible. As a result, in our visualizations we intentionally omit country and region names to maintain focus on the methodology and interpretation of the results. </p>
    <div>
      <h3>IXPs and colocation facilities</h3>
      <a href="#ixps-and-colocation-facilities">
        
      </a>
    </div>
    <p>Networks primarily interconnect in two types of physical facilities: colocation facilities (colos) and Internet Exchange Points (IXPs), the latter often housed within colos. Although symbiotically linked, they serve distinct functions in a nation’s digital ecosystem. A colocation facility provides the foundational infrastructure – secure space, power, and cooling – for network operators to place their equipment. The IXP builds upon this physical base to provide the logical interconnection fabric, a role that is transformative for a region’s Internet development and resilience. The networks that connect at an IXP are its members. </p><p>Metrics that reflect resilience include:</p><ul><li><p><b>Number and distribution of IXPs</b>, normalized by population or geography. A higher IXP count, weighted by population or geographic coverage, is associated with improved local connectivity.</p></li><li><p><b>Peering participation rates</b> — the percentage of local networks connected to domestic IXPs. This metric reflects the extent to which local networks rely on regional interconnection rather than routing traffic through distant upstream providers.</p></li><li><p><b>Diversity of IXP membership</b>, including ISPs, CDNs, and cloud providers, which indicates how much critical content is available locally, making it accessible to domestic users even if international connectivity is severely degraded.</p></li></ul><p>Resilience also depends on how well local networks connect globally:</p><ul><li><p>How many <b>local networks peer at international IXPs</b>, increasing their routing options</p></li><li><p>How many <b>international networks peer at local IXPs</b>, bringing content closer to users</p></li></ul><p>A balanced flow in both directions strengthens resilience by ensuring multiple independent paths in and out of a region.</p><p>The geographic distribution of IXPs further enhances resilience. 
A resilient IXP ecosystem should be geographically dispersed to serve different regions within a country effectively, reducing the risk that a localized infrastructure failure affects the connectivity of an entire country. Spatial distribution metrics help evaluate how infrastructure is spread across a country’s geography or its population. Key spatial metrics include:</p><ul><li><p><b>Infrastructure per Capita</b>: This metric – inspired by <a href="https://en.wikipedia.org/wiki/Telephone_density"><u>teledensity</u></a> – measures infrastructure relative to the population size of a sub-region, providing a per-person availability indicator. A low IXP-per-population ratio in a region suggests that users there rely on distant exchanges, increasing bit-risk miles.</p></li><li><p><b>Infrastructure per Area (Density)</b>: This metric evaluates how infrastructure is distributed per unit of geographic area, highlighting spatial coverage. Such area-based metrics are crucial for critical infrastructure, ensuring remote areas are not left inaccessible.</p></li></ul><p>These metrics can be summarized using the <a href="https://www.bls.gov/k12/students/economics-made-easy/location-quotients.pdf"><u>Location Quotient (LQ)</u></a>. The location quotient is a widely used geographic index that measures a region’s share of infrastructure relative to its share of a baseline (such as population or area).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4S52jlwCpQ8WVS6gRSdNqp/4722abb10331624a54b411708f1e576b/image5.png" />
          </figure><p>For example, the figure above shows, based on LQ scores, US states that host more or less infrastructure than is expected for their populations. It illustrates that even in the states with the highest number of facilities, that number is <i>still</i> lower than would be expected given their population size.</p>
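<p>As a minimal sketch of how an LQ can be computed, the following Python snippet uses invented facility and population counts; a real analysis would draw on sources such as PeeringDB facility data and census figures:</p>

```python
# Hypothetical counts for illustration only.
facilities = {"A": 120, "B": 30, "C": 10}     # colo facilities per region
population = {"A": 8_000_000, "B": 4_000_000, "C": 1_000_000}

total_fac = sum(facilities.values())
total_pop = sum(population.values())

def location_quotient(region: str) -> float:
    """LQ = region's share of facilities / region's share of population.
    LQ > 1: more infrastructure than its population share would predict;
    LQ < 1: less."""
    fac_share = facilities[region] / total_fac
    pop_share = population[region] / total_pop
    return fac_share / pop_share

for r in facilities:
    print(r, round(location_quotient(r), 2))
```

<p>A score above 1 means a region hosts more than its population share of infrastructure; below 1, less, which is the pattern the figure above visualizes per state.</p>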
    <div>
      <h4>Economic-weighted metrics</h4>
      <a href="#economic-weighted-metrics">
        
      </a>
    </div>
    <p>While spatial metrics capture the physical distribution of infrastructure, economic and usage-weighted metrics reveal how infrastructure is actually used. These account for traffic, capacity, or economic activity, exposing imbalances that spatial counts miss. <b>Infrastructure Utilization Concentration</b> measures how usage is distributed across facilities, using indices like the <b>Herfindahl–Hirschman Index (HHI)</b>. HHI sums the squared market shares of entities, ranging from 0 (competitive) to 10,000 (highly concentrated). For IXPs, market share is defined through operational metrics such as:</p><ul><li><p><b>Peak/Average Traffic Volume</b> (Gbps/Tbps): indicates operational significance</p></li><li><p><b>Number of Connected ASNs</b>: reflects network reach</p></li><li><p><b>Total Port Capacity</b>: shows physical scale</p></li></ul><p>The chosen metric affects results. For example, using connected ASNs yields an HHI of 1,316 (unconcentrated) for a Central European country, whereas using port capacity gives 1,809 (moderately concentrated).</p><p>The <b>Gini coefficient</b> measures inequality in resource or traffic distribution (0 = equal, 1 = fully concentrated). The <b>Lorenz curve</b> visualizes this: a straight 45° line indicates perfect equality, while deviations show concentration.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30bh4nVHRX5O3HMKvGRYh7/e0c5b3a7cb8294dfe3caaec98a0557d0/Screenshot_2025-10-27_at_14.10.57.png" />
          </figure><p>The chart on the left suggests substantial geographical inequality in colocation facility distribution across the US states. However, the population-weighted analysis in the chart on the right demonstrates that much of that geographic concentration can be explained by population distribution.</p>
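<p>A small sketch of both concentration measures follows. The IXP market shares are invented for illustration (the HHI figures of 1,316 and 1,809 cited above come from real data, not from these numbers), and the Gini implementation uses a standard discrete formula:</p>

```python
def hhi(shares):
    """Herfindahl-Hirschman Index from market shares that sum to 1,
    on the conventional 0-10,000 scale (shares in percent, squared)."""
    return sum((s * 100) ** 2 for s in shares)

def gini(values):
    """Gini coefficient: 0 = perfectly equal, near 1 = fully concentrated."""
    xs = sorted(values)
    n = len(xs)
    cum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

# Hypothetical shares of one country's IXP "market", measured two ways,
# illustrating how the chosen metric changes the concentration verdict.
by_asns = [0.30, 0.25, 0.20, 0.15, 0.10]      # by connected ASNs
by_capacity = [0.55, 0.20, 0.15, 0.07, 0.03]  # by port capacity

print(round(hhi(by_asns)), round(hhi(by_capacity)))
print(round(gini(by_asns), 3), round(gini(by_capacity), 3))
```

<p>With these invented shares, measuring by connected ASNs yields a lower HHI than measuring by port capacity, mirroring the sensitivity to metric choice described above.</p>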
    <div>
      <h3>Submarine cables</h3>
      <a href="#submarine-cables">
        
      </a>
    </div>
    <p>Internet resilience, in the context of undersea cables, is defined by the global network’s capacity to withstand physical infrastructure damage and to recover swiftly from faults, thereby ensuring the continuity of intercontinental data flow. The metrics for quantifying this resilience are multifaceted, encompassing the frequency and nature of faults, the efficiency of repair operations, and the inherent robustness of both the network’s topology and its dedicated maintenance resources. Such metrics include:</p><ul><li><p>Number of <b>landing stations</b>, cable corridors, and operators. The goal is to ensure that national connectivity can withstand single failure events, be they natural disasters, targeted attacks, or major power outages. A lack of diversity creates single points of failure, as highlighted by <a href="https://www.theguardian.com/news/2025/sep/30/tonga-pacific-island-internet-underwater-cables-volcanic-eruption"><u>incidents in Tonga</u></a> where damage to the only available cable led to a total outage.</p></li><li><p><b>Fault rates</b> and <b>mean time to repair (MTTR)</b>, which indicate how quickly service can be restored. These metrics measure a country’s ability to prevent, detect, and recover from cable incidents, focusing on downtime reduction and protection of critical assets. Repair times hinge on <b>vessel mobilization</b> and <b>government permits</b>, with the latter often the main bottleneck.</p></li><li><p>Availability of <b>satellite backup capacity</b> as an emergency fallback. While cable diversity is essential, resilience planning must also cover worst-case outages. The Non-Terrestrial Backup System Readiness metric measures a nation’s ability to sustain essential connectivity during major cable disruptions. LEO and MEO satellites, though costlier and lower in capacity than cables, offer proven emergency backup during conflicts or disasters. Projects like HEIST explore hybrid space-submarine architectures to boost resilience. 
Key indicators include available satellite bandwidth, the number of NGSO providers under contract (for diversity), and the deployment of satellite terminals for public and critical infrastructure. Tracking these shows how well a nation can maintain command, relief operations, and basic connectivity if cables fail.</p></li></ul>
    <div>
      <h3>Inter-domain routing</h3>
      <a href="#inter-domain-routing">
        
      </a>
    </div>
    <p>The network layer above the physical interconnection infrastructure governs how traffic is routed across the Autonomous Systems (ASes). Failures or instability at this layer – such as misconfigurations, attacks, or control-plane outages – can disrupt connectivity even when the underlying physical infrastructure remains intact. In this layer, we look at resilience metrics that characterize the robustness and fault tolerance of AS-level routing and BGP behavior.</p><p><b>AS Path Diversity</b> measures the number and independence of AS-level routes between two points. High diversity provides alternative paths during failures, enabling BGP rerouting and maintaining connectivity. Low diversity leaves networks vulnerable to outages if a critical AS or link fails. Resilience depends on upstream topology.</p><ul><li><p>Single-homed ASes rely on one provider, which is cheaper and simpler but more fragile.</p></li><li><p>Multi-homed ASes use multiple upstreams, requiring BGP but offering far greater redundancy and performance at higher cost.</p></li></ul><p>The <b>share of multi-homed ASes</b> reflects an ecosystem’s overall resilience: higher rates signal greater protection from single-provider failures. This metric is easy to measure using <b>public BGP data</b> (e.g., RouteViews, RIPE RIS, CAIDA). Longitudinal BGP monitoring helps reveal hidden backup links that snapshots might miss.</p><p>Beyond multi-homing rates, <b>the distribution of single-homed ASes per transit provider</b> highlights systemic weak points. For each provider, counting customer ASes that rely exclusively on it reveals how many networks would be cut off if that provider fails. </p>
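<p>The per-provider count described above can be sketched in a few lines. The customer-to-upstream map here is invented, standing in for relationships one might derive from public BGP or CAIDA AS-relationship data:</p>

```python
from collections import Counter

# Hypothetical customer AS -> upstream transit providers.
upstreams = {
    "AS100": ["AS10"],           # single-homed
    "AS200": ["AS10", "AS20"],   # multi-homed
    "AS300": ["AS10"],           # single-homed
    "AS400": ["AS20"],           # single-homed
    "AS500": ["AS10", "AS30"],   # multi-homed
}

# For each provider, how many customers rely exclusively on it:
single_homed_per_provider = Counter(
    providers[0] for providers in upstreams.values() if len(providers) == 1
)

# Overall share of single-homed ASes in the ecosystem:
single_homing_rate = sum(
    1 for p in upstreams.values() if len(p) == 1
) / len(upstreams)

print(single_homed_per_provider)  # providers that are single points of failure
print(single_homing_rate)
```

<p>A provider with a large single-homed count is exactly the kind of systemic weak point the Rogers example below illustrates.</p>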
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ECZveUVwyM6TmGa1SaZnl/1222c7579c81fd62a5d8d80d63000ec3/image1.png" />
          </figure><p>The figure above shows Canadian transit providers for July 2025: the x-axis is total customer ASes, the y-axis is single-homed customers. Canada’s overall single-homing rate is 30%, with some providers serving many single-homed ASes, mirroring vulnerabilities seen during the <a href="https://en.wikipedia.org/wiki/2022_Rogers_Communications_outage"><u>2022 Rogers outage</u></a>, which disrupted over 12 million users.</p><p>While multi-homing metrics provide a valuable, static view of an ecosystem’s upstream topology, a more dynamic and nuanced understanding of resilience can be achieved by analyzing the characteristics of the actual BGP paths observed from global vantage points. These path-centric metrics move beyond simply counting connections to assess the diversity and independence of the routes to and from a country’s networks. These metrics include:</p><ul><li><p><b>Path independence</b> measures whether those alternative routes truly avoid shared bottlenecks. Multi-homing only helps if upstream paths are truly distinct. If two providers share upstream transit ASes, redundancy is weak. Independence can be measured with the Jaccard distance between AS paths. A stricter <b>path disjointness score</b> calculates the share of path pairs with no common ASes, directly quantifying true redundancy.</p></li><li><p><b>Transit entropy</b> measures how evenly traffic is distributed across transit providers. High Shannon entropy signals a decentralized, resilient ecosystem; low entropy shows dependence on few providers, even if nominal path diversity is high.</p></li><li><p><b>International connectivity ratios</b> evaluate the share of domestic ASes with direct international links. High percentages reflect a mature, distributed ecosystem; low values indicate reliance on a few gateways.</p></li></ul><p>The figure below encapsulates the aforementioned AS-level resilience metrics into single polar pie charts. 
For exposition, we plot the metrics for two nations with very different resilience profiles.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PKxDcl4m1XXCAuvFUcTdZ/d0bce797dcbd5e1baf39ca66e7ac0056/image4.png" />
          </figure><p>To pinpoint critical ASes and potential single points of failure, graph centrality metrics can provide useful insights. <b>Betweenness Centrality (BC)</b> identifies nodes lying on many shortest paths, but applying it to BGP data suffers from vantage point bias: ASes that provide BGP data to the RouteViews and RIS collectors appear falsely central. <b>AS Hegemony</b>, developed by <a href="https://dl.acm.org/doi/10.1145/3123878.3131982"><u>Fontugne et al.</u></a>, corrects this by filtering biased viewpoints, producing a 0–1 score that reflects the true fraction of paths crossing an AS. It can be applied globally or locally to reveal Internet-wide or AS-specific dependencies.</p><p><b>Customer cone size</b>, developed by <a href="https://asrank.caida.org/about#customer-cone"><u>CAIDA</u></a>, offers another perspective, capturing an AS’s economic and routing influence via the set of networks it serves through customer links. Large cones indicate major transit hubs whose failure affects many downstream networks. However, global cone rankings can obscure regional importance, so <a href="https://www.caida.org/catalog/papers/2023_on_importance_being_as/on_importance_being_as.pdf"><u>country-level adaptations</u></a> give more accurate resilience assessments.</p>
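<p>The path-diversity metrics described earlier (Jaccard distance between AS paths, the path disjointness score, and transit entropy) can be sketched as follows. Normalizing the entropy by log2(n) is our own convention for cross-country comparability, not something prescribed above:</p>

```python
import math

def jaccard_distance(path_a, path_b):
    """1 - |A∩B|/|A∪B| over the sets of ASes on two AS paths."""
    a, b = set(path_a), set(path_b)
    return 1 - len(a & b) / len(a | b)

def disjointness_score(paths):
    """Share of path pairs with no common ASes: a strict redundancy measure."""
    pairs = [(p, q) for i, p in enumerate(paths) for q in paths[i + 1:]]
    if not pairs:
        return 0.0
    return sum(1 for p, q in pairs if not set(p) & set(q)) / len(pairs)

def transit_entropy(traffic_shares):
    """Shannon entropy of traffic across transit providers, normalized
    to [0, 1] by log2(n). 1 = perfectly even, 0 = a single provider."""
    shares = [s for s in traffic_shares if s > 0]
    if len(shares) <= 1:
        return 0.0
    h = -sum(s * math.log2(s) for s in shares)
    return h / math.log2(len(shares))

# Two paths sharing only their endpoints (invented ASNs):
print(jaccard_distance(["AS64500", "AS3356", "AS64510"],
                       ["AS64500", "AS1299", "AS64510"]))  # 0.5
```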
    <div>
      <h4>Impact-Weighted Resilience Assessment</h4>
      <a href="#impact-weighted-resilience-assessment">
        
      </a>
    </div>
    <p>Not all networks have the same impact when they fail. A small hosting provider going offline affects far fewer people than a national ISP doing so. Traditional resilience metrics treat all networks equally, which can mask where the real risks are. To address this, we use impact-weighted metrics that factor in a network’s user base or infrastructure footprint. For example, by weighting multi-homing rates or path diversity by user population, we can see how many people actually benefit from redundancy — not just how many networks have it. Similarly, weighting by the number of announced prefixes highlights networks that carry more traffic or control more address space.</p><p>This approach helps separate theoretical resilience from practical resilience. A country might have many multi-homed networks, but if most users rely on just one single-homed ISP, its resilience is weaker than it looks. Impact weighting helps surface these kinds of structural risks so that operators and policymakers can prioritize improvements where they matter most.</p>
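<p>A toy example of the contrast between unweighted and user-weighted multi-homing rates, with invented networks and user counts:</p>

```python
# Hypothetical networks: (is_multi_homed, estimated user base).
networks = [
    (True, 50_000),      # small multi-homed hosting provider
    (True, 80_000),      # small multi-homed regional ISP
    (True, 120_000),     # mid-size multi-homed ISP
    (False, 5_000_000),  # dominant single-homed national ISP
]

unweighted_rate = sum(1 for mh, _ in networks if mh) / len(networks)

total_users = sum(users for _, users in networks)
weighted_rate = sum(users for mh, users in networks if mh) / total_users

print(f"{unweighted_rate:.0%} of networks are multi-homed")          # looks resilient
print(f"{weighted_rate:.1%} of users sit behind a multi-homed network")  # far weaker
```

<p>Here three out of four networks are multi-homed, yet the overwhelming majority of users depend on the one single-homed ISP, exactly the structural risk impact weighting is meant to surface.</p>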
    <div>
      <h3>Metrics of network hygiene</h3>
      <a href="#metrics-of-network-hygiene">
        
      </a>
    </div>
    <p>Large Internet outages aren’t always caused by cable cuts or natural disasters — sometimes, they stem from routing mistakes or security gaps. Route hijacks, leaks, and spoofed announcements can disrupt traffic on a national scale. How well networks protect themselves against these incidents is a key part of resilience, and that’s where network hygiene comes in.</p><p>Network hygiene refers to the security and operational practices that make the global routing system more trustworthy. This includes:</p><ul><li><p><b>Cryptographic validation</b>, like RPKI, to prevent unauthorized route announcements. <b>ROA Coverage</b> measures the share of announced IPv4/IPv6 space with valid Route Origin Authorizations (ROAs), indicating participation in the RPKI ecosystem. <b>ROV Deployment</b> gauges how many networks drop invalid routes, but detecting active filtering is difficult. Policymakers can improve visibility by supporting independent measurements, data transparency, and standardized reporting.</p></li><li><p><b>Filtering and cooperative norms</b>, where networks block bogus routes and follow best practices when sharing routing information.</p></li><li><p><b>Consistent deployment across both domestic networks and their international upstreams</b>, since traffic often crosses multiple jurisdictions.</p></li></ul><p>Strong hygiene practices reduce the likelihood of systemic routing failures and limit their impact when they occur. We actively support and monitor the adoption of these mechanisms, for instance through <a href="https://isbgpsafeyet.com/"><u>crowd-sourced measurements</u></a> and public advocacy, because every additional network that validates routes and filters traffic contributes to a safer and more resilient Internet for everyone.</p><p>Another critical aspect of Internet hygiene is mitigating DDoS attacks, which often rely on IP address spoofing to amplify traffic and obscure the attacker’s origin. 
<a href="https://datatracker.ietf.org/doc/bcp38/"><u>BCP-38</u></a>, the IETF’s network ingress filtering recommendation, addresses this by requiring operators to block packets with spoofed source addresses, reducing a region’s role as a launchpad for global attacks. While BCP-38 does not prevent a network from being targeted, its deployment is a key indicator of collective security responsibility. Measuring compliance requires active testing from inside networks, which is carried out by the <a href="https://spoofer.caida.org/summary.php"><u>CAIDA Spoofer Project</u></a>. Although the global sample remains limited, these metrics offer valuable insight into both the technical effectiveness and the security engagement of a nation’s network community, complementing RPKI in strengthening the overall routing security posture.</p>
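<p>To make the Route Origin Validation mechanics described above concrete, here is a simplified sketch of RFC 6811-style origin validation. The ROAs and prefixes are documentation examples, and real validators handle many details (trust anchors, caches, maxLength defaults) omitted here:</p>

```python
from ipaddress import ip_network

# Hypothetical ROAs: (authorized prefix, maxLength, authorized origin ASN).
roas = [
    (ip_network("203.0.113.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def rov_state(prefix: str, origin_asn: int) -> str:
    """Simplified RFC 6811 semantics:
    'valid'     - a covering ROA authorizes this origin and prefix length;
    'invalid'   - covering ROAs exist, but none match;
    'not-found' - no ROA covers the prefix."""
    ann = ip_network(prefix)
    covering = [r for r in roas
                if ann.version == r[0].version and ann.subnet_of(r[0])]
    if not covering:
        return "not-found"
    for _, max_len, asn in covering:
        if origin_asn == asn and ann.prefixlen <= max_len:
            return "valid"
    return "invalid"

print(rov_state("203.0.113.0/24", 64500))  # valid
print(rov_state("203.0.113.0/24", 64666))  # invalid: wrong origin (hijack-like)
print(rov_state("192.0.2.0/24", 64500))    # not-found: no covering ROA
```

<p>An ROV-enforcing network would drop the "invalid" announcement, which is exactly the filtering behavior the deployment metrics above try to measure.</p>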
    <div>
      <h3>Measuring the collective security posture</h3>
      <a href="#measuring-the-collective-security-posture">
        
      </a>
    </div>
    <p>Beyond securing individual networks through mechanisms like RPKI and BCP-38, strengthening the Internet’s resilience also depends on collective action and visibility. While origin validation and anti-spoofing reduce specific classes of threats, broader frameworks and shared measurement infrastructures are essential to address systemic risks and enable coordinated responses.</p><p>The <a href="https://manrs.org/"><u>Mutually Agreed Norms for Routing Security (MANRS)</u></a> initiative promotes Internet resilience by defining a clear baseline of best practices. It is not a new technology but a framework fostering collective responsibility for global routing security. MANRS focuses on four key actions: filtering incorrect routes, anti-spoofing, coordination through accurate contact information, and global validation using RPKI and IRRs. While many networks implement these independently, MANRS participation signals a public commitment to these norms and to strengthening the shared security ecosystem.</p><p>Additionally, a region’s participation in public measurement platforms reflects its Internet observability, which is essential for fault detection, impact assessment, and incident response. <a href="https://atlas.ripe.net/"><u>RIPE Atlas</u></a> and <a href="https://www.caida.org/projects/ark/"><u>CAIDA Ark</u></a> provide dense data-plane measurements; <a href="https://www.routeviews.org/routeviews/"><u>RouteViews</u></a> and <a href="https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris/"><u>RIPE RIS</u></a> collect BGP routing data to detect anomalies; and <a href="https://www.peeringdb.com/"><u>PeeringDB</u></a> documents interconnection details, reflecting operational maturity and integration into the global peering fabric. 
Together, these platforms underpin observatories like <a href="https://ioda.inetintel.cc.gatech.edu/"><u>IODA</u></a> and <a href="https://grip.oie.gatech.edu/home"><u>GRIP</u></a>, which combine BGP and active data to detect outages and routing incidents in near real time, offering critical visibility into Internet health and security.</p>
    <div>
      <h2>Building a more resilient Internet, together</h2>
      <a href="#building-a-more-resilient-internet-together">
        
      </a>
    </div>
    <p>Measuring Internet resilience is complex, but it's not impossible. By using publicly available data, we can create a transparent and reproducible framework to identify strengths, weaknesses, and single points of failure in any network ecosystem.</p><p>This isn't just a theoretical exercise. For policymakers, this data can inform infrastructure investment and pro-competitive policies that encourage diversity. For network operators, it provides a benchmark to assess their own resilience and that of their partners. And for everyone who relies on the Internet, it's a critical step toward building a more stable, secure, and reliable global network.</p><p><i>For more details of the framework, including a full table of the metrics and links to source code, please refer to the full paper: </i> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5376106"><u>Regional Perspectives for Route Resilience in a Global Internet: Metrics, Methodology, and Pathways for Transparency</u></a> published at <a href="https://www.tprcweb.com/tprc23program"><u>TPRC23</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Routing Security]]></category>
            <category><![CDATA[Insights]]></category>
            <guid isPermaLink="false">48ry6RI3JhA9H3t280EWUX</guid>
            <dc:creator>Vasilis Giotsas</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[To build a better Internet in the age of AI, we need responsible AI bot principles. Here’s our proposal.]]></title>
            <link>https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/</link>
            <pubDate>Wed, 24 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are proposing—as starting points—responsible AI bot principles that emphasize transparency, accountability, and respect for content access and use preferences. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has a unique vantage point: we see not only how changes in technology shape the Internet, but also how new technologies can unintentionally impact different stakeholders. Take, for instance, the increasing reliance by everyday Internet users on AI-powered <a href="https://www.cloudflare.com/learning/bots/what-is-a-chatbot/"><u>chatbots</u></a> and <a href="https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/sr_25-07-22_ai_summaries_1/"><u>search summaries</u></a>. On the one hand, end users are getting information faster than ever before. On the other hand, web publishers, who have historically relied on human eyeballs on their websites to support their businesses, are seeing a <a href="https://www.forbes.com/sites/torconstantino/2025/04/14/the-60-problem---how-ai-search-is-draining-your-traffic/"><u>dramatic</u></a> <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/"><u>decrease</u></a> in those eyeballs, which can reduce their ability to create original high-quality content. This cycle will ultimately hurt end users and AI companies (whose success relies on fresh, high-quality content to train models and provide services) alike.</p><p>We are indisputably at a point in time when the Internet needs clear “rules of the road” for AI bot behavior (a note on terminology: throughout this blog we refer to AI bots and crawlers interchangeably). 
We have had ongoing cross-functional conversations, both internally and with stakeholders and partners across the world, and it’s clear to us that the Internet at large needs key groups — publishers and content creators, bot operators, and Internet infrastructure and cybersecurity companies — to reach a consensus on certain principles that AI bots should follow.</p><p>Of course, agreeing on what exactly those principles are will take time and require continued discussion and collaboration, and a policy framework can’t perfectly capture every technical concern. Nevertheless, we think it’s important to start a conversation that we hope others will join. After all, a rough draft is better than a blank page.</p><p>That is why we are proposing the following responsible AI bot principles as starting points:</p><ol><li><p><b>Public disclosure: </b>Companies should publicly disclose information about their AI bots;</p></li><li><p><b>Self-identification: </b>AI bots should truthfully self-identify, eventually replacing less reliable methods, like user agent and IP address verification, with cryptographic verification;</p></li><li><p><b>Declared single purpose:</b> AI bots should have one distinct purpose and declare it;</p></li><li><p><b>Respect preferences: </b>AI bots should respect and comply with preferences expressed by website operators where proportionate and technically feasible;</p></li><li><p><b>Act with good intent:</b> AI bots must not flood sites with excessive traffic or engage in deceptive behavior.</p></li></ol><p>Each principle is discussed in greater detail <a href="#responsible-ai-bot-principles"><u>below</u></a>. These principles focus on AI bots because of the impact <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>generative AI</u></a> is having on the Internet, but we have already seen these practices in action with other types of (non-AI) bots as well. 
We believe these principles will help move the Internet in a better direction. That said, we acknowledge that they are a starting point for this conversation, which requires input from other stakeholders. The Internet has always been a collaborative place for innovation, and these principles should be seen as equally dynamic and evolving. </p>
    <div>
      <h2>Why Cloudflare is encouraging this conversation</h2>
      <a href="#why-cloudflare-is-encouraging-this-conversation">
        
      </a>
    </div>
    <p>Since declaring July 1st <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>Content Independence Day</u></a>, Cloudflare has strived to play a balanced and effective role in safeguarding the future of the Internet in the age of generative AI. We have enabled customers to <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/"><u>charge AI crawlers for access</u></a> or <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/"><u>block them with one click</u></a>, published and enforced our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>verified bots policy</u></a> and developed the <a href="https://developers.cloudflare.com/bots/reference/bot-verification/web-bot-auth/"><u>Web Bot Auth</u></a> proposal, and unapologetically <a href="https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/#how-well-meaning-bot-operators-respect-website-preferences"><u>called out and stopped bad behavior</u></a>.</p><p>While we have recently focused our attention on AI crawlers, Cloudflare has long been a leader in the bot management space, helping our customers protect their websites from unwanted — and even malicious — traffic. We also want to make sure that anyone — whether they’re our customer or not — can see <a href="https://radar.cloudflare.com/ai-insights#ai-bot-best-practices"><u>which AI bots are abiding by all, some, or none of these best practices</u></a>. </p><p>But we aren’t ignorant of the fact that companies operating crawlers are also adapting to a new Internet landscape — and we genuinely believe that most players in this space want to do the right thing, while continuing to innovate and propel the Internet in an exciting direction. 
Our hope is that we can use our expertise and unique vantage point on the Internet to help bring seemingly incompatible parties together and find a path forward — continuing our mission of helping to build a better Internet for everyone.</p>
    <div>
      <h2>Responsible AI bot principles</h2>
      <a href="#responsible-ai-bot-principles">
        
      </a>
    </div>
    <p>The following principles are a launchpad for a larger conversation, and we recognize that there is work to be done to address many nuanced perspectives. We envision these principles applying to AI bots but understand that technical complexity may require flexibility. <b>Ultimately, our goal is to emphasize transparency, accountability, and respect for content access and use preferences.</b> If these principles fall short of that — or fail to consider other important priorities — we want to know.</p>
    <div>
      <h3>Principle #1: Public disclosure</h3>
      <a href="#principle-1-public-disclosure">
        
      </a>
    </div>
    <p><b>Companies should publicly disclose information about their AI bots.</b> The following information should be publicly available and easy to find:</p><ul><li><p><b>Identity:</b> information that helps external parties identify a bot, <i>e.g.</i>, user agent, relevant IP address(es), and/or individual cryptographic identification (more on this below, in <a href="#principle-2-self-identification"><i><u>Principle #2: Self-identification</u></i></a>);</p></li><li><p><b>Operator:</b> the legal entity responsible for the AI bot, including a point of contact (<i>e.g.</i>, for reporting abuse);</p></li><li><p><b>Purpose:</b> the purpose for which the accessed data will be used, <i>i.e.</i>, search, AI-input, or training (more on this below, in <a href="#principle-3-declared-single-purpose"><i><u>Principle #3: Declared Single Purpose</u></i></a>).</p></li></ul><p>OpenAI is an example of a leading AI company that clearly <a href="https://platform.openai.com/docs/bots"><u>discloses its bots</u></a>, complete with detailed explanations of each bot’s purpose. The benefits of this disclosure are apparent in the subsequent principles. It helps website operators validate that a given request is in fact coming from OpenAI and determine its purpose (<i>e.g.</i>, search indexing or AI model training). This, in turn, enables website operators to control access to and use of their content through preference expression mechanisms, like <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt files</u></a>.</p>
    <div>
      <h3>Principle #2: Self-identification</h3>
      <a href="#principle-2-self-identification">
        
      </a>
    </div>
    <p><b>AI bots should truthfully self-identify.</b> Not only should information about bots be disclosed in a publicly accessible location, but it should also be clearly communicated by bots themselves, <i>e.g.,</i> through an HTTP request that conveys the bot’s official user agent and comes from an IP address that the bot claims to send traffic from. Admittedly, the current approach is flawed, as we discuss in <a href="#a-note-on-cryptographic-verification-and-the-future-of-principle-2"><u>more detail below</u></a>. But until cryptographic verification is more widely adopted, we think relying on user agent and IP verification is better than nothing.</p><p>OpenAI’s <a href="https://radar.cloudflare.com/bots/directory/gptbot"><u>GPTBot</u></a> is an example of this principle in action. OpenAI <a href="https://platform.openai.com/docs/bots"><u>publicly shares</u></a> the expected full user-agent string for this bot and includes it in its requests. OpenAI also explains this bot’s purpose (“used to make [OpenAI’s] generative AI foundation models more useful and safe” and “to crawl content that may be used in training [their] generative AI foundation models”). And we have observed this bot sending traffic from IP addresses reported by OpenAI. Because site operators see GPTBot’s user agent and IP addresses matching what is publicly disclosed and expected, and they know information about the bot is publicly documented, they can confidently recognize the bot. This enables them to make informed decisions about whether they want to allow traffic from it.</p><p>Unfortunately, not all bots uphold this principle, making it difficult for website owners to know exactly which bot operators respect their crawl preferences, much less enforce them. For example, while Anthropic publishes its user agent, it provides no other verifiable information, so it’s unclear which requests are truly from Anthropic. 
And xAI’s bot, Grok, does not self-identify at all, making it impossible for website operators to block it. Anthropic and xAI’s lack of identification undermines trust between them and website owners, yet this could be fixed with minimal effort on their part.</p>
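To make the user agent plus IP check concrete, here is a minimal sketch of how a website operator might compare an incoming request against an operator's published identity. The bot name and IP ranges below are hypothetical placeholders, not any real operator's published data.

```python
import ipaddress

# Hypothetical published disclosure (see Principle #1); real operators
# publish their own user agents and IP ranges.
DISCLOSED_UA = "ExampleBot/1.0"
DISCLOSED_RANGES = [ipaddress.ip_network(r) for r in ("192.0.2.0/24", "2001:db8::/32")]

def looks_like_disclosed_bot(user_agent: str, source_ip: str) -> bool:
    """Check a request against the operator's published identity.

    Both signals must match: a spoofed user agent from an unexpected
    IP fails, and so does a disclosed IP with the wrong user agent.
    """
    if DISCLOSED_UA not in user_agent:
        return False
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in DISCLOSED_RANGES)
```

This is exactly the check that spoofing defeats, which is why the next section argues for cryptographic verification.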
    <div>
      <h2>A note on cryptographic verification and the future of Principle #2</h2>
      <a href="#a-note-on-cryptographic-verification-and-the-future-of-principle-2">
        
      </a>
    </div>
    <p>Truthful user agent declarations and dedicated IP lists have historically been a functional way to verify bot identity. But in today’s rapidly evolving bot climate, bots are increasingly vulnerable to being spoofed by bad actors. These bad actors, in turn, ignore robots.txt, which communicates allow/disallow preferences only on a user agent basis (so, a bad bot could spoof a permitted user agent and circumvent that domain’s preferences).</p><p><b>Ultimately, every AI bot should be cryptographically verified using an accepted standard.</b> This would protect them against spoofing and ensure website operators have the accurate and reliable information they need to properly evaluate access by AI bots. At this time, we believe that <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>Web Bot Auth</u></a> is sufficient proof of compliance with Principle #2. We recognize that this standard is still in development, and, as a result, this principle may evolve accordingly.</p><p>Web Bot Auth <a href="https://blog.cloudflare.com/web-bot-auth/"><u>uses cryptography to verify bot traffic</u></a>; cryptographic signatures in HTTP messages are used as verification that a given request came from an automated bot. Our implementation relies on proposed IETF <a href="https://datatracker.ietf.org/doc/html/draft-meunier-http-message-signatures-directory"><u>directory</u></a> and <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>protocol</u></a> drafts. Initial reception of Web Bot Auth has been very positive, and we expect even more adoption. For example, a little over a month ago, Vercel <a href="https://vercel.com/changelog/vercels-bot-verification-now-supports-web-bot-auth"><u>announced</u></a> that its bot verification now supports Web Bot Auth. 
And OpenAI’s <a href="https://help.openai.com/en/articles/11845367-chatgpt-agent-allowlisting"><u>ChatGPT agent now signs its requests using Web Bot Auth</u></a>, in addition to using the HTTP Message Signatures <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>standard</u></a>.</p><p>We envision a future where cryptographic authentication becomes the norm, as we believe this will further strengthen the trustworthiness of bots.</p>
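To give a flavor of the mechanism, here is a heavily simplified sketch of the HTTP Message Signatures idea from RFC 9421: the covered parts of a request are serialized into a canonical "signature base," which is then signed. For brevity this sketch uses the standard's symmetric hmac-sha256 algorithm; Web Bot Auth itself builds on asymmetric keys published in a key directory, and the component names, parameters, and key below are illustrative, not taken from the draft.

```python
import base64
import hashlib
import hmac

def signature_base(components: dict[str, str], params: str) -> str:
    """Serialize covered components RFC 9421-style: one
    `"name": value` line per component, then the @signature-params line."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

def sign_hmac_sha256(base: str, key: bytes) -> str:
    """Sign the base with hmac-sha256, one of RFC 9421's registered algorithms."""
    digest = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Illustrative request: a bot covering its authority and user agent.
params = '("@authority" "user-agent");created=1700000000;keyid="examplebot-key"'
base = signature_base({"@authority": "example.com", "user-agent": "ExampleBot/1.0"}, params)
tag = sign_hmac_sha256(base, b"shared-secret")
```

Because the signature covers the user agent and target, a spoofer without the key cannot produce a valid tag, which is what makes this stronger than IP lists alone.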
    <div>
      <h3>Principle #3: Declared single purpose </h3>
      <a href="#principle-3-declared-single-purpose">
        
      </a>
    </div>
    <p><b>AI bots should have one distinct purpose and declare it. </b>Today, <a href="https://blog.cloudflare.com/ai-crawler-traffic-by-purpose-and-industry"><u>some</u></a> bots self-identify their purpose as Training, Search, or User Action (<i>i.e.</i>, accessing a web page in response to a user’s query).</p><p>However, these purposes are sometimes combined without clear distinction. For example, content accessed for search purposes might also be used to train the AI model powering the search engine. When a bot’s purpose is unclear, website operators face a difficult decision: block it and risk undermining search engine optimization (SEO), or allow it and risk content being used in unwanted ways.</p><p>When operators deploy bots with distinct purposes, website owners are able to make clear decisions over who can access their content. What those purposes should be is up for debate, but we think the following breakdown is a starting point based on bot activity we see. We recognize this is an evolving space and changes may be required as innovation continues:</p><ul><li><p><b>Search:</b> building a search index and providing search results (<i>e.g.</i>, returning hyperlinks and short excerpts from your website’s contents). Search does <u>not</u> include providing AI-generated search summaries;</p></li><li><p><b>AI-input:</b> inputting content into one or more AI models, <i>e.g.</i>, retrieval-augmented generation (RAG), grounding, or other real-time taking of content for generative AI search answers; and</p></li><li><p><b>Training:</b> training or fine-tuning AI models.</p></li></ul><p>Relatedly, bots should not combine purposes in a way that prevents web operators from deliberately and effectively deciding whether to allow crawling.</p><p>Let’s consider two AI bots, OAI-SearchBot and Googlebot, from the perspective of Vinny, a website operator trying to make a living on the Internet. 
OAI-SearchBot has a single purpose: linking to and surfacing websites in ChatGPT’s search features. If Vinny takes OpenAI at face value (which we think it makes sense to do), he can trust that OAI-SearchBot does not crawl his content for training OpenAI’s generative AI models; rather, a separate bot (GPTBot, as discussed in <a href="#principle-2-self-identification"><i><u>Principle #2: Self-identification</u></i></a>) does. Vinny can decide how he wants his content used by OpenAI, <i>e.g.</i>, permitting its use for search but not for AI training, and feel confident that his choices are respected because OAI-SearchBot <i>only</i> crawls for search purposes, while GPTBot is not granted access to the content in the first place (and therefore cannot use it).</p><p>On the other hand, while Googlebot scrapes content for traditional search indexing (not model training), it also uses that content for inference purposes, such as for AI Overviews and AI Mode. Why is this a problem for Vinny? While he almost certainly wants his content appearing in search results, which drive the human eyeballs that fund his site, Vinny is forced to also accept that his content will appear in Google’s AI-generated summaries. If eyeballs are satisfied by the summary, they never visit Vinny’s website, which leads to <a href="https://www.bain.com/insights/goodbye-clicks-hello-ai-zero-click-search-redefines-marketing/"><u>“zero-click” searches</u></a> and undermines Vinny’s ability to financially benefit from his content.</p><p>This is a vicious cycle: creating high-quality content, which typically leads to higher search rankings, now inadvertently also reduces the chances an eyeball will visit the site because that same valuable content is surfaced in an AI Overview (if it is even referenced as a source in the summary). 
To prevent this, Vinny must either opt out of search completely or use snippet controls (which risks degrading how his content appears in search results). This is because the only available signal to opt out of AI, disallowing <a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers#google-extended"><u>Google-Extended</u></a>, is limited to training and does not apply to AI Overviews, which are attached to search. Whether by accident or by design, this setup forces an impossible choice onto website owners.</p><p>Finally, the prominent technical argument in favor of combining multiple purposes — that this reduces the crawler operator’s costs — needs to be debunked. To reason by analogy: it’s like arguing that placing one call to order two pizzas is cheaper than placing two calls to order two pizzas. In reality, the cost of the two pizzas (both of which take time and effort to make) remains the same. The extra phone call may be annoying, but its costs are negligible.</p><p>Similarly, whether one bot request is made for two purposes (<i>e.g.</i>, search indexing and AI model training) or a separate bot request is made for each of two purposes, the costs remain essentially the same. For the crawler, the cost of compute is the same because the content still needs to be processed for each purpose. And the cost of two connections (<i>i.e.</i>, for two requests) is virtually the same as one. We know this because Cloudflare runs one of the largest networks in the world, handling on average 84 million requests per second, so we understand the cost of requests at Internet scale. (As an aside, while additional crawls incur costs on website operators, they have the ability to choose whether the crawl is worth the cost, especially when bots have a single purpose.)</p>
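With single-purpose bots, the choice Vinny wants becomes a few lines of robots.txt: allow the search-only crawler, disallow the training crawlers. The user agent tokens below are the published ones for OpenAI and Google; whether they are honored is, of course, exactly what Principles #2 and #4 are about.

```txt
# Allow search indexing, disallow AI training.
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that, as discussed above, disallowing Google-Extended covers training and grounding but does not remove content from AI Overviews, which ride along with ordinary search indexing.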
    <div>
      <h3>Principle # 4: Respect preferences</h3>
      <a href="#principle-4-respect-preferences">
        
      </a>
    </div>
    <p><b>AI bots should respect and comply with preferences expressed by website operators where proportionate and technically feasible.</b> There are multiple options for expressing preferences. Prominent examples include the longstanding and familiar robots.txt, as well as newly emerging HTTP headers.</p><p>Given the widespread use of robots.txt files, bots should make a good faith attempt to fetch a robots.txt file first, in accordance with <a href="https://datatracker.ietf.org/doc/html/rfc9309"><u>RFC 9309</u></a>, and abide by both the access and use preferences specified therein. AI bot operators should also stay up to date on how those preferences evolve as a result of a <a href="https://ietf-wg-aipref.github.io/drafts/draft-ietf-aipref-vocab.html"><u>draft vocabulary</u></a> currently under development by an IETF working group. The goal of the proposed vocabulary is to improve granularity in robots.txt files, so that website operators are empowered to control how their assets are used. </p><p>At the same time, new industry standards under discussion may involve the attachment of machine-readable preferences to different formats, such as individual files. AI bot operators should eventually be prepared to comply with these standards, too. One idea currently being explored is a way for site owners to list preferences via HTTP headers, which offer a server-level method of declaring how content should be used.</p>
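As a sketch of the fetch-and-honor flow, Python's standard urllib.robotparser implements the RFC 9309 matching rules; a compliant crawler consults it before every fetch. The bot names and rules here are illustrative only.

```python
from urllib import robotparser

# Illustrative rules; a compliant bot would first fetch the real file
# from https://example.com/robots.txt, per RFC 9309.
rules = """\
User-agent: SearchBot
Allow: /

User-agent: TrainingBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler checks before every request and honors the answer.
search_ok = parser.can_fetch("SearchBot", "https://example.com/article")
training_ok = parser.can_fetch("TrainingBot", "https://example.com/article")
```

The proposed IETF vocabulary mentioned above would extend this same file with finer-grained use preferences (e.g., distinguishing training from search), rather than replacing the mechanism.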
    <div>
      <h3>Principle #5: Act with good intent</h3>
      <a href="#principle-5-act-with-good-intent">
        
      </a>
    </div>
    <p><b>AI bots must not flood sites with excessive traffic or engage in deceptive behavior.</b> AI bot behavior should be benign or helpful to website operators and their users. It is also incumbent on companies that operate AI bots to monitor their networks and resources for breaches and patch vulnerabilities. Jeopardizing a website’s security or performance or engaging in harmful tactics is unacceptable.</p><p>Nor is it appropriate to appear to comply with the principles, only to secretly circumvent them. Reaffirming a long-standing principle of acceptable bot behavior, AI bots must never engage in <a href="https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/"><u>stealth crawling</u></a> or use other stealth tactics to try to dodge detection, such as modifying their user agent, changing their source <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> to hide their crawling activity, or ignoring robots.txt files. Doing so would undermine the preceding four principles, hurting website operators and worsening the Internet for all.</p>
    <div>
      <h2>The road ahead: multi-stakeholder efforts to bring these principles to life</h2>
      <a href="#the-road-ahead-multi-stakeholder-efforts-to-bring-these-principles-to-life">
        
      </a>
    </div>
    <p>As we continue working on these principles and soliciting feedback, we strive to find a balance: we want the wishes of content creators respected while still encouraging AI innovation. It’s a privilege to sit at the intersection of these important interests and to play a crucial role in developing an agreeable path forward.</p><p>We are continuing to engage with rights holders, AI companies, policy-makers, and regulators to shape global industry standards and regulatory frameworks accordingly. We believe that the rise of generative AI need not threaten the Internet’s place as an open source of quality content. Protecting its integrity requires agreement on workable technical standards that reflect the interests of web publishers, content creators, and AI companies alike. </p><p>The whole ecosystem must continue to come together and collaborate towards a better Internet that truly works for everyone. Cloudflare advocates for neutral forums where all affected parties can discuss the impact of AI developments on the Internet. One such example is the IETF, which has current work focused on some of the technical aspects being considered. Those efforts attempt to address some, but not all, of the issues in an area that deserves holistic consideration. We believe the principles we have proposed are a step in the right direction — but we hope others will join this complex and important conversation, so that norms and behavior on the Internet can successfully adapt to this exciting new technological age.</p>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1sZkiH7eUUcU8zs4jpo6F8</guid>
            <dc:creator>Leah Romm</dc:creator>
            <dc:creator>Sebastian Hufnagel</dc:creator>
        </item>
        <item>
            <title><![CDATA[Enhance your website's security with Cloudflare’s free security.txt generator]]></title>
            <link>https://blog.cloudflare.com/security-txt/</link>
            <pubDate>Sun, 06 Oct 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s free security.txt generator lets users create and manage security.txt files. Enhance vulnerability disclosure, align with industry standards, and integrate into the dashboard. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>A story of security and simplicity</h2>
      <a href="#a-story-of-security-and-simplicity">
        
      </a>
    </div>
    <p>Meet Georgia, a diligent website administrator at a growing e-commerce company. Every day, Georgia juggles multiple tasks, from managing server uptime to ensuring customer data security. One morning, Georgia receives an email from a security researcher who discovered a potential vulnerability on the website. The researcher struggled to find the right contact information, leading to delays in reporting the issue. Georgia realizes the need for a standardized way to communicate with security researchers, ensuring that vulnerabilities are reported swiftly and efficiently. This is where security.txt comes in.</p>
    <div>
      <h2>Why security.txt matters</h2>
      <a href="#why-security-txt-matters">
        
      </a>
    </div>
    <p><a href="https://securitytxt.org/"><u>Security.txt</u></a> is becoming a widely adopted standard among security-conscious organizations. By providing a common location and format for vulnerability disclosure information, it helps bridge the gap between security researchers and organizations. This initiative is supported by major companies and aligns with global security best practices. By offering an automated security.txt generator for free, we aim to empower all of our users to enhance their security measures without additional costs.</p><p>In 2020, Cloudflare published the Cloudflare Worker for the security.txt generator as an <a href="https://github.com/cloudflare/securitytxt-worker"><u>open-source project on GitHub</u></a>, demonstrating our commitment to enhancing web security. This tool is actively used by Cloudflare to streamline vulnerability disclosure processes. However, over the past few years, we've observed a growing demand from our customers for an easier way to implement this standard. In response to this demand and to further support the adoption of security.txt across the Internet, we integrated it directly into our dashboard, making it simple for all our users to enhance their security practices. You can learn more about the initial release and its impact in our previous blog post <a href="https://blog.cloudflare.com/security-dot-txt/"><u>here</u></a>. </p>
    <div>
      <h3>Who can use the free Cloudflare security.txt generator</h3>
      <a href="#who-can-use-the-free-cloudflare-security-txt-generator">
        
      </a>
    </div>
    <p>This feature is designed for any Cloudflare user who manages a website, from <a href="https://www.cloudflare.com/small-business/">small business owners</a> to large enterprises, from developers to security professionals. Whether you're a seasoned security expert or new to website management, this tool provides an easy way to create and manage your security.txt file in your Cloudflare account, ensuring that you're prepared to handle vulnerability reports effectively.</p>
    <div>
      <h3>Technical insights: leveraging Cloudflare’s tools</h3>
      <a href="#technical-insights-leveraging-cloudflares-tools">
        
      </a>
    </div>
    <p>Our security.txt generator is seamlessly integrated into our dashboard. Here's how it works:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2z7tEph5hu4T7LCkZU5KFQ/8bc9c8efe332cda618c5dd8bb51e38da/image1.png" />
          </figure><p>When the user enters their data in the Cloudflare Dashboard, the information is immediately stored in a highly available and geo-redundant <a href="https://blog.cloudflare.com/performance-isolation-in-a-multi-tenant-database-environment/"><u>PostgreSQL database</u></a>. This ensures that all user data is securely kept and can be accessed quickly from any location within our global network.</p><p>Instead of creating a static file at the point of data entry, we use a dynamic approach. When a request for the security.txt file is made via the standard .well-known path specified by <a href="https://www.rfc-editor.org/rfc/rfc9116"><u>RFC 9116</u></a>, our system dynamically constructs the file using the latest data from our database. This method ensures that any updates made by users are reflected in real-time without requiring manual intervention or file regeneration. The data entered by users is synchronized across Cloudflare’s global network using our <a href="https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale/"><u>Quicksilver</u></a> technology. This allows for rapid propagation of changes, ensuring that any updates to the security.txt file are available almost instantaneously across all servers.</p><p>Each security.txt file includes an expiration timestamp, which is set during the initial configuration. This timestamp helps alert users when their information may be outdated, encouraging them to review and update their details regularly. For example, if a user sets an expiration date 365 days into the future, they will receive notifications as this date approaches, prompting them to refresh their information.</p><p>To ensure compliance with best practices, we also support optional fields such as encryption keys and signatures within the security.txt file. 
Users can link to their PGP keys for secure communications or include signatures to verify authenticity, enhancing trust with security researchers.</p><p>Users who prefer automation can manage their security.txt files through our <a href="https://developers.cloudflare.com/api/operations/update-security-txt"><u>API</u></a>, allowing seamless integration with existing workflows and tools. This feature enables developers to programmatically update their security.txt configurations without manual dashboard interactions.</p><p>Users can also find a view of any missing security.txt files via <a href="https://developers.cloudflare.com/security-center/security-insights/"><u>Security Insights</u></a> under Security Center.</p>
    <div>
      <h3>Available now, and free for all Cloudflare users</h3>
      <a href="#available-now-and-free-for-all-cloudflare-users">
        
      </a>
    </div>
    <p>By making this feature available to all our users at no cost, we aim to support the security efforts of our entire community, helping you protect your digital assets and foster trust with your audience.</p><p>With the introduction of our free security.txt generator, we're taking a significant step towards simplifying security management for everyone. Whether you're a small business owner or a large enterprise, this tool empowers you to adopt industry best practices and ensure that you're ready to handle vulnerability reports effectively. <a href="https://developers.cloudflare.com/security-center/infrastructure/security-file/"><u>Set up security.txt</u></a> on your websites today!</p> ]]></content:encoded>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Security Posture]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Standards]]></category>
            <category><![CDATA[security.txt]]></category>
            <guid isPermaLink="false">1uvkAn3IB6vSEO91XsPyAO</guid>
            <dc:creator>Alexandra Moraru</dc:creator>
            <dc:creator>Sam Khawasé</dc:creator>
        </item>
        <item>
            <title><![CDATA[Expanding Cloudflare's support for open source projects with Project Alexandria]]></title>
            <link>https://blog.cloudflare.com/expanding-our-support-for-oss-projects-with-project-alexandria/</link>
            <pubDate>Fri, 27 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we believe in the power of open source. With Project Alexandria, our expanded open source program, we’re helping open source projects have a sustainable and scalable future, providing them with the tools and protection needed to thrive. ]]></description>
<content:encoded><![CDATA[ <p>At Cloudflare, we believe in the power of open source. It’s more than just code; it’s the spirit of collaboration, innovation, and shared knowledge that drives the Internet forward. Open source is the foundation upon which the Internet thrives, allowing developers and creators from around the world to contribute to a greater whole.</p><p>But oftentimes, open source maintainers struggle with the costs associated with running their projects and providing access to users all over the world. We’ve had the privilege of supporting incredible open source projects such as <a href="https://git-scm.com/"><u>Git</u></a> and the <a href="https://www.linuxfoundation.org/"><u>Linux Foundation</u></a> through our <a href="https://blog.cloudflare.com/cloudflare-new-oss-sponsorships-program/"><u>open source program</u></a> and learned first-hand about the places where Cloudflare can help the most.</p><p>Today, we're introducing a streamlined and expanded open source program: Project Alexandria. The ancient city of Alexandria is known for hosting a prolific library and a lighthouse that was one of the Seven Wonders of the Ancient World. The Lighthouse of Alexandria served as a beacon of culture and community, welcoming people from afar into the city. We think Alexandria is a great metaphor for the role open source projects play as a beacon for developers around the world and a source of knowledge that is core to making a better Internet. </p><p>This program offers recurring annual credits to even more open source projects to provide our products for free. In the past, we offered an upgrade to our Pro plan, but now we’re offering upgrades tailored to the size and needs of each project, along with access to a broader range of products like <a href="https://workers.cloudflare.com/"><u>Workers</u></a>, <a href="https://pages.cloudflare.com/"><u>Pages</u></a>, and more. 
Our goal with Project Alexandria is to ensure every OSS project not only survives but thrives, with access to Cloudflare’s enhanced security, performance optimization, and developer tools — all at no cost.</p>
    <div>
      <h2>Building a program based on your needs</h2>
      <a href="#building-a-program-based-on-your-needs">
        
      </a>
    </div>
    <p>We realize that open source projects have different needs. Some projects, like package repositories, may be most concerned about storage and transfer costs. Other projects need help protecting them from DDoS attacks. And some projects need a robust developer platform to enable them to quickly build and deploy scalable and secure applications.</p><p>With our new program we’ll work with your project to help unlock the following based on your needs:</p><ul><li><p>An upgrade to a Cloudflare Pro, Business, or Enterprise plan, which will give you more flexibility with more <a href="https://developers.cloudflare.com/rules/"><u>Cloudflare Rules</u></a> to manage traffic with, Image Optimization with <a href="https://developers.cloudflare.com/images/polish/"><u>Polish</u></a> to accelerate the speed of image downloads, and enhanced security with <a href="https://www.cloudflare.com/en-gb/application-services/products/waf/"><u>Web Application Firewall (WAF)</u></a>, <a href="https://developers.cloudflare.com/waf/analytics/security-analytics/"><u>Security Analytics</u></a>, and <a href="https://developers.cloudflare.com/page-shield/"><u>Page Shield</u></a>, to protect projects from potential threats and vulnerabilities.</p></li><li><p>Increased requests to Cloudflare <a href="https://workers.cloudflare.com/"><u>Workers</u></a> and <a href="https://pages.cloudflare.com/"><u>Pages</u></a>, allowing you to handle more traffic and scale your applications globally.</p></li><li><p>Increased <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> storage for builds and artifacts, ensuring you have the space needed to store and access your project’s assets efficiently.</p></li><li><p>Enhanced <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Zero Trust</u></a> access, including <a href="https://developers.cloudflare.com/cloudflare-one/policies/browser-isolation/"><u>Remote Browser Isolation</u></a>, no user limits, and extended activity log retention to give 
you deeper insights and more control over your project’s security.</p></li></ul><p>Every open source project in the program will receive additional resources and support through a dedicated <a href="https://discord.com/channels/595317990191398933/1284158129474506802"><u>channel</u></a> on our <a href="https://discord.cloudflare.com"><u>Discord server</u></a>. And if there’s something you think we can do to help that we don’t currently offer, we’re here to figure out how to make it happen.</p><p>Many open source projects run within the limits of Cloudflare’s generous <a href="https://www.cloudflare.com/en-gb/plans/"><u>free tiers</u></a>. Our mission to help build a better Internet means that cost should not be a barrier to creating, securing, and distributing your open source packages globally, no matter the size of the project. Indie or niche open source projects can still run for free without the need for credits. For larger open source projects, the annual recurring credits are available to you, so your money can continue to be reinvested into innovation, instead of paying for infrastructure to store, secure, and deliver your packages and websites. </p><p>We’re dedicated to supporting projects that are not only innovative but also crucial to the continued growth and health of the Internet. The criteria for the program remain the same:</p><ul><li><p>Operate solely on a non-profit basis and/or otherwise align with the project mission.</p></li><li><p>Be an open source project with a <a href="https://opensource.org/licenses/"><u>recognized OSS license</u></a>.</p></li></ul><p>If you’re an open source project that meets these requirements, you can <a href="https://www.cloudflare.com/lp/project-alexandria/"><u>apply for the program here</u></a>.</p>
    <div>
      <h2>Empowering the Open Source community</h2>
      <a href="#empowering-the-open-source-community">
        
      </a>
    </div>
    <p>We’re incredibly lucky to have open source projects that we admire, and the talented people behind those <a href="https://developers.cloudflare.com/sponsorships/"><u>projects</u></a>, as part of our program — including the <a href="https://openjsf.org/"><u>OpenJS Foundation</u></a>, <a href="https://opentofu.org/"><u>OpenTofu</u></a>, and <a href="https://julialang.org/"><u>JuliaLang</u></a>.</p><p><b>OpenJS Foundation</b></p><p><a href="https://github.com/nodejs"><u>Node.js</u></a> has been part of our OSS Program since 2019, and we’ve recently partnered with the <a href="https://openjsf.org/"><u>OpenJS Foundation</u></a> to provide technical support and infrastructure improvements to other critical JavaScript projects hosted at the foundation, including <a href="https://github.com/fastify/fastify"><u>Fastify</u></a>, <a href="https://github.com/jquery/jquery"><u>jQuery</u></a>, <a href="https://github.com/electron/electron"><u>Electron</u></a>, and <a href="https://github.com/NativeScript/NativeScript"><u>NativeScript</u></a>.</p><p>One prominent example of the <a href="https://openjsf.org/"><u>OpenJS Foundation</u></a> using Cloudflare is the Node.js CDN Worker. It’s currently in active development by the Node.js Web Infrastructure and Build teams and aims to serve all Node.js release assets (binaries, documentation, etc.) provided on their website. </p><p><a href="https://x.com/NodeConfEU/status/1823676122648715581"><u>Aaron Snell</u></a> explained that these release assets are currently being served by a single static origin file server fronted by Cloudflare. This worked fine up until a few years ago, when issues began to pop up with new releases. With a new release came a cache purge, meaning that all the requests for the release assets were cache misses, causing Cloudflare to forward requests directly to the static file server, overloading it. 
Because Node.js releases nightly builds, this issue occurs every day.</p><p>The CDN Worker plans to fix this by using Cloudflare Workers and R2 to serve requests for the release assets, taking all the load off the static file server, resulting in improved availability for Node.js downloads and documentation, and ultimately making the process more sustainable in the long run.</p><p><b>OpenTofu</b></p><p><a href="https://github.com/opentofu/opentofu"><u>OpenTofu</u></a> has been focused on building a free and open alternative to proprietary infrastructure-as-code platforms. One of their major challenges has been ensuring the reliability and scalability of their registry while keeping costs low. Cloudflare's <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> storage and caching services provided the perfect fit, allowing <a href="https://github.com/opentofu/opentofu"><u>OpenTofu</u></a> to serve static files at scale without worrying about bandwidth or performance bottlenecks.</p><p>The OpenTofu team noted that it was paramount for OpenTofu to keep the costs of running the registry as low as possible, both in terms of bandwidth and in human cost. However, they also needed to make sure that the registry had an uptime close to 100%, since thousands upon thousands of developers would be left without a means to update their infrastructure if it went down.</p><p>The registry codebase (written in Go) pre-generates all possible responses of the OpenTofu Registry API and uploads the static files to an R2 bucket. With R2, OpenTofu has been able to run the registry essentially for free, with no servers or scaling issues to worry about.</p><p><b>JuliaLang</b></p><p><a href="https://github.com/JuliaLang/julia"><u>JuliaLang</u></a> has recently joined our OSS Sponsorship Program, and we’re excited to support their critical infrastructure to ensure the smooth operation of their ecosystem. 
A key aspect of this support is enabling the use of Cloudflare’s services to help <a href="https://github.com/JuliaLang/julia"><u>JuliaLang</u></a> deliver packages to its user base.</p><p>According to <a href="https://staticfloat.github.io/"><u>Elliot Saba</u></a>, JuliaLang had been using Amazon Lightsail as a cost-effective global CDN to serve packages to their user base. However, as their user base grew, they would occasionally exceed their bandwidth limits and rack up serious cloud costs, not to mention experience degraded performance due to load balancer VMs getting overloaded by traffic spikes. Now JuliaLang is using Cloudflare <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>, and the speed and reliability of <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2 object storage</a> have so far exceeded those of their previous within-datacenter solutions. The lack of bandwidth charges means JuliaLang now gets faster, more reliable service for less than a tenth of its previous spend.</p>
    <div>
      <h2>How can we help?</h2>
      <a href="#how-can-we-help">
        
      </a>
    </div>
    <p>If your project fits our criteria, and you’re looking to reduce costs and eliminate surprise bills, we invite you to apply! We’re eager to help the next generation of open source projects make their mark on the Internet.</p><p>For more details and to apply, visit our new <a href="https://www.cloudflare.com/lp/project-alexandria/"><u>Project Alexandria page</u></a>. And if you know other projects that could benefit from this program, please spread the word!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">5LrF3eCtonOcP2Sf5BSVpe</guid>
            <dc:creator>Veronica Marin</dc:creator>
            <dc:creator>Gabby Shires</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2024 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflare-2024-annual-founders-letter/</link>
            <pubDate>Sun, 22 Sep 2024 15:51:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched on September 27, 2010. This week we celebrate our fourteenth birthday ]]></description>
            <content:encoded><![CDATA[ <p>This week Cloudflare will celebrate the fourteenth anniversary of our launch. We think of it as our birthday. As is our tradition <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>ever since our first anniversary</u></a>, we use our Birthday Week each year to launch new products that we think of as gifts back to the Internet. For the last five years, we have also taken this time to write our <a href="https://blog.cloudflare.com/tag/founders-letter/"><u>annual Founders’ Letter</u></a> reflecting on our business and the state of the Internet. This year is no different.</p><p>That said, one thing that is different, as you may have noticed, is that we've had fewer public innovation weeks over the last year than usual. That's been because a <a href="https://blog.cloudflare.com/thanksgiving-2023-security-incident/"><u>couple</u></a> of <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/"><u>incidents</u></a> nearly a year ago caused us to focus on improving our internal systems over releasing new features. We're incredibly proud of our team's focus on making security, resilience, and reliability the top priorities for the last year. Today, Cloudflare's underlying platform, and the products that run on top of it, are <a href="https://blog.cloudflare.com/major-data-center-power-failure-again-cloudflare-code-orange-tested/"><u>significantly more robust than ever before</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16Eu23FEtjrfzUCYjwbWuh/0d8f35f2bbf4841862bebbeaf13e069d/pencil1.png" />
          </figure><p>With that work largely complete, and our platform in its strongest shape ever, we plan to pick back up the usual cadence of new product launches that we're known for. This Birthday Week, you'll see many of them as we roll out performance improvements only our Connectivity Cloud can deliver to accelerate all our customers' websites by a mind-blowing 45 percent (automatically and for free), launch new features to make our developer platform faster and easier to use, plug the web's last encryption hole, accelerate AI inference globally, provide new levels of support for startups and the open source community, and much, much more.</p><p>This is easily our favorite week of the year because of how it allows our team to give back to the Internet and live up to our mission.</p>
    <div>
      <h2>Challenges for the Internet ahead</h2>
      <a href="#challenges-for-the-internet-ahead">
        
      </a>
    </div>
    <p>The robustness of Cloudflare's platform today contrasts with what feels like an Internet that has become far more fragile over the past year. When we first articulated our mission as helping build a better Internet, we assumed that “better” meant one that was faster, more reliable, more secure, more private, and more efficient. But today it seems like something more fundamental is at stake.</p><p>The last year has been characterized by a normalization of <a href="https://blog.cloudflare.com/tag/internet-shutdown/">Internet shutdowns</a> and limits on Internet access around the world. What were once tactics reserved for authoritarian regimes have spread to even Western democratic nations, where courts and legislatures have been emboldened to restrict fundamental protocols to control perceived harms.</p><p>We’ve seen a dramatic uptick in courts of limited jurisdiction ordering sites they found objectionable blocked globally at the DNS level, nations turning off the Internet for most of their citizens in the name of preventing cheating on standardized tests (while it remains on in wealthy and politically connected neighborhoods), ISPs proposing legislation to impose new taxes on content creators, and whole services being banned in countries that had previously declared that more Internet was always better than less. </p><p>This is, unfortunately, a dark time in the history of the Internet.</p>
    <div>
      <h2>AI’s Threat to Original Content Creation</h2>
      <a href="#ais-threat-to-original-content-creation">
        
      </a>
    </div>
    <p>At the same time, the business model of the web is eroding. The quid pro quo of the web’s last era — the search era — was that you let a company like Google scrape data from your website in exchange for them sending you traffic. In that model, content creators could then generate value from that traffic through ads, selling products, or just getting the ego boost of knowing that someone cares enough about the thing you created to take the time to view it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cMITvSWFI9yZpfwNC4hBU/4bd45d7fc413be97893e4a00bc096e2b/pen1.png" />
          </figure><p>That same quid pro quo does not hold up in the era we’re moving into — the AI era — where answers are delivered to questions without ever having to visit the authoritative source. And, if content creators can no longer generate value from their creations, it’s inevitable they’ll generate less content and we’ll all, including the AI companies that need original content to train their models, lose out as a result.</p>
    <div>
      <h2>Picking Up the Mantle</h2>
      <a href="#picking-up-the-mantle">
        
      </a>
    </div>
    <p>The Internet remains a miracle, but it no longer feels inevitable. It is under attack from active adversaries and beginning to rot from benign neglect. And, with the largest tech companies distracted by their own regulatory challenges, it finds itself without a clear champion. We’re proud of our team for picking up that mantle. At Cloudflare, we believe in the Internet and we will fight for it.</p><p>That's why we invest in our public policy team to educate lawmakers and jurists on how best to control the harms created by some limited corners of the Internet without destabilizing its underlying protocols. It’s why we believe it’s important to provide so many of our services for free. And it's why this Birthday Week we'll announce new ways for the AI systems that hunger for original content to compensate content creators in a way that is equitable. Without a new paradigm, we worry that the incentives that allowed the Internet to flourish will shrivel and its miracle will fade.</p><p>Missions matter. Ours is to help build a better Internet. We, or one of our senior executives, still talk to every candidate we hire before extending an offer because we want to ensure we communicate the importance of our mission. One of the most common questions we’re asked is how we plan to preserve Cloudflare's culture. Our answer is always the same: the goal isn't how to preserve our culture, it's always how to improve it. The same has to be true for the Internet. We can't just try to preserve the past, we need to imagine new ways to improve it.</p><p>That requires champions to stand up and imagine a better Internet. It’s been too long since you’ve read a positive story about the Internet, even though it continues to be a miracle. We are proud that we have the team, platform, and mantle to not just preserve, but improve on, that miracle. It is our mission and what motivates everything we do at Cloudflare. And nowhere is that more on display than during the week ahead. 
If you too are inspired by our mission, we encourage you to <a href="https://www.cloudflare.com/careers/"><u>apply to join our team</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1LLnEP9Y10dOWw4NWAEcHe/700027bd46e496ff07c2910a3887b2cf/pen2.png" />
          </figure><p>Stay tuned for an incredible Birthday Week of new products that make progress on our mission. Thank you to our team around the world for everything you do. Cloudflare is stronger because of the work we've accomplished, and the Internet will be stronger because of Cloudflare.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KvuXDwtmqb0nDoJqIWQWd/db265cb24d224458000d78a41cd55055/matthew-michelle.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">7puHT1ajSilk9b0LGo3s2H</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making progress on routing security: the new White House roadmap]]></title>
            <link>https://blog.cloudflare.com/white-house-routing-security/</link>
            <pubDate>Mon, 02 Sep 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[ On September 3, 2024, the White House published a report on Internet routing security. We’ll talk about what that means and how you can help. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet can feel like magic. When you load a webpage in your browser, many simultaneous requests for data fly back and forth to remote servers. Then, often in less than one second, a website appears. Many people know that DNS is used to look up a hostname and resolve it to an IP address, but fewer understand how data flows from your home network to the network that controls the IP address of the web server.</p><p>The Internet is an interconnected network of networks, operated by thousands of independent entities. To allow these networks to communicate with each other, in 1989, <a href="https://weare.cisco.com/c/r/weare/amazing-stories/amazing-things/two-napkin.html"><u>on the back of two napkins</u></a>, three network engineers devised the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol (BGP)</u></a>. It allows these independent networks to advertise routes for IP prefixes they own, or that are reachable through their networks. At that time, Internet security wasn’t a big deal — <a href="https://www.cloudflare.com/learning/ssl/what-is-ssl/"><u>SSL</u></a>, initially developed to secure websites, didn’t arrive until 1995, six years later. So BGP wasn’t originally built with security in mind, but over time, security and availability concerns have emerged.</p><p>Today, the <a href="https://bidenwhitehouse.archives.gov/oncd/"><u>White House Office of the National Cyber Director</u></a> issued the <a href="https://bidenwhitehouse.archives.gov/oncd/briefing-room/2024/09/03/fact-sheet-biden-harris-administration-releases-roadmap-to-enhance-internet-routing-security/"><u>Roadmap to Enhancing Internet Routing Security</u></a>, and we’re excited to highlight their recommendations. But before we get into that, let’s provide a quick refresher on what BGP is and why routing security is so important.</p>
    <div>
      <h2>BGP: pathways through the Internet</h2>
      <a href="#bgp-pathways-through-the-internet">
        
      </a>
    </div>
    <p>BGP is the core signaling protocol used on the Internet. It’s fully distributed, and managed independently by all the individual operators of the Internet. With BGP, operators send messages to their neighbors (other networks they are directly connected with, either physically or through an <a href="https://www.cloudflare.com/learning/cdn/glossary/internet-exchange-point-ixp/"><u>Internet Exchange</u></a>) that indicate their network can be used to reach a specific IP prefix. These IP prefixes can be resources the network itself owns, such as <a href="https://radar.cloudflare.com/routing/prefix/104.16.128.0/20"><u>104.16.128.0/20</u></a> for Cloudflare, or resources that are reachable through their network, by transiting the network.</p><p>By exchanging all of this information between peers, each individual network on the Internet can form a full map of what the Internet looks like, and ideally, how to reach each IP address on the Internet. This map is in an almost constant state of flux: networks disappear from the Internet for a wide variety of reasons, ranging from scheduled maintenance to catastrophic failures, like the <a href="https://blog.cloudflare.com/october-2021-facebook-outage/"><u>Facebook incident in 2021</u></a>. On top of this, the ideal path to take from point A (your home ISP) to point B (Cloudflare) can change drastically, depending on routing decisions made by your home ISP, and any or all intermediate networks between your home ISP and Cloudflare (<a href="https://blog.cloudflare.com/how-verizon-and-a-bgp-optimizer-knocked-large-parts-of-the-internet-offline-today/"><u>here’s an example from 2019</u></a>). These <a href="https://blog.cloudflare.com/prepends-considered-harmful/"><u>routing decisions</u></a> are entirely arbitrary, and left to the owners of the networks. 
Performance and security can be considered, but neither has historically been made visible through BGP itself.</p><p>As all the networks can independently make their own routing decisions, there are a lot of individual points where things can go wrong. Going wrong can have multiple meanings here: this can range from routing loops, causing Internet traffic to go back and forth repeatedly between two networks, never reaching its destination, to more malicious problems, such as traffic interception or traffic manipulation.</p><p>As routing security wasn’t accounted for in that initial two-napkin draft, it is easy for a malicious actor on the Internet to <a href="https://www.cloudflare.com/en-gb/learning/security/glossary/bgp-hijacking/"><u>pretend to either be an originating network</u></a> (where they claim to own the IP prefix, positioning themselves as the destination network), or they can pretend to be a viable middle network, getting traffic to transit through their network.</p><p>In either of these examples, the actor can manipulate the Internet traffic of unsuspecting end users and potentially steal passwords, cryptocurrency, or any other data that can be of value. While transport security (<a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a> for HTTP/1.x and HTTP/2, <a href="https://blog.cloudflare.com/the-road-to-quic/"><u>QUIC</u></a> for HTTP/3) has reduced this risk significantly, there are still ways this can be bypassed. Over time, the Internet community has acknowledged the security concerns with BGP, and has built infrastructure to mitigate some of these problems. </p>
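<p>The “full map” described above is, inside each router, effectively a table of prefixes and the paths used to reach them; when forwarding a packet, the most specific (longest) covering prefix wins. Here is a minimal sketch of that longest-prefix match over a hypothetical routing table (AS13335 is Cloudflare; the neighbor ASNs are private-use numbers chosen for illustration, not real paths):</p>

```python
from ipaddress import ip_address, ip_network

# A toy BGP routing table: prefix -> AS path learned from a neighbor.
# AS13335 is Cloudflare; 64500/64501 are illustrative private-use ASNs.
routes = {
    "104.16.0.0/13":   [64500, 13335],  # coarse route via one neighbor
    "104.16.128.0/20": [64501, 13335],  # more specific route via another
    "0.0.0.0/0":       [64500],         # default route
}

def best_route(dest: str):
    """Longest-prefix match: the most specific covering prefix wins."""
    addr = ip_address(dest)
    matches = [(ip_network(p), path) for p, path in routes.items()
               if addr in ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)

net, path = best_route("104.16.132.229")
print(net, path)  # 104.16.128.0/20 [64501, 13335]
```

<p>Note that a hijacker announcing an even more specific prefix (say, a /24 inside the /20) would win this comparison on every router the announcement reaches, which is part of why the origin validation described below matters.</p>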
    <div>
      <h3>BGP security: The RPKI is born</h3>
      <a href="#bgp-security-the-rpki-is-born">
        
      </a>
    </div>
    <p>This journey has culminated in the development and adoption of the Resource Public Key Infrastructure (RPKI). The RPKI is a <a href="https://research.cloudflare.com/projects/internet-infrastructure/pki/"><u>PKI</u></a>, just like the Web PKI which provides security certificates for the websites we browse (the “s” in https). The RPKI is a PKI designed specifically with the Internet in mind: it provides core constructs for <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-my-ip-address/"><u>IP addresses</u></a> and <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous System Numbers (ASNs</u></a>), the numbers used to identify the individual operating networks mentioned earlier.</p><p>Through the RPKI, it’s possible for an operator to establish a cryptographically secure relationship between the IP prefixes they originate and their ASN, through the issuance of <a href="https://www.arin.net/resources/manage/rpki/roa_request/"><u>Route Origin Authorization records (ROAs)</u></a>. These ROAs can be used by all other networks on the Internet to validate that the IP prefix update they just received for a given origin network actually belongs to that origin network, a process called <a href="https://blog.cloudflare.com/rpki-updates-data/"><u>Route Origin Validation (ROV)</u></a>. If a malicious party tries to hijack an IP prefix that has a ROA to their (different) origin network, validating networks would know this update is invalid and reject it, maintaining origin security and ensuring reachability.</p>
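<p>To make ROV concrete, here is a minimal sketch (not any router’s actual implementation) of the classification logic standardized in RFC 6811: an announcement is <i>valid</i> if some covering ROA authorizes its origin ASN at that prefix length, <i>invalid</i> if covering ROAs exist but none match, and <i>not-found</i> if no ROA covers the prefix. AS13335 is Cloudflare’s ASN; AS64512 below is an arbitrary private-use number standing in for a hijacker.</p>

```python
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass(frozen=True)
class ROA:
    prefix: str      # authorized prefix, e.g. "1.1.1.0/24"
    asn: int         # the origin AS authorized to announce it
    max_length: int  # longest (most specific) announcement allowed

def rov(announced: str, origin_asn: int, roas: list[ROA]) -> str:
    """Classify a BGP announcement per RFC 6811 Route Origin Validation."""
    ann = ip_network(announced)
    covering = [r for r in roas if ann.subnet_of(ip_network(r.prefix))]
    if not covering:
        return "not-found"  # no ROA covers this prefix
    for r in covering:
        if r.asn == origin_asn and ann.prefixlen <= r.max_length:
            return "valid"
    return "invalid"        # covered, but wrong origin or too specific

# Cloudflare's ROA for the prefix containing 1.1.1.1:
roas = [ROA("1.1.1.0/24", 13335, 24)]
print(rov("1.1.1.0/24", 13335, roas))  # valid
print(rov("1.1.1.0/24", 64512, roas))  # invalid: another AS claims the prefix
print(rov("1.1.1.0/25", 13335, roas))  # invalid: more specific than maxLength
print(rov("8.8.8.0/24", 64512, roas))  # not-found: no covering ROA
```

<p>A validating network drops (or depreferences) announcements classified as invalid. Announcements classified as not-found are still accepted, which is why it matters that every network publishes ROAs for its prefixes.</p>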
    <div>
      <h2>Why does BGP security matter? Examples of route hijacks and leaks</h2>
      <a href="#why-does-bgp-security-matter-examples-of-route-hijacks-and-leaks">
        
      </a>
    </div>
    <p>But why should you care about BGP? And more importantly, why does the White House care about BGP? Put simply: BGP (in)security can cost people and companies millions of dollars and cause widespread disruptions for critical services.</p><p>In February 2022, Korean crypto platform KLAYswap was the target of a <a href="https://manrs.org/2022/02/klayswap-another-bgp-hijack-targeting-crypto-wallets/"><u>malicious BGP hijack</u></a>, which was used to steal $1.9 million of cryptocurrency from their customers. The attackers were able to serve malicious code that mimicked the service KLAYswap was using for technical support. They did this by announcing the IP prefix used to serve the JavaScript SDK KLAYswap was using. When other networks accepted this announcement, end user traffic loading the technical support page instead received malicious JavaScript, which was used to drain customer wallets. As the attackers hijacked the IP address, they were also able to register a <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a> for the domain name used to serve the SDK. As a result, nothing looked out of the ordinary for KLAYswap’s customers until they noticed their wallets had been drained.</p><p>However, not all BGP problems are intentional hijacks. In March 2022, <a href="https://radar.cloudflare.com/as8342"><u>RTComm (AS8342)</u></a>, a Russian ISP, announced itself as the origin of <a href="https://radar.cloudflare.com/routing/prefix/104.244.42.0/24"><u>104.244.42.0/24</u></a>, which is an IP prefix actually owned by <a href="https://radar.cloudflare.com/as13414"><u>Twitter (now X) (AS13414)</u></a>. In this case, researchers have drawn a similar conclusion: RTComm wanted to block its users from accessing Twitter, but inadvertently advertised the route to its peers and upstream providers. 
Thankfully, the impact was limited, in large part due to Twitter issuing ROA records for their IP prefixes, which meant the hijack was blocked at all networks that had implemented ROV and were validating announcements.</p><p>Inadvertent incorrect advertisements passing from one network to another, or route leaks, can happen to anyone, even Cloudflare. Our <a href="https://1.1.1.1/dns"><u>1.1.1.1 public DNS service</u></a> — used by millions of consumers and businesses — is often the unintended victim. Consider this situation (versions of which have happened numerous times): a network engineer running a local ISP is testing a configuration on their router and announces to the Internet that you can reach the IP address 1.1.1.1 through their network. They will often pick this address because it’s easy to input on the router and observe in network analytics. They accidentally push that change out to all their peer networks — the networks they’re connected to — and now, if proper routing security isn’t in place, users on multiple networks around the Internet trying to reach 1.1.1.1 might be directed to this local ISP where there is no DNS service to be found. This can lead to widespread outages.</p><p>The types of routing security measures in the White House roadmap can prevent these issues. In the case of 1.1.1.1, <a href="https://rpki.cloudflare.com/?view=explorer&amp;prefix=1.1.1.0%2F24"><u>Cloudflare has ROAs in place</u></a> that tell the Internet that we originate the IP prefix that contains 1.1.1.1. If someone else on the Internet is advertising 1.1.1.1, that’s an invalid route, and other networks should stop accepting it. In the case of KLAYswap, had there been ROAs in place, other networks could have used common filtering techniques to filter out the routes pointing to the attackers’ malicious JavaScript. 
So now let’s talk more about the plan the White House has to improve routing security on the Internet, and how the US government developed its recommendations.</p>
    <div>
      <h2>Work leading to the roadmap</h2>
      <a href="#work-leading-to-the-roadmap">
        
      </a>
    </div>
    <p>The new routing security roadmap from the <a href="https://www.whitehouse.gov/oncd/"><u>Office of the National Cyber Director (ONCD)</u></a> is the product of years of work, throughout both government and industry. The <a href="https://www.nist.gov/"><u>National Institute of Standards and Technology (NIST)</u></a> has been a longstanding proponent of improving routing security, developing <a href="https://www.nist.gov/news-events/news/2014/05/nist-develops-test-and-measurement-tools-internet-routing-security"><u>test and measurement</u></a> <a href="https://rpki-monitor.antd.nist.gov/"><u>tools</u></a> and publishing <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1800-14.pdf"><u>special publication 1800-14</u></a> on Protecting the Integrity of Internet Routing, among many other initiatives. They are active participants in the Internet community, and an important voice for routing security.</p><p>Cloudflare first started publicly <a href="https://blog.cloudflare.com/is-bgp-safe-yet-rpki-routing-security-initiative/"><u>advocating</u></a> for adoption of security measures like RPKI after a <a href="https://blog.cloudflare.com/how-verizon-and-a-bgp-optimizer-knocked-large-parts-of-the-internet-offline-today/"><u>massive BGP route leak</u></a> took down a portion of the Internet, including websites using Cloudflare’s services, in 2019. </p><p>Since that time, the federal government has increasingly recognized the need to elevate efforts to secure Internet routing, a process that Cloudflare has helped support along the way. 
The <a href="https://www.solarium.gov/"><u>Cyberspace Solarium Commission report</u></a>, published in 2020, encouraged the government to develop a strategy and recommendations to define “common, implementable guidance for securing the DNS and BGP.”    </p><p>In February 2022, the Federal Communications Commission <a href="https://www.fcc.gov/document/fcc-launches-inquiry-internet-routing-vulnerabilities"><u>launched</u></a> a notice of inquiry to better understand Internet routing. Cloudflare <a href="https://www.fcc.gov/ecfs/document/10412234101460/1"><u>responded</u></a> with a detailed explanation of our history with RPKI and routing security. In July 2023, the FCC, jointly with the Director of the <a href="https://cisa.gov/"><u>Cybersecurity and Infrastructure Security Agency</u></a>, held a <a href="https://www.fcc.gov/news-events/events/2023/07/bgp-security-workshop"><u>workshop</u></a> for stakeholders, with <a href="https://youtu.be/VQhoNX2Q0aM?si=VHbB5uc-0DzHaWpL&amp;t=11462"><u>Cloudflare as one of the presenters</u></a>. In June 2024, the FCC issued a <a href="https://docs.fcc.gov/public/attachments/FCC-24-62A1.pdf"><u>Notice of Proposed Rulemaking</u></a> that would require large service providers to develop security risk management plans and report on routing security efforts, including RPKI adoption. </p><p>The White House has been involved as well. In March 2023, they cited the need to secure the technical foundation of the Internet, from issues such as BGP vulnerabilities, as one of the strategic objectives of the <a href="https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf"><u>National Cybersecurity Strategy</u></a>. 
Citing those efforts, in May 2024, the Department of Commerce <a href="https://www.commerce.gov/news/press-releases/2024/05/us-department-commerce-implements-internet-routing-security"><u>issued</u></a> <a href="https://rpki.cloudflare.com/?view=explorer&amp;asn=3477"><u>ROAs signing some of its IP space</u></a>, and this roadmap strongly encourages other departments and agencies to do the same. All of those efforts and the focus on routing security have resulted in increased adoption of routing security measures. </p>
    <div>
      <h2>Report observations and recommendations</h2>
      <a href="#report-observations-and-recommendations">
        
      </a>
    </div>
    <p>The report released by the White House Office of the National Cyber Director details the current state of BGP security, and the challenges associated with Resource Public Key Infrastructure (RPKI) Route Origin Authorization (ROA) issuance and RPKI Route Origin Validation (ROV) adoption. It also provides network operators and government agencies with next steps and recommendations for BGP security initiatives. </p><p>One of the first recommendations is for all networks to create and publish ROAs. It’s important that every network issues ROAs for their IP prefixes, as it’s the only way for other networks to validate they are the authorized originator of those prefixes. If one network is advertising an IP address as their own, but a different network issued the ROA, that’s an important sign that something might be wrong!</p><p>As shown in the chart below from <a href="https://rpki-monitor.antd.nist.gov/"><u>NIST’s RPKI Monitor</u></a>, as of September 2024, at least 53% of all the IPv4 prefixes on the Internet have a valid ROA record available (IPv6 reached this milestone in late 2023), up from only 6% in 2017. (The metric is even better when measured as a percent of Internet traffic: data from <a href="https://kentik.com/"><u>Kentik</u></a>, a network observability company, <a href="https://www.kentik.com/blog/rpki-rov-deployment-reaches-major-milestone/"><u>shows</u></a> that 70.3% of Internet traffic is exchanged with IP prefixes that have a valid ROA.) This increase in the number of signed IP prefixes (ROAs) is foundational to secure Internet routing.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4f4Y1fXcdxYRxUhQYjxlWp/7f26d617648539980f2c8e65873139e4/image2.png" />
          </figure><p>Unfortunately, the US is lagging behind: <a href="https://radar.cloudflare.com/routing/us"><u>Only 39% of IP prefixes</u></a> originated by US networks have a valid ROA. This is not surprising, considering the US has significantly more Internet address resources than other parts of the world. However, the report highlights the need for the US to overcome the common barriers network operators face when implementing BGP security measures. Administrative challenges, the perception of risk, and prioritization and resourcing constraints are often cited as the problems networks face when attempting to move forward with ROV and RPKI.</p><p>A related area of the roadmap highlights the need for networks that allow their customers to control IP address space to still create ROAs for those addresses. The reality of how every ISP, government, and large business allocates its IP address space is undoubtedly messy, but that doesn’t reduce the importance of making sure that the correct entity is identified in the official records with a ROA. </p><p>A network signing routes for its IP addresses is an important step, but it isn’t enough. To prevent incorrect routes — malicious or not — from spreading around the Internet, networks need to implement Route Origin Validation (ROV) and implement other BGP best practices, outlined by <a href="https://manrs.org/"><u>MANRS</u></a> in their <a href="https://manrs.org/wp-content/uploads/2023/12/The_Zen_of_BGP_Sec_Policy_Nov2023.docx.pdf"><u>Zen Guide to Routing Security Policy</u></a>. If one network incorrectly announces itself as the origin for 1.1.1.1, that won’t have any effect beyond its own borders if no other networks pick up that invalid route. The Roadmap calls out filtering invalid routes as another action for network service providers. 
</p><p>As of <a href="https://blog.cloudflare.com/rpki-updates-data/"><u>2022</u></a>, our data<a href="https://blog.cloudflare.com/rpki-updates-data/"><u> showed</u></a> that around 15 percent of networks were validating routes. Ongoing measurements from APNIC show progress: this year about 20 percent <a href="https://stats.labs.apnic.net/rpki/XA"><u>of APNIC probes</u></a> globally correctly filter invalid routes with ROV. <a href="https://stats.labs.apnic.net/rpki/US"><u>In the US</u></a>, it’s 70 percent. Continued growth of ROV is a critical step towards achieving better BGP security.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Ne3sPYqAEytLjO0Vm53yA/ad573ba885e61d249d0a4601b70c8df6/image1.png" />
          </figure><p>Filtering out invalid routes is prominently highlighted in the report’s recommendations. While recognizing that there’s been dramatic improvement in filtering by the large transit networks, the first report recommendation is for network service providers — large and small — to fully deploy ROV. </p><p>In addition, the Roadmap proposes using the federal government’s considerable weight as a purchaser, writing, “<i>[Office of Management and Budget] should require the Federal Government’s contracted service providers to adopt and deploy current commercially-viable Internet routing security technologies.</i>” It goes on to say that grant programs, particularly broadband grants, “<i>should require grant recipients to incorporate routing security measures into their projects.</i>”</p><p>The roadmap doesn’t only cover well-established best practices, but also highlights emerging security technologies, such as <a href="https://datatracker.ietf.org/doc/draft-ietf-sidrops-aspa-profile/"><u>Autonomous System Provider Authorization (ASPA)</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc8205"><u>BGPsec</u></a>. ROAs only cover part of the BGP routing ecosystem, so additional work is needed to ensure we secure everything. It’s encouraging to see that the work being done by the wider community to address these concerns is acknowledged and, more importantly, actively followed.</p>
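<p>The origin-validation logic behind ROV can be sketched in a few lines. The following is an illustrative outline of the RFC 6811 classification procedure, not production validator code; the prefixes and AS numbers are hypothetical values drawn from documentation ranges, and a real validator would fetch and cryptographically verify ROAs from the RPKI rather than hard-code them:</p>

```python
from ipaddress import ip_network

# Hypothetical ROA set: (covered prefix, maxLength, authorized origin ASN).
# Real validators obtain and cryptographically verify these from the RPKI.
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64496),
    (ip_network("198.51.100.0/22"), 24, 64497),
]

def rov_state(prefix: str, origin_asn: int) -> str:
    """Classify a BGP route as valid / invalid / not-found (RFC 6811 sketch)."""
    route = ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in ROAS:
        if route.subnet_of(roa_prefix):  # a ROA covers this route
            covered = True
            # Valid only if the origin AS matches and the announced prefix
            # is no more specific than the ROA's maxLength allows.
            if origin_asn == roa_asn and route.prefixlen <= max_len:
                return "valid"
    # Covered by some ROA but never matched -> invalid (should be filtered);
    # not covered at all -> not-found (accepted, but unprotected).
    return "invalid" if covered else "not-found"
```

<p>Under this model, a hijack of 192.0.2.0/24 announced by AS64511 classifies as invalid and is dropped by any network doing ROV, while a prefix with no ROA at all is merely "not-found" and stays unprotected, which is why issuing ROAs matters as much as validating them.</p>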
    <div>
      <h2>What’s next for the Internet community</h2>
      <a href="#whats-next-for-the-internet-community">
        
      </a>
    </div>
    <p>The new roadmap is an important step in outlining actions that can be taken today to improve routing security. But as the roadmap itself recognizes, there’s more work to be done both in making sure that the steps are implemented, and that we continue to push routing security forward.</p><p>From an implementation standpoint, our hope is that the government’s focus on routing security through all the levers outlined in the roadmap will speed up ROA adoption, and encourage wider implementation of ROV and other best practices. At Cloudflare, we’ll continue to report on routing practices on <a href="https://radar.cloudflare.com/routing/us"><u>Cloudflare Radar</u></a> to help assess progress against the goals in the roadmap.</p><p>At a technical level, the wider Internet community has made massive strides in adopting RPKI ROV, and has set its sights on the next problem: we are securing the IP-to-originating network relationship, but what about the relationships between the individual networks?</p><p>Through the adoption of BGPsec and ASPA, network operators are able not only to validate the origin of a prefix, but also to validate the path used to reach it. These two new technical additions within the RPKI will combine with ROV to ultimately provide a fully secure signaling protocol for the modern Internet. The community has actively undertaken this work, and we’re excited to see it progress!</p><p>Outside the RPKI, the community has also ratified the formalization of customer roles through <a href="https://datatracker.ietf.org/doc/rfc9234/"><u>RFC9234: Route Leak Prevention and Detection Using Roles in UPDATE and OPEN Messages</u></a>. As this new BGP feature gains support, we’re hopeful that this will be another helpful tool in the operator toolbox in preventing route leaks of any kind.</p>
    <div>
      <h2>How you can help keep the Internet secure</h2>
      <a href="#how-you-can-help-keep-the-internet-secure">
        
      </a>
    </div>
    <p>If you’re a network operator, you’ll need to sign your routes and validate incoming prefixes. This consists of signing Route Origin Authorization (ROA) records, and performing Route Origin Validation (ROV). Route signing involves creating ROA records with your local <a href="https://www.nro.net/about/rirs/"><u>Regional Internet Registry (RIR)</u></a> and publishing them through its RPKI service. Route validation involves rejecting routes that conflict with a published ROA. This will help ensure that only legitimate routes get through. You can learn more about that <a href="https://blog.cloudflare.com/rpki-updates-data/"><u>here</u></a>.</p><p>If you’re not a network operator, head to <a href="http://isbgpsafeyet.com"><u>isbgpsafeyet.com</u></a>, and test your ISP. If your ISP is not keeping BGP safe, be sure to let them know how important it is. The government has pointed out that prioritization is a consistent problem, so let’s help increase the priority of routing security.</p>
    <div>
      <h2>A secure Internet is an open Internet</h2>
      <a href="#a-secure-internet-is-an-open-internet">
        
      </a>
    </div>
    <p>As the report points out, one of the keys to keeping the Internet open is ensuring that users can feel safe accessing any site they need to without worrying about attacks that they can’t control. Cloudflare wholeheartedly supports the US government’s efforts to bolster routing security around the world and is eager to work to ensure that we can help create a safe, open Internet for every user.</p> ]]></content:encoded>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Routing Security]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">10dR1e1P8WbOojN0JGTPOp</guid>
            <dc:creator>Mike Conlow</dc:creator>
            <dc:creator>Emily Music</dc:creator>
            <dc:creator>Tom Strickx</dc:creator>
        </item>
        <item>
            <title><![CDATA[The backbone behind Cloudflare’s Connectivity Cloud]]></title>
            <link>https://blog.cloudflare.com/backbone2024/</link>
            <pubDate>Tue, 06 Aug 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ Read through the latest milestones and expansions of Cloudflare's global backbone and how it supports our Connectivity Cloud and our services ]]></description>
            <content:encoded><![CDATA[ <p>The modern use of "cloud" arguably traces its origins to the cloud icon, omnipresent in network diagrams for decades. A cloud was used to represent the vast and intricate infrastructure components required to deliver network or Internet services without going into depth about the underlying complexities. At Cloudflare, we embody this principle by providing critical infrastructure solutions in a simple, user-friendly way. Our logo, featuring the cloud symbol, reflects our commitment to simplifying the complexities of Internet infrastructure for all our users.</p><p>This blog post provides an update on our infrastructure, focusing on our global backbone in 2024, and highlights its benefits for our customers, our competitive edge in the market, and its impact on our mission of helping build a better Internet. Since our last backbone-related <a href="http://blog.cloudflare.com/cloudflare-backbone-internet-fast-lane">blog post</a> in 2021, we have increased our backbone capacity (Tbps) by more than 500%, unlocking new use cases, as well as reliability and performance benefits for all our customers.</p>
    <div>
      <h3>A snapshot of Cloudflare’s infrastructure</h3>
      <a href="#a-snapshot-of-cloudflares-infrastructure">
        
      </a>
    </div>
    <p>As of July 2024, Cloudflare has data centers in 330 cities across more than 120 countries, each running Cloudflare equipment and services. The goal of delivering Cloudflare products and services everywhere remains consistent, although these data centers vary in the number of servers and amount of computational power.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38RRu7BaumWFemL23JcFLW/fd1e4aced5095b1e04384984c88e48be/BLOG-2432-2.png" />
          </figure><p>These data centers are strategically positioned around the world to ensure our presence in all major regions and to help our customers comply with local regulations. Together, they form a programmable, smart network that routes your traffic to the best possible data center for processing. This programmability allows us to keep sensitive data regional, with our <a href="https://www.cloudflare.com/data-localization/">Data Localization Suite solutions</a>, and within the constraints that our customers impose. Connecting these sites, and exchanging data with customers, public clouds, partners, and the broader Internet, is the role of our network, which is managed by our infrastructure engineering and network strategy teams. This network forms the foundation that makes our products lightning fast, ensures our global reliability and the security of every customer request, and helps customers comply with <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/challenges-data-sovereignty/">data sovereignty requirements</a>.</p>
    <div>
      <h3>Traffic exchange methods</h3>
      <a href="#traffic-exchange-methods">
        
      </a>
    </div>
    <p>The Internet is an interconnection of different networks and separate <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">autonomous systems</a> that operate by exchanging data with each other. There are multiple ways to exchange data, but for simplicity, we'll focus on two key ways these networks communicate: peering and IP transit. To better understand the benefits of our global backbone, it helps to understand these basic connectivity solutions we use in our network.</p><ol><li><p><b>Peering</b>: The voluntary interconnection of administratively separate Internet networks that allows for traffic exchange between users of each network is known as “<a href="https://www.netnod.se/ix/what-is-peering">peering</a>”. Cloudflare is one of the <a href="https://bgp.he.net/report/exchanges#_participants">most peered networks</a> globally. We have peering agreements with ISPs and other networks in 330 cities and across all major <a href="https://www.cloudflare.com/learning/cdn/glossary/internet-exchange-point-ixp/">Internet Exchanges (IXs)</a>. Interested parties can register to <a href="https://www.cloudflare.com/partners/peering-portal/">peer with us</a> anytime, or directly connect to our network with a link through a <a href="https://developers.cloudflare.com/network-interconnect/pni-and-peering/">private network interconnect (PNI)</a>.</p></li><li><p><b>IP transit</b>: A paid service that allows traffic to cross or "transit" somebody else's network, typically connecting a smaller Internet service provider (ISP) to the larger Internet. Think of it as paying a toll to access a private highway with your car.</p></li></ol><p>The backbone is a dedicated high-capacity optical fiber network that moves traffic between Cloudflare’s global data centers, where we interconnect with other networks using these above-mentioned traffic exchange methods. 
It enables data transfers that are more reliable than over the public Internet. For connectivity within a city, and for long-distance connections, we manage our own dark fiber or lease wavelengths using Dense Wavelength Division Multiplexing (DWDM). DWDM is a fiber optic technology that enhances network capacity by transmitting multiple data streams simultaneously on different wavelengths of light within the same fiber. It’s like a highway with multiple lanes, allowing more cars to travel on the same road. We buy and lease these services from our global carrier partners all around the world.</p>
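<p>To see why DWDM multiplies capacity, a bit of back-of-the-envelope arithmetic helps. The channel count and per-wavelength rate below are illustrative assumptions, not Cloudflare's actual deployment figures:</p>

```python
# Illustrative DWDM arithmetic: total fiber capacity is simply the number
# of wavelength channels times the data rate each channel carries.
channels = 64                # wavelengths on one fiber pair (assumed)
gbps_per_channel = 400       # per-wavelength transponder rate (assumed)

capacity_tbps = channels * gbps_per_channel / 1000
assert capacity_tbps == 25.6  # dozens of terabits over a single fiber pair
```

<p>Upgrading either the transponder rate or the channel count raises capacity without laying new fiber, which is what makes leasing wavelengths from carrier partners so flexible.</p>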
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RgjDtW5LehGZEYXey4AQH/cfef08965313f67c84a052e0541fc42b/BLOG-2432-3.png" />
          </figure><p></p>
    <div>
      <h3>Backbone operations and benefits</h3>
      <a href="#backbone-operations-and-benefits">
        
      </a>
    </div>
    <p>Operating a global backbone is challenging, which is why many competitors don’t do it. We take on this challenge for two key reasons: traffic routing control and cost-effectiveness.</p><p>With IP transit, we rely on our transit partners to carry traffic from Cloudflare to the ultimate destination network, introducing unnecessary third-party reliance. In contrast, our backbone gives us full control over routing of both internal and external traffic, allowing us to manage it more effectively. This control is crucial because it lets us optimize traffic routes, usually resulting in the lowest latency paths. Furthermore, serving large traffic volumes through the backbone is, on average, more cost-effective than using IP transit. This is why we are doubling down on backbone capacity in cities such as Frankfurt, London, Amsterdam, Paris, and Marseille, where we see continuous traffic growth and where connectivity solutions are widely available and competitively priced.</p><p>Our backbone serves both internal and external traffic. Internal traffic includes customer traffic using our security or performance products and traffic from Cloudflare's internal systems that shift data between our data centers. <a href="http://blog.cloudflare.com/introducing-regional-tiered-cache">Tiered caching</a>, for example, optimizes our caching delivery by dividing our data centers into a hierarchy of lower tiers and upper tiers. If lower-tier data centers don’t have the content, they request it from the upper tiers. If the upper tiers don’t have it either, they then request it from the origin server. This process reduces origin server requests and improves cache efficiency. Using our backbone to transport the cached content between lower and upper-tier data centers and the origin is often the most cost-effective method, considering the scale of our network. 
<a href="https://www.cloudflare.com/network-services/products/magic-transit/">Magic Transit</a> is another example, where we attract traffic, by means of BGP anycast, to the Cloudflare data center closest to the end user and apply our DDoS mitigation. Our backbone transports the clean traffic to our customer’s data center, which they connect to through a <a href="http://blog.cloudflare.com/cloudflare-network-interconnect">Cloudflare Network Interconnect (CNI)</a>.</p><p>External traffic that we carry on our backbone can be traffic from other origin providers like AWS, Oracle, Alibaba, Google Cloud Platform, or Azure, to name a few. The origin responses from these cloud providers are transported through peering points and our backbone to the Cloudflare data center closest to our customer. By leveraging our backbone, we have more control over how we backhaul this traffic throughout our network, which results in greater reliability, better performance, and less dependency on the public Internet.</p><p>This interconnection between public clouds, offices, and the Internet with a controlled layer of performance, security, programmability, and visibility running on our global backbone is our <a href="http://blog.cloudflare.com/welcome-to-connectivity-cloud">Connectivity Cloud</a>.</p>
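<p>The tiered-cache escalation described above (lower tier, then upper tier, then origin) can be sketched as a simple lookup cascade. This is a toy model with hypothetical paths and data, not Cloudflare's implementation:</p>

```python
# Toy model of tiered caching: a miss at a lower-tier data center is
# escalated to an upper tier before falling back to the origin server.
lower_tier = {}                            # cache at a lower-tier data center
upper_tier = {"/logo.png": b"cached-bytes"}  # cache at an upper-tier data center

def fetch_origin(path):
    """Stand-in for a fetch from the customer's origin server."""
    return b"origin-bytes"

def tiered_lookup(path):
    if path in lower_tier:
        return lower_tier[path], "lower-tier hit"
    if path in upper_tier:
        body = upper_tier[path]
        lower_tier[path] = body            # populate the lower tier on the way back
        return body, "upper-tier hit"
    body = fetch_origin(path)              # only double misses reach the origin
    upper_tier[path] = body
    lower_tier[path] = body
    return body, "origin fetch"
```

<p>After one upper-tier hit or origin fetch, subsequent requests for the same asset are served at the lower tier, which is how the hierarchy reduces origin requests; in Cloudflare's case, the lower-to-upper-tier hop rides the backbone.</p>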
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Fk6k5NOgfOM3qpK0z3wb0/2fe9631dbe6b2dfc6b3c3cd0156f293e/Screenshot_2024-08-28_at_3.21.50_PM.png" />
          </figure><p><sub><i>This map is a simplification of our current backbone network and does not show all paths</i></sub></p><p></p>
    <div>
      <h3>Expanding our network</h3>
      <a href="#expanding-our-network">
        
      </a>
    </div>
    <p>As mentioned in the introduction, we have increased our backbone capacity (Tbps) by more than 500% since 2021. With the addition of subsea cable capacity to Africa, we achieved a big milestone in 2023 by completing our global backbone ring. It now reaches six continents through terrestrial fiber and subsea cables.</p><p>Building out our backbone within regions where Internet infrastructure is less developed compared to markets like Central Europe or the US has been a key strategy for our latest network expansions. We have a shared goal with regional ISP partners to keep our data flow localized and as close as possible to the end user. Traffic often takes inefficient routes outside the region due to the lack of sufficient local peering and regional infrastructure. This phenomenon, known as traffic tromboning, occurs when data is routed through more cost-effective international routes and existing peering agreements.</p><p>Our regional backbone investments in countries like India or Turkey aim to reduce the need for such inefficient routing. With our own in-region backbone, traffic can be directly routed between in-country Cloudflare data centers, such as from Mumbai to New Delhi to Chennai, reducing latency, increasing reliability, and helping us to provide the same level of service quality as in more developed markets. We can ensure that data stays local, supporting our Data Localization Suite (<a href="https://www.cloudflare.com/data-localization/">DLS</a>), which helps businesses comply with regional data privacy laws by controlling where their data is stored and processed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WCNB78y1jHHsid46pBZOo/e950ced1e510cb8caeea0961c43ea8a0/BLOG-2432-5.png" />
          </figure><p></p>
    <div>
      <h3>Improved latency and performance</h3>
      <a href="#improved-latency-and-performance">
        
      </a>
    </div>
    <p>This strategic expansion has not only extended our global reach but has also significantly improved our overall latency. One illustration of this is that since the deployment of our backbone between Lisbon and Johannesburg, we have seen a major performance improvement for users in Johannesburg. Customers benefiting from this improved latency include, for example, a financial institution running its APIs through us for real-time trading, where milliseconds can impact trades, or our <a href="https://www.cloudflare.com/network-services/products/magic-wan/">Magic WAN</a> users, for whom we facilitate site-to-site connectivity between their branch offices.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1o0H8BNLf5ca8BBx38Q5Ee/5b22f7c0ad1c5c49a67bc5149763e81d/BLOG-2432-6.png" />
          </figure><p></p><p>The table above shows an example where we measured the round-trip time (RTT) for an uncached origin fetch, from an end-user in Johannesburg to various origin locations, comparing our backbone and the public Internet. By carrying the origin request over our backbone, as opposed to IP transit or peering, local users in Johannesburg get their content up to 22% faster. By using our own backbone to long-haul the traffic to its final destination, we are in complete control of the path and performance. This improvement in latency varies by location, but consistently demonstrates the superiority of our backbone infrastructure in delivering high performance connectivity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZEEZJERWQ2UB1sdTjWUtM/f90b11507ab24edbf84e9b4cfb9b1155/BLOG-2432-7.png" />
          </figure><p></p>
    <div>
      <h3>Traffic control</h3>
      <a href="#traffic-control">
        
      </a>
    </div>
    <p>Consider a navigation system using 1) GPS to identify the route and 2) a highway toll pass that is valid until your final destination and allows you to drive straight through toll stations without stopping. Our backbone works quite similarly.</p><p>Our global backbone is built upon two key pillars. The first is BGP (<a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">Border Gateway Protocol</a>), the routing protocol for the Internet, and the second is Segment Routing MPLS (<a href="https://www.cloudflare.com/learning/network-layer/what-is-mpls/">Multiprotocol label switching</a>), a technique for steering traffic across predefined forwarding paths in an IP network. By default, Segment Routing provides end-to-end encapsulation from ingress to egress routers where the intermediate nodes execute no route lookup. Instead, they forward traffic across an end-to-end virtual circuit, or tunnel, called a label-switched path. Once traffic is put on a label-switched path, it cannot detour onto the public Internet and must continue on the predetermined route across Cloudflare’s backbone. This is nothing new, as many networks will even run a “BGP Free Core” where all the route intelligence is carried at the edge of the network, and intermediate nodes only participate in forwarding from ingress to egress.</p><p>While leveraging Segment Routing Traffic Engineering (SR-TE) in our backbone, we can automatically select paths between our data centers that are optimized for latency and performance. Sometimes the “shortest path” in terms of routing protocol cost is not the lowest latency or highest performance path.</p>
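<p>The label-switched-path idea can be illustrated with a toy model: the ingress router encodes the whole path as a stack of segment IDs (SIDs), and every node along the way forwards purely on labels, never consulting an IP routing table. The node names and SID values below are hypothetical, and real SR-MPLS forwarding is of course done in router hardware, not Python:</p>

```python
# Hypothetical segment IDs (SIDs) for backbone routers; in SR-MPLS these
# are MPLS labels advertised through the IGP.
SIDS = {"AMS": 16001, "FRA": 16002, "MRS": 16003, "SIN": 16004}
NODE_BY_SID = {sid: node for node, sid in SIDS.items()}

def ingress_encapsulate(path):
    """Ingress router: push the segment list (one label per hop)."""
    return [SIDS[node] for node in path]

def forward(label_stack):
    """Walk the label-switched path: each node reads the top label and
    hands the packet to the next segment. No IP route lookup occurs, so
    the packet cannot detour onto the public Internet mid-path."""
    return [NODE_BY_SID[sid] for sid in label_stack]
```

<p>With traffic engineering (SR-TE), the ingress can deliberately pick a segment list like AMS → FRA → MRS → SIN even when the IGP's shortest path differs, which is how a lower-latency but "longer" route can be preferred.</p>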
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QettBytPdJxacwVLVHYFN/de95a8e5a67514e64931fbe4d26967b6/BLOG-2432-8.png" />
          </figure>
    <div>
      <h3>Supercharged: Argo and the global backbone</h3>
      <a href="#supercharged-argo-and-the-global-backbone">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/">Argo Smart Routing</a> is a service that uses Cloudflare’s portfolio of backbone, transit, and peering connectivity to find the optimal path between the data center where a user’s request lands and your back-end origin server. Argo may forward a request from one Cloudflare data center to another on the way to an origin if the performance would improve by doing so. <a href="http://blog.cloudflare.com/orpheus-saves-internet-requests-while-maintaining-speed">Orpheus</a> is the counterpart to Argo, and routes around degraded paths for all customer origin requests free of charge. Orpheus is able to analyze network conditions in real time and actively avoid reachability failures. Customers with Argo enabled get optimal performance for requests from Cloudflare data centers to their origins, while Orpheus provides error self-healing for all customers universally. By mixing our global backbone using Segment Routing as an underlay with <a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/">Argo Smart Routing</a> and Orpheus as our connectivity overlay, we are able to transport critical customer traffic along the most optimized paths that we have available.</p><p>So how exactly does our global backbone fit together with Argo Smart Routing? 
<a href="http://blog.cloudflare.com/argo-and-the-cloudflare-global-private-backbone">Argo Transit Selection</a> is an extension of Argo Smart Routing where the lowest latency path between Cloudflare data center hops is explicitly selected and used to forward customer origin requests. The lowest latency path will often be our global backbone, as it is a more dedicated and private means of connectivity than third-party transit networks.</p><p>Consider a multinational Dutch pharmaceutical company that relies on Cloudflare's network and our <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/">SASE solution</a> to connect their global offices, research centers, and remote employees. Their Asian branch offices depend on Cloudflare's security solutions and network for secure access to important data held in their central data centers. In case of a cable cut between regions, our network would automatically look for the best alternative route between them so that business impact is limited.</p><p>Argo measures every potential combination of the different provider paths, including our own backbone, as an option for reaching origins with smart routing. Because of our vast interconnection with so many networks, and our global private backbone, Argo is able to identify the most performant network path for requests. The backbone is consistently one of the lowest latency paths for Argo to choose from.</p><p>In addition to high performance, we care greatly about network reliability for our customers. This means we need to be as resilient as possible to fiber cuts and third-party transit provider issues. 
During a disruption of the <a href="https://en.wikipedia.org/wiki/AAE-1">AAE-1</a> (<a href="https://www.submarinecablemap.com/submarine-cable/asia-africa-europe-1-aae-1">Asia Africa Europe-1</a>) submarine cable, this is what Argo saw between Singapore and Amsterdam across some of our transit provider paths vs. the backbone.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66CGBePnLzuLRuTErvf8Cr/813b4b60a95935491e967214851e5a04/BLOG-2432-9.png" />
          </figure><p>The large spike (purple line) shows a latency increase on one of our third-party IP transit provider paths due to congestion, which was eventually resolved, likely following traffic engineering within the provider’s network. We saw a smaller but still noticeable latency increase (yellow line) over other transit networks. The bottom (green) line on the graph is our backbone, where the round-trip time remains more or less flat throughout the event, thanks to our diverse backbone connectivity between Asia and Europe. Throughout the fiber cut, we remained stable at around 200ms between Amsterdam and Singapore. There was no noticeable network hiccup as was seen on the transit provider paths, so Argo actively leveraged the backbone for optimal performance.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1A8CdaGq8P2hF3DtIs9dQI/a10fdf3af9de917fb0036d38eace9905/BLOG-2432-10.png" />
          </figure>
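<p>The path-selection idea behind Argo's choice in the incident above can be sketched simply: measure RTT over every candidate path between two data centers and prefer the lowest-latency one, which is often the private backbone. The path names and millisecond figures below are hypothetical, loosely modeled on the Singapore-to-Amsterdam graph, and the real system weighs far more signals than a median RTT:</p>

```python
# Sketch of latency-based path selection (hypothetical data, not Argo's
# actual algorithm): pick the candidate path with the lowest median RTT.
def best_path(rtt_samples):
    """Return the path name whose RTT samples (ms) have the lowest median."""
    def median(xs):
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    return min(rtt_samples, key=lambda path: median(rtt_samples[path]))

# Illustrative measurements during a cable cut between two regions.
samples = {
    "transit-A": [310, 295, 330],   # congested third-party path
    "transit-B": [240, 250, 245],
    "backbone":  [201, 199, 200],   # stays flat through the event
}
assert best_path(samples) == "backbone"
```

<p>Because the decision is re-evaluated continuously from fresh measurements, a path that degrades during an incident is routed around automatically, as the graph above shows.</p>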
    <div>
      <h3>Call to action</h3>
      <a href="#call-to-action">
        
      </a>
    </div>
    <p>As Argo improves performance in our network, Cloudflare Network Interconnects (<a href="https://developers.cloudflare.com/network-interconnect/">CNIs</a>) optimize getting onto it. We encourage our Enterprise customers to use our free CNIs as on-ramps onto our network whenever practical. In this way, you can fully leverage our network, including our robust backbone, and increase overall performance for every product within your Cloudflare Connectivity Cloud. In the end, our global network is our main product, and our backbone plays a critical role in it. This is how we continue to help build a better Internet: by improving our services for everybody, everywhere.</p><p>If you want to be part of our mission, join us as a Cloudflare network on-ramp partner to offer secure and reliable connectivity to your customers by integrating directly with us. Learn more about our on-ramp partnerships and how they can benefit your business <a href="https://www.cloudflare.com/network-onramp-partners/">here</a>.</p>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Anycast]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Athenian Project]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">WiHZr8Fb6WzdVjo0egsWW</guid>
            <dc:creator>Shozo Moritz Takaya</dc:creator>
            <dc:creator>Bryton Herdes</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatically replacing polyfill.io links with Cloudflare’s mirror for a safer Internet]]></title>
            <link>https://blog.cloudflare.com/automatically-replacing-polyfill-io-links-with-cloudflares-mirror-for-a-safer-internet/</link>
            <pubDate>Wed, 26 Jun 2024 20:23:41 GMT</pubDate>
            <description><![CDATA[ polyfill.io, a popular JavaScript library service, can no longer be trusted and should be removed from websites ]]></description>
            <content:encoded><![CDATA[ <p></p><p>polyfill.io, a popular JavaScript library service, can no longer be trusted and should be removed from websites.</p><p><a href="https://sansec.io/research/polyfill-supply-chain-attack">Multiple reports</a>, corroborated with data seen by our own client-side security system, <a href="https://developers.cloudflare.com/page-shield/">Page Shield</a>, have shown that the polyfill service was being used, and could be used again, to inject malicious JavaScript code into users’ browsers. This is a real threat to the Internet at large given the popularity of this library.</p><p>We have, over the last 24 hours, released an automatic JavaScript URL rewriting service that will rewrite any link to polyfill.io found in a website proxied by Cloudflare <a href="https://cdnjs.cloudflare.com/polyfill/">to a link to our mirror under cdnjs</a>. This will avoid breaking site functionality while mitigating the risk of a supply chain attack.</p><p>Any website on the free plan has this feature automatically activated now. Websites on any paid plan can turn on this feature with a single click.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5R0ht5q4fAwm8gm3a2Xe5U/6b3ec28498e76ff75e37b58f3673e49a/image1-22.png" />
            
            </figure><p>You can find this new feature under <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings">Security ⇒ Settings</a> on any zone using Cloudflare.</p><p>Contrary to what is stated on the polyfill.io website, Cloudflare has never recommended the polyfill.io service or authorized their use of Cloudflare’s name on their website. We have asked them to remove the false statement, and they have, so far, ignored our requests. This is yet another warning sign that they cannot be trusted.</p><p>If you are not using Cloudflare today, we still highly recommend that you remove any use of polyfill.io and/or find an alternative solution. And, while the automatic replacement function will handle most cases, the best practice is to remove polyfill.io from your projects and replace it with a secure alternative mirror like Cloudflare’s even if you are a customer.</p><p>You can do this by searching your code repositories for instances of polyfill.io and replacing it with <a href="https://cdnjs.cloudflare.com/polyfill/">cdnjs.cloudflare.com/polyfill/</a> (Cloudflare’s mirror). This is a non-breaking change as the two URLs will serve the same polyfill content. All website owners, regardless of the website using Cloudflare, should do this now.</p>
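<p>As a rough illustration of that search-and-replace step, here is a minimal Python sketch (a hypothetical helper, not a Cloudflare tool; the file glob and regex are assumptions) that rewrites polyfill.io references in a directory of HTML files to the cdnjs mirror:</p>

```python
import pathlib
import re

# Match links to polyfill.io or cdn.polyfill.io followed by a path;
# the mirror prefix is Cloudflare's cdnjs URL from this post.
OLD = re.compile(r"https?://(cdn\.)?polyfill\.io(?=/)")
NEW = "https://cdnjs.cloudflare.com/polyfill"

def replace_in_tree(root: str) -> int:
    """Rewrite polyfill.io references under `root`; return the number of files changed."""
    changed = 0
    for path in pathlib.Path(root).rglob("*.html"):
        text = path.read_text(encoding="utf-8")
        updated = OLD.sub(NEW, text)
        if updated != text:
            path.write_text(updated, encoding="utf-8")
            changed += 1
    return changed
```

<p>Because the mirror serves the same content at the same paths, the rewrite is non-breaking: version paths and query strings are preserved unchanged.</p>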
    <div>
      <h2>How we came to this decision</h2>
      <a href="#how-we-came-to-this-decision">
        
      </a>
    </div>
    <p>Back in February, the domain polyfill.io, which hosts a popular JavaScript library, was sold to a new owner: Funnull, a relatively unknown company. <a href="/polyfill-io-now-available-on-cdnjs-reduce-your-supply-chain-risk">At the time, we were concerned</a> that this created a supply chain risk. This led us to spin up our own mirror of the polyfill.io code hosted under cdnjs, a JavaScript library repository sponsored by Cloudflare.</p><p>The new owner was unknown in the industry and did not have a track record of trust to administer a project such as polyfill.io. The concern, <a href="https://x.com/triblondon/status/1761852117579427975">highlighted even by the original author</a>, was that if they were to abuse polyfill.io by injecting additional code into the library, it could cause far-reaching security problems on the Internet, affecting several hundred thousand websites. Or it could be used to perform a targeted supply-chain attack against specific websites.</p><p>Unfortunately, that worry came true on June 25, 2024, when the polyfill.io service was used to inject nefarious code that, under certain circumstances, redirected users to other websites.</p><p>We have taken the exceptional step of using our ability to modify HTML on the fly to replace references to the polyfill.io CDN in our customers’ websites with links to our own, safe mirror created back in February.</p><p>In the meantime, additional threat feed providers have also taken the decision to <a href="https://github.com/uBlockOrigin/uAssets/commit/91dfc54aed0f0aa514c1a481c3e63ea16da94c03">flag the domain as malicious</a>. We have not outright blocked the domain through any of the mechanisms available to us because we are concerned it could cause widespread web outages, given how broadly polyfill.io is used, with some estimates indicating <a href="https://w3techs.com/technologies/details/js-polyfillio">usage on nearly 4% of all websites</a>.</p>
    <div>
      <h3>Corroborating data with Page Shield</h3>
      <a href="#corroborating-data-with-page-shield">
        
      </a>
    </div>
    <p>The original report indicates that malicious code was injected that, under certain circumstances, would redirect users to betting sites. It did this by loading additional JavaScript that performed the redirect, served from a set of additional domains that can be considered Indicators of Compromise (IoCs):</p>
            <pre><code>https://www.googie-anaiytics.com/analytics.js
https://www.googie-anaiytics.com/html/checkcachehw.js
https://www.googie-anaiytics.com/gtags.js
https://www.googie-anaiytics.com/keywords/vn-keyword.json
https://www.googie-anaiytics.com/webs-1.0.1.js
https://www.googie-anaiytics.com/analytics.js
https://www.googie-anaiytics.com/webs-1.0.2.js
https://www.googie-anaiytics.com/ga.js
https://www.googie-anaiytics.com/web-1.0.1.js
https://www.googie-anaiytics.com/web.js
https://www.googie-anaiytics.com/collect.js
https://kuurza.com/redirect?from=bitget</code></pre>
            <p>(Note the intentional misspelling of Google Analytics.)</p><p>Page Shield, our client-side security solution, is available on all paid plans. When turned on, it collects information about JavaScript files loaded by end-user browsers accessing your website.</p><p>By looking at the database of detected JavaScript files, we immediately found matches with the IoCs provided above, starting as far back as 2024-06-08 15:23:51 (the first-seen timestamp on a Page Shield-detected JavaScript file). This was a clear indication that the malicious activity associated with polyfill.io was ongoing.</p>
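<p>To make the check concrete, here is a minimal Python sketch (a hypothetical helper, not Page Shield’s actual detection logic) that tests whether a detected script URL is served from one of the IoC hosts listed above:</p>

```python
from urllib.parse import urlsplit

# Hostnames drawn from the IoC list above; illustrative only.
IOC_HOSTS = {"www.googie-anaiytics.com", "kuurza.com"}

def matches_ioc(script_url: str) -> bool:
    """Return True if a detected script URL is served from a known-bad host."""
    return urlsplit(script_url).hostname in IOC_HOSTS
```

<p>Matching on the parsed hostname, rather than a substring, avoids false positives against the legitimate Google Analytics domains that these IoCs imitate.</p>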
    <div>
      <h3>Replacing insecure JavaScript links to polyfill.io</h3>
      <a href="#replacing-insecure-javascript-links-to-polyfill-io">
        
      </a>
    </div>
    <p>To achieve performant HTML rewriting, we need to make blazing-fast HTML alterations as responses stream through Cloudflare’s network. This has been made possible by leveraging <a href="/rust-nginx-module">ROFL (Response Overseer for FL)</a>. ROFL powers various Cloudflare products that need to alter HTML as it streams, such as <a href="https://developers.cloudflare.com/speed/optimization/content/fonts/">Cloudflare Fonts</a>, <a href="https://developers.cloudflare.com/waf/tools/scrape-shield/email-address-obfuscation/">Email Obfuscation</a>, and <a href="https://developers.cloudflare.com/speed/optimization/content/rocket-loader/">Rocket Loader</a>.</p><p>ROFL is developed entirely in Rust. The memory-safety features of Rust are indispensable for ensuring protection against memory leaks while processing a staggering volume of requests, measuring in the millions per second. Rust's compiled nature allows us to finely optimize our code for specific hardware configurations, delivering performance gains compared to interpreted languages.</p><p>The performance of ROFL allows us to rewrite HTML on the fly and modify the polyfill.io links quickly, safely, and efficiently. This speed helps us minimize the latency added by processing the HTML file.</p><p>If the feature is turned on, for any HTTP response with an HTML Content-Type, we parse all JavaScript script tag source attributes. If any are found linking to polyfill.io, we rewrite the src attribute to link to our mirror instead. We map to the correct version of the polyfill service, while the query string is left untouched.</p><p>The logic will not activate if a Content Security Policy (CSP) header is found in the response. This ensures we don’t replace a link in a way that violates the site’s CSP policy and potentially breaks the website.</p>
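<p>ROFL itself is a Rust streaming rewriter, but the URL mapping it applies can be sketched in a few lines. The following Python function is an illustration, not the production code; the hostname list is an assumption, and the <code>/polyfill</code> prefix comes from the mirror URL given in this post:</p>

```python
from urllib.parse import urlsplit, urlunsplit

MIRROR_HOST = "cdnjs.cloudflare.com"

def rewrite_polyfill_src(src: str) -> str:
    """Rewrite a script src pointing at polyfill.io to Cloudflare's cdnjs
    mirror, preserving the version path and the query string."""
    parts = urlsplit(src)
    if parts.hostname not in ("polyfill.io", "cdn.polyfill.io"):
        return src  # unrelated scripts are left untouched
    return urlunsplit(("https", MIRROR_HOST, "/polyfill" + parts.path, parts.query, ""))
```

<p>For example, <code>https://cdn.polyfill.io/v3/polyfill.min.js?features=default</code> maps to <code>https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js?features=default</code>, so the same bundle version and feature set are served.</p>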
    <div>
      <h3>Default on for free customers, optional for everyone else</h3>
      <a href="#default-on-for-free-customers-optional-for-everyone-else">
        
      </a>
    </div>
    <p>Cloudflare proxies millions of websites, and a large portion of these sites are on our free plan. Free plan customers tend to have simpler applications and often lack the resources to update and react quickly to security concerns. We therefore decided to turn on the feature by default for sites on our free plan: the likelihood of causing issues is low, and doing so helps keep safe a very large portion of the applications using polyfill.io.</p><p>Paid plan customers, on the other hand, have more complex applications and react more quickly to security notices. We are confident that most paid customers using polyfill.io and Cloudflare will appreciate the ability to virtually patch the issue with a single click, while controlling when to do so.</p><p>All customers can turn off the feature at any time.</p><p>This isn’t the first time we’ve decided a security problem was so widespread and serious that we’d enable protection for all customers, regardless of whether they were paying customers or not. Back in 2014, we enabled <a href="/shellshock-protection-enabled-for-all-customers">Shellshock protection</a> for everyone. In 2021, when the log4j vulnerability was disclosed, <a href="/cve-2021-44228-log4j-rce-0-day-mitigation/">we rolled out protection</a> for all customers.</p>
    <div>
      <h2>Do not use polyfill.io</h2>
      <a href="#do-not-use-polyfill-io">
        
      </a>
    </div>
    <p>If you are using Cloudflare, you can remove polyfill.io with a single click on the Cloudflare dashboard by heading over to <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/settings">your zone ⇒ Security ⇒ Settings</a>. If you are a free customer, the rewrite is automatically active. This feature, we hope, will help you quickly patch the issue.</p><p>Nonetheless, you should ultimately search your code repositories for instances of polyfill.io and replace them with an alternative provider, such as Cloudflare’s secure mirror under cdnjs (<a href="https://cdnjs.cloudflare.com/polyfill/">https://cdnjs.cloudflare.com/polyfill/</a>). Website owners who are not using Cloudflare should also perform these steps.</p><p>The underlying bundle links you should use are:</p><p>For minified: <a href="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js">https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js</a>
For unminified: <a href="https://cdnjs.cloudflare.com/polyfill/v3/polyfill.js">https://cdnjs.cloudflare.com/polyfill/v3/polyfill.js</a></p><p>Doing this ensures your website is no longer relying on polyfill.io.</p> ]]></content:encoded>
            <category><![CDATA[CDNJS]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Supply Chain Attacks]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">3NHy1gOkql57RbBcdjWs5g</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
            <dc:creator>Michael Tremante</dc:creator>
        </item>
        <item>
            <title><![CDATA[Patrick Finn: why I joined Cloudflare as VP Sales for the Americas]]></title>
            <link>https://blog.cloudflare.com/patrick-finn/</link>
            <pubDate>Thu, 20 Jun 2024 13:00:44 GMT</pubDate>
            <description><![CDATA[ Patrick S. Finn is joining Cloudflare as Vice President of Sales in the US, Canada, and Latin America ]]></description>
            <content:encoded><![CDATA[ <p></p><p>I’m delighted to be joining Cloudflare as Vice President of Sales in the US, Canada, and Latin America.</p><p>I’ve had the privilege of leading sales for some of the world’s most iconic tech companies, including IBM and Cisco. During my career I’ve led international teams numbering in the thousands and driven revenue in the billions of dollars while serving some of the world's largest enterprise customers. I’ve seen first-hand the evolution of technology and what it can achieve for businesses, from robotics, automation, and data analytics, to cloud computing, cybersecurity, and AI.</p><p>I firmly believe Cloudflare is well on its way to being one of the next iconic tech companies.</p>
    <div>
      <h3>Why Cloudflare</h3>
      <a href="#why-cloudflare">
        
      </a>
    </div>
    <p>Cloudflare has a unique opportunity to help businesses navigate an enduring wave of technological change. There are few companies in the world that operate in the three most exciting fields of innovation that will continue to shape our world in the coming years: cloud computing, AI, and cybersecurity. Cloudflare is one of those companies. When I was approached for this role, I spoke to a wide range of connections across the financial sector, private companies, and government. The feedback was unanimous that Cloudflare is poised on the edge of exhilarating growth.</p>
    <div>
      <h3>Driving predictable, profitable revenue</h3>
      <a href="#driving-predictable-profitable-revenue">
        
      </a>
    </div>
    <p>I was fortunate to join Cisco two years after its annual revenue passed the $1 billion mark and had the privilege of helping scale the business to more than $49 billion in revenue the year I left. Cloudflare passed the $1 billion milestone just last year, and I see the same potential for growth here as I saw at Cisco.</p><p>Cloudflare's global sales organization is growing. I’m excited to help accelerate that process in a way that delivers recurring revenue for the business while ensuring we retain a very high bar in terms of the talent we bring onto the team. My experience leading complex, cross-functional sales organizations within large global companies has taught me a great deal about the common traits among highly effective sales functions.</p><p>The groups of individuals that come together to make true teams are the ones that successfully focus on a unifying goal and develop skills like communication, attitude, process, organization, consistency, collaboration, partnership, and accountability.  These teams embrace diversity and bring out of each other the best expertise, creativity, and skills, making the team stronger and keeping the goal in focus.</p>
    <div>
      <h3>Making our customers our north star</h3>
      <a href="#making-our-customers-our-north-star">
        
      </a>
    </div>
    <p>We will realize the opportunity ahead of us only if our customers remain our north star. Today, the Americas represent more than half of Cloudflare’s revenue worldwide and are home to some of our largest and most strategic customers – both in the private and public sectors – including 30% of the Fortune 1000. Brands from Zendesk to Shopify and from Colgate-Palmolive to Mars rely on Cloudflare to operate their businesses in a fast, secure, and reliable way.</p><p>Whatever the technology, there are three common fundamentals I’ve found essential to creating value for customers: being the expert on their challenges, understanding how to pick the right combination of products, services, and solutions from those available, and knowing your competition.</p><p>Cloudflare already has an incredible and growing range of products and services that are helping millions of individuals and organizations maximize the opportunities presented by cloud computing and generative AI, all while staying safe from the threat of cyberattacks.</p>
    <div>
      <h3>What helping to build a better Internet means to me</h3>
      <a href="#what-helping-to-build-a-better-internet-means-to-me">
        
      </a>
    </div>
    <p>If it were needed, one additional deciding factor behind my excitement in joining Cloudflare is its ambitious mission to help build a better Internet. As a father, I want the Internet to be a safe and valuable resource for my family and friends and for generations to come. I don’t want my daughter to have to worry about her personal data and privacy as she’s buying Billie Eilish concert tickets online (and, yes, I’m going too).</p><p>Today Cloudflare’s connectivity cloud protects nearly 20% of all websites online and stops 209 billion cyber attacks daily. In addition to its growing customer base, Cloudflare is living up to its mission by offering its services for free to millions more <a href="https://www.cloudflare.com/personal/">individuals</a> and <a href="https://www.cloudflare.com/small-business/">small businesses</a>, including the most vulnerable voices online through its <a href="https://www.cloudflare.com/galileo/">Project Galileo</a> initiative.</p><p>The combination of a strong mission, genuine values, a great team, and incredible technology isn’t a given in every company, but is evident at Cloudflare. I’m excited to play a part as Cloudflare continues to scale its business and help build a better Internet for everyone.</p><p>If you’re interested in learning more about what Cloudflare can do for your organization, please get in touch <a href="https://www.cloudflare.com/plans/enterprise/contact/">here</a>. If you’re an ambitious, talented sales professional looking for your next challenging and rewarding career move, check out our open positions <a href="https://www.cloudflare.com/careers/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Careers]]></category>
            <category><![CDATA[Customer Success]]></category>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[USA]]></category>
            <category><![CDATA[Canada]]></category>
            <category><![CDATA[Mexico]]></category>
            <guid isPermaLink="false">677tIhUTTGxWGakLrIlsOJ</guid>
            <dc:creator>Patrick S. Finn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Protecting vulnerable communities for 10 years with Project Galileo]]></title>
            <link>https://blog.cloudflare.com/galileo10anniversaryradardashboard/</link>
            <pubDate>Thu, 06 Jun 2024 10:00:23 GMT</pubDate>
            <description><![CDATA[ In celebration of Project Galileo's 10th anniversary, we want to give you a snapshot of what organizations that work in the public interest experience on an everyday basis when it comes to keeping ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In celebration of Project Galileo's 10th anniversary, we want to give you a snapshot of what organizations that work in the public interest experience on an everyday basis when it comes to keeping their websites online. With this, we are publishing the <a href="https://radar.cloudflare.com/reports/project-galileo-10th-anniv">Project Galileo 10th anniversary Radar dashboard</a> with the aim of providing valuable insights to researchers, civil society members, and targeted organizations, equipping them with effective strategies for protecting both internal information and their public online presence.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KCnZyHypSGrxsvo3fx6zI/714a7b5efb439ffd3be7f7aad3f87cb4/image8.png" />
            
            </figure>
    <div>
      <h2>Key Statistics</h2>
      <a href="#key-statistics">
        
      </a>
    </div>
    <ul><li><p>Under Project Galileo, we protect more than 2,600 Internet properties in 111 countries.</p></li><li><p>Between May 1, 2023, and March 31, 2024, Cloudflare blocked 31.93 billion cyber threats against organizations protected under Project Galileo. This is an average of nearly 95.89 million cyber attacks per day over the 11-month period.</p></li><li><p>When looking at the different organizational categories, journalism and media organizations were the most attacked, accounting for 34% of all attacks targeting the Internet properties protected under the Project in the last year, followed by human rights organizations at 17%.</p></li><li><p>On October 11, 2023, Cloudflare detected one of the largest attacks we’ve seen against an organization under Project Galileo, targeting a prominent independent journalism website covering stories in Russia and across Eastern Europe. We identified a DDoS attack that peaked at 7 million requests per second, with an attack duration of 7 minutes. In total, 1.9 billion DDoS requests targeting the attacked organization were mitigated that day.</p></li><li><p>We saw two attacks against an organization that manages vital Internet infrastructure in the Middle East. We mitigated 177 million DDoS requests targeting the organization over a three-hour period in October 2023. The second attack in December 2023 reached 42.6 million requests that were mitigated over a two-hour period.</p></li><li><p>We observed an attack targeting <a href="https://lgbt.foundation/">LGBT Foundation</a>, a UK-based LGBTQ+ organization, during the beginning of Pride Month in June 2023. Cloudflare mitigated 144.7 million requests to this organization on June 2, 2023. In addition to this spike in June, we also saw another attack on August 26, 2023, which coincided with Manchester Pride. This second attack peaked at 1.46 million requests per second before finally subsiding on August 29.</p></li></ul><p>This year, we broke down the dashboard into several sections:</p><ul><li><p>Global civil society and human rights organizations</p></li><li><p>Global journalism and media organizations</p></li><li><p>Organizations based in Ukraine</p></li><li><p>Organizations in Israel and Palestine</p></li><li><p>Voting rights organizations based in the United States</p></li></ul><p>Check out the full report <a href="https://radar.cloudflare.com/reports/project-galileo-10th-anniv">here</a>.</p>
    <div>
      <h2>Highlights of the Report</h2>
      <a href="#highlights-of-the-report">
        
      </a>
    </div>
    
    <div>
      <h3>Protecting free speech and a free press</h3>
      <a href="#protecting-free-speech-and-a-free-press">
        
      </a>
    </div>
    <p>The number of journalists imprisoned worldwide has <a href="https://www.statista.com/chart/16414/jailed-journalists-timeline/">grown</a> in recent years. Reporters are increasingly at risk of being <a href="https://au.news.yahoo.com/israel-shuts-down-associated-press-180453932.html">censored</a> or shut down by governments or falling victim to <a href="https://therecord.media/meduza-independent-russian-media-organization-cyberattacks">cyberattacks</a>. Project Galileo started as an initiative to protect free expression online. It’s grown to not only protect journalists, but also organizations working in the public interest such as voting rights groups, environmental activists, human rights defenders and more. <a href="/the-deluge-of-digital-attacks-against-journalists">We’ve seen journalists targeted</a> on the Internet for various reasons, often stemming from the sensitive and impactful nature of their work. To that end, we’ve partnered with prominent organizations such as <a href="https://internews.org/">Internews</a>, <a href="https://www.cima.ned.org/">Center for International Media Assistance</a>, <a href="https://ipi.media/">International Press Institute</a>, <a href="https://www.mediasupport.org/">International Media Support</a>, and many more to identify where our services are needed.</p>
    <div>
      <h3>“Truth is the first casualty of war”</h3>
      <a href="#truth-is-the-first-casualty-of-war">
        
      </a>
    </div>
    <p>As the conflict in Ukraine continues, Cloudflare has been providing protection to journalists reporting on the conflict, human rights organizations helping refugees on the ground, and groups that have built mobile apps giving people early warnings of missile strikes.</p><p>Among them is Russian-born Galina Timchenko, co-founder, CEO, and owner of independent news outlet <a href="https://meduza.io/en">Meduza</a>. <a href="https://www.accessnow.org/publication/hacking-meduza-pegasus-spyware-used-to-target-putins-critic/">A recent investigation</a> by <a href="https://www.accessnow.org/">Access Now</a> and the <a href="https://citizenlab.ca/">Citizen Lab</a> reveals Timchenko had her iPhone infected with NSO Group's Pegasus spyware during a trip to Berlin, Germany around February 10, 2023. This is the first documented case of Pegasus infection against a Russian journalist, which shows the growing suspicions among European Union governments regarding Russian civil society in exile. Labeled as an "undesirable organization" and blocked by the Russian government, Meduza operates out of Latvia to maintain editorial independence as it continues to publish news focused on covering stories in Russia and the former Soviet Union, including the conflict in Ukraine.</p><p>Meduza is an example of an important organization that lacks the resources to protect itself against intensive online attacks. On a single day in October 2023, Meduza came under DDoS attack peaking at 7 million requests per second and lasting 7 minutes—an onslaught which would have disabled the site under normal circumstances.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4OVuST1bA6lTkmgLgnCxAd/85028080270290bab6cb3bd02cf55eff/image7.png" />
            
            </figure>
    <div>
      <h2>Protecting organizations in a time of conflict</h2>
      <a href="#protecting-organizations-in-a-time-of-conflict">
        
      </a>
    </div>
    <p>We’ve reported on patterns of wartime <a href="/tag/ukraine">violence coinciding with cyberattacks</a>. Unfortunately, these <a href="/internet-traffic-patterns-in-israel-and-palestine-following-the-october-2023-attacks">trends</a> have continued during the war between Israel and Hamas, and the humanitarian crisis in Gaza. Under Project Galileo, we protect a range of organizations based in the region that work to provide emergency response service, vital equipment for hospitals, crowdfunding platforms supporting the Muslim community worldwide, and more. We saw an increase in traffic after October 7, 2023, to both Israeli and Palestinian organizations, coinciding with the start of the Israel-Hamas war.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7sIzne6jwfFI4hnbFLMSuo/69f6b97e351eb8517ea3583e60fd7259/image4-1.png" />
            
            </figure><p>As we explored the data further, we saw an attack against a prominent organization based in the United Kingdom that works to secure Palestinian human rights, observing two dates on which there was an increase in mitigated traffic. The first, on October 15, 2023, coincided with the national demonstration in London in support of Palestine. In the first spike, mitigated requests went from 0 to 44,500 per second within two minutes. When we took a closer look, we identified that many of the requests were mitigated by <a href="https://developers.cloudflare.com/waf/tools/security-level/">Cloudflare’s Security Level</a>, a product that uses the threat score (IP reputation) to decide whether to present a <a href="https://developers.cloudflare.com/waf/reference/cloudflare-challenges/">challenge</a> to the visitor. The second spike, on February 21, 2024, coincided with <a href="https://apnews.com/article/uk-parliament-gaza-cease-fire-vote-c394d17657c32ab861b3a121d0954f18">UK lawmakers calling for a cease-fire</a> in the Israel-Hamas war. This attack peaked at 10,500 mitigated requests per second, lasted 40 minutes, and averaged 6,638 requests per second.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6OAFbXABuNMPzevxHbdIG6/90a5af7521743a97945247bdace22106/unnamed--1-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QO6qTlzDCBooVocOSKq27/e4df73317f6129e284a325befdd3668e/unnamed.png" />
            
            </figure><p>As we reviewed the data, we saw two attacks against an organization that manages vital Internet infrastructure in the Middle East. Attacking infrastructure entities like domain name registries and <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> is not new, <a href="https://www.heise.de/hintergrund/Running-the-ua-top-level-domain-in-times-of-war-6611777.html">as we saw in Ukraine during the beginning of the war in March 2022</a>, and follows an unsettling trend of targeting broad swaths of a country’s Internet infrastructure.</p><p>We saw two notable spikes in traffic, the first in October and the second in December 2023. The first attack took place in three waves on October 18 and 19, peaking at around 78,500 requests per second. In total, daily mitigated requests rose from 2.48 million to 177.42 million.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uyqttXU8eLsFSti8hpw39/36f823914d657dfae641e8f3079125ca/unnamed--2-.png" />
            
            </figure><p>On December 20-21, 2023, there was an attack that lasted more than two hours, averaging 8,600 requests per second throughout that period and reaching as high as 13,830 requests per second. In total, 42.6 million requests were mitigated.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KxZ7tAkeHXRi9Rpa77kA5/907fede043f4ff20e0d09c18aa2c5f5c/unnamed--3-.png" />
            
            </figure>
    <div>
      <h2>And more…</h2>
      <a href="#and-more">
        
      </a>
    </div>
    <p>Here we’ve provided just a snapshot of what organizations see on a daily basis when it comes to keeping their websites online. For more information on attacks against organizations protected under Project Galileo, check out the <a href="https://radar.cloudflare.com/reports/project-galileo-10th-anniv">full Radar report</a>.</p><p>If you are an organization looking for protection under Project Galileo, please visit our website: <a href="https://www.cloudflare.com/galileo/">cloudflare.com/galileo</a>.</p> ]]></content:encoded>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1hLYT57YQjvUN7Lg2VmGdp</guid>
            <dc:creator>Jocelyn Woolbright</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zero Trust WARP: tunneling with a MASQUE]]></title>
            <link>https://blog.cloudflare.com/zero-trust-warp-with-a-masque/</link>
            <pubDate>Wed, 06 Mar 2024 14:00:15 GMT</pubDate>
            <description><![CDATA[ This blog discusses the introduction of MASQUE to Zero Trust WARP and how Cloudflare One customers will benefit from this modern protocol ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gjB6Xaz5umz7Thed17Fb8/831d6d87a94f651c4f4803a6444d0f5c/image5-11.png" />
            
            </figure>
    <div>
      <h2>Slipping on the MASQUE</h2>
      <a href="#slipping-on-the-masque">
        
      </a>
    </div>
    <p>In June 2023, we <a href="/masque-building-a-new-protocol-into-cloudflare-warp/">told you</a> that we were building a new protocol, <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE</a>, into WARP. MASQUE is a fascinating protocol that extends the capabilities of <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> and leverages the unique properties of the QUIC transport protocol to efficiently proxy IP and UDP traffic without sacrificing performance or privacy.</p><p>At the same time, we’ve seen rising demand from <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> customers for features and solutions that only MASQUE can deliver. All customers want WARP traffic to look like HTTPS to avoid detection and blocking by firewalls, while a significant number of customers also require FIPS-compliant encryption. We have something good here, and it’s been proven elsewhere (more on that below), so we are building MASQUE into Zero Trust WARP and will be making it available to all of our Zero Trust customers — at WARP speed!</p><p>This blog post highlights some of the key benefits our Cloudflare One customers will realize with MASQUE.</p>
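<p>For a concrete sense of why MASQUE traffic looks like ordinary HTTPS, here is an illustrative sketch of the pseudo-header block a CONNECT-UDP client sends, following the well-known URI template from RFC 9298. This is a simplified illustration, not WARP’s implementation; a real client sends these headers in an extended CONNECT over HTTP/3 with QUIC datagram support negotiated:</p>

```python
def connect_udp_request(proxy_authority: str, host: str, port: int) -> dict:
    """Build the pseudo-header block for a MASQUE CONNECT-UDP request
    (RFC 9298). The target host and port are encoded into the request
    path using the well-known URI template."""
    return {
        ":method": "CONNECT",
        ":protocol": "connect-udp",
        ":scheme": "https",
        ":authority": proxy_authority,
        ":path": f"/.well-known/masque/udp/{host}/{port}/",
    }
```

<p>On the wire this is just an HTTPS request to the proxy, which is exactly what lets it pass through firewalls that would block a bespoke VPN protocol.</p>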
    <div>
      <h2>Before the MASQUE</h2>
      <a href="#before-the-masque">
        
      </a>
    </div>
    <p>Cloudflare is on a mission to help build a better Internet. And it is a journey we’ve been on with our device client and WARP for almost five years. The precursor to WARP was the 2018 launch of <a href="/announcing-1111/">1.1.1.1</a>, the Internet’s fastest, privacy-first consumer DNS service. WARP was introduced in 2019 with the <a href="/1111-warp-better-vpn/">announcement</a> of the 1.1.1.1 service with WARP, a high performance and secure consumer DNS and VPN solution. Then in 2020, we <a href="/introducing-cloudflare-for-teams">introduced</a> Cloudflare’s Zero Trust platform and the Zero Trust version of WARP to help any IT organization secure their environment, featuring a suite of tools we first built to protect our own IT systems. Zero Trust WARP with MASQUE is the next step in our journey.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zi7uOkKEYkgp6dpBwQRo4/cb0147f0558ed92bb83a0f61a4ebbacc/image4-14.png" />
            
            </figure>
    <div>
      <h2>The current state of WireGuard</h2>
      <a href="#the-current-state-of-wireguard">
        
      </a>
    </div>
    <p><a href="https://www.wireguard.com/">WireGuard</a> was the perfect choice for the 1.1.1.1 with WARP service in 2019. WireGuard is fast, simple, and secure. It was exactly what we needed at the time to guarantee our users’ privacy, and it has met all of our expectations. If we went back in time to do it all over again, we would make the same choice.</p><p>But the other side of the simplicity coin is a certain rigidity. We find ourselves wanting to extend WireGuard to deliver more capabilities to our Zero Trust customers, but WireGuard is not easily extended. Capabilities such as better session management, advanced congestion control, or simply the ability to use FIPS-compliant cipher suites are not options within WireGuard; these capabilities would have to be added on as proprietary extensions, if that were even possible.</p><p>Plus, while WireGuard is popular in VPN solutions, it is not standards-based, and therefore not treated like a first-class citizen in the world of the Internet, where non-standard traffic can be blocked, sometimes intentionally, sometimes not. WireGuard uses a non-standard port, port 51820, by default. Zero Trust WARP changes this to use port 2408 for the WireGuard tunnel, but that is still a non-standard port. For our customers who control their own firewalls, this is not an issue; they simply allow that traffic. But many public Wi-Fi networks, and some of the approximately 7,000 ISPs in the world, know nothing about WireGuard and simply block these ports. We’ve also faced situations where the ISP does know what WireGuard is and blocks it intentionally.</p><p>This can wreak havoc for roaming Zero Trust WARP users at their local coffee shop, in hotels, on planes, or anywhere else with captive portals or public Wi-Fi access, and even sometimes with their local ISP. The user expects reliable access with Zero Trust WARP, and is frustrated when their device is blocked from connecting to Cloudflare’s global network.</p><p>Now we have another proven technology — MASQUE — which uses and extends HTTP/3 and QUIC. Let’s do a quick review of these to better understand why Cloudflare believes MASQUE is the future.</p>
    <div>
      <h2>Unpacking the acronyms</h2>
      <a href="#unpacking-the-acronyms">
        
      </a>
    </div>
    <p>HTTP/3 and QUIC are among the most recent advancements in the evolution of the Internet, enabling faster, more reliable, and more secure connections to endpoints like websites and APIs. Cloudflare worked closely with industry peers through the <a href="https://www.ietf.org/">Internet Engineering Task Force</a> on the development of <a href="https://datatracker.ietf.org/doc/html/rfc9000">RFC 9000</a> for QUIC and <a href="https://datatracker.ietf.org/doc/html/rfc9114">RFC 9114</a> for HTTP/3. The technical background on the basic benefits of HTTP/3 and QUIC is reviewed in our 2019 blog post where we announced <a href="/http3-the-past-present-and-future/">QUIC and HTTP/3 availability</a> on Cloudflare’s global network.</p><p>Most relevant for Zero Trust WARP, QUIC delivers better performance on high-latency or high packet loss networks thanks to packet coalescing and multiplexing. QUIC packets in separate contexts during the handshake can be coalesced into the same UDP datagram, thus reducing the number of receive and system interrupts. With multiplexing, QUIC can carry multiple HTTP sessions within the same QUIC connection. Zero Trust WARP also benefits from QUIC’s high level of privacy, with TLS 1.3 designed into the protocol.</p>
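<p>The multiplexing benefit can be illustrated with a toy model (a deliberately simplified sketch, not a protocol implementation): over TCP, a single lost packet stalls every request multiplexed behind it until it is retransmitted, while QUIC’s independent streams confine the stall to the one affected stream.</p>

```python
# Toy head-of-line-blocking model: packets is a list of (stream_id, lost).
# TCP delivers one ordered byte stream, so a loss stalls everything queued
# behind it; QUIC recovers per stream, so a loss stalls only its own stream.
def stalled_streams(transport: str, packets: list) -> set:
    stalled = set()
    for i, (stream_id, lost) in enumerate(packets):
        if lost:
            if transport == "tcp":
                # the gap blocks this packet and everything after it
                return {sid for sid, _ in packets[i:]}
            stalled.add(stream_id)  # quic: only this stream waits
    return stalled

# three multiplexed requests; the packet for stream 2 is lost in transit
packets = [(1, False), (2, True), (3, False)]
```

<p>With TCP semantics, streams 2 and 3 both wait on the retransmission; with QUIC semantics, only stream 2 does.</p>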
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ARWf5TO9CaOucOU527M2X/b53da149e40b8c28fc812552cfcaca26/image2-11.png" />
            
            </figure><p>MASQUE unlocks QUIC’s potential for proxying by providing the application-layer building blocks to support efficient tunneling of TCP and UDP traffic. In Zero Trust WARP, MASQUE will be used to establish a tunnel over HTTP/3, delivering the same capability as WireGuard tunneling does today. In the future, we’ll be in a position to add more value using MASQUE, leveraging Cloudflare’s ongoing participation in the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a>. This blog post is a good read for those interested in <a href="/unlocking-quic-proxying-potential/">digging deeper into MASQUE</a>.</p><p>OK, so Cloudflare is going to use MASQUE for WARP. What does that mean to you, the Zero Trust customer?</p>
    <div>
      <h2>Proven reliability at scale</h2>
      <a href="#proven-reliability-at-scale">
        
      </a>
    </div>
    <p>Cloudflare’s network today spans more than 310 cities in over 120 countries, and interconnects with over 13,000 networks globally. HTTP/3 and QUIC were introduced to the Cloudflare network in 2019, the HTTP/3 standard was <a href="/cloudflare-view-http3-usage/">finalized in 2022</a>, and HTTP/3 represented about <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2023-01-01&amp;dateEnd=2023-12-31#http-1x-vs-http-2-vs-http-3">30% of all HTTP traffic on our network in 2023</a>.</p><p>We are also using MASQUE for <a href="/icloud-private-relay/">iCloud Private Relay</a> and other Privacy Proxy partners. The services that power these partnerships, from our Rust-based <a href="/introducing-oxy/">proxy framework</a> to our open source <a href="https://github.com/cloudflare/quiche">QUIC implementation</a>, are already deployed globally in our network and have proven to be fast, resilient, and reliable.</p><p>Cloudflare is already operating MASQUE, HTTP/3, and QUIC reliably at scale. So we want you, our Zero Trust WARP users and Cloudflare One customers, to benefit from that same reliability and scale.</p>
    <div>
      <h2>Connect from anywhere</h2>
      <a href="#connect-from-anywhere">
        
      </a>
    </div>
    <p>Employees need to be able to connect from anywhere that has an Internet connection. But that can be a challenge as many security engineers will configure firewalls and other networking devices to block all ports by default, and only open the most well-known and common ports. As we pointed out earlier, this can be frustrating for the roaming Zero Trust WARP user.</p><p>We want to fix that for our users, and remove that frustration. HTTP/3 and QUIC deliver the perfect solution. QUIC is carried on top of UDP (<a href="https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml">protocol number 17</a>), while HTTP/3 uses <a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml">port 443</a> for encrypted traffic. Both of these are well known, widely used, and are very unlikely to be blocked.</p><p>We want our Zero Trust WARP users to reliably connect wherever they might be.</p>
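<p>The firewall behavior described above can be sketched as follows (the allowlist is an illustrative assumption, not any specific vendor’s defaults): a default-deny policy that admits only well-known ports passes QUIC on UDP 443 while dropping WireGuard’s ports.</p>

```python
# Sketch of a conservative default-deny firewall: only well-known UDP
# ports are admitted. The allowlist here is illustrative.
ALLOWED_UDP_PORTS = {53, 443}  # DNS and HTTPS/QUIC

def udp_allowed(port: int) -> bool:
    return port in ALLOWED_UDP_PORTS

# WireGuard's default port (51820) and Zero Trust WARP's WireGuard port
# (2408) are dropped by such a policy; MASQUE's HTTP/3 traffic on UDP 443
# passes because it is indistinguishable from ordinary web traffic.
```
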
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53RZc92rNIUWscFuLuA13w/098b18464be4ee893d51786ff74a5bc4/image1-13.png" />
            
            </figure>
    <div>
      <h2>Compliant cipher suites</h2>
      <a href="#compliant-cipher-suites">
        
      </a>
    </div>
    <p>MASQUE leverages <a href="https://datatracker.ietf.org/doc/html/rfc8446">TLS 1.3</a> with QUIC, which provides a number of cipher suite choices. WireGuard also uses standard cipher suites. But some standards are more, let’s say, standard than others.</p><p>NIST, the <a href="https://www.nist.gov/">National Institute of Standards and Technology</a> and part of the US Department of Commerce, does a tremendous amount of work across the technology landscape. Of interest to us is the NIST research into network security that results in <a href="https://csrc.nist.gov/pubs/fips/140-2/upd2/final">FIPS 140-2</a> and similar publications. NIST studies individual cipher suites and publishes lists of those they recommend for use, recommendations that become requirements for US Government entities. Many other customers, both government and commercial, use these same recommendations as requirements.</p><p>Our first MASQUE implementation for Zero Trust WARP will use <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3</a> and FIPS compliant cipher suites.</p>
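<p>As a hedged illustration of what that restriction looks like in practice: TLS 1.3 defines a small set of cipher suites, and the AES-GCM suites are built from NIST-approved primitives, while ChaCha20-Poly1305 is not on NIST’s approved list. A simplified filter might look like this:</p>

```python
# TLS 1.3 cipher suites from RFC 8446 (the CCM variants are omitted for
# brevity).
TLS13_SUITES = [
    "TLS_AES_128_GCM_SHA256",
    "TLS_AES_256_GCM_SHA384",
    "TLS_CHACHA20_POLY1305_SHA256",
]

def fips_suites(suites):
    # Simplified filter (an assumption, not a compliance tool): keep only
    # the AES-GCM suites, which rest on NIST-approved primitives.
    return [s for s in suites if "AES" in s and "GCM" in s]
```

<p>A FIPS-focused deployment would restrict negotiation to the filtered list; real compliance additionally depends on using a validated cryptographic module.</p>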
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25Qc8qdJd78bngZqpH0Pv7/1541929144b5ed4d85ccca36e0787957/image3-12.png" />
            
            </figure>
    <div>
      <h2>How can I get Zero Trust WARP with MASQUE?</h2>
      <a href="#how-can-i-get-zero-trust-warp-with-masque">
        
      </a>
    </div>
    <p>Cloudflare engineers are hard at work implementing MASQUE for the mobile apps, the desktop clients, and the Cloudflare network. Progress has been good, and we will open this up for beta testing early in the second quarter of 2024 for Cloudflare One customers. Your account team will be reaching out with participation details.</p>
    <div>
      <h2>Continuing the journey with Zero Trust WARP</h2>
      <a href="#continuing-the-journey-with-zero-trust-warp">
        
      </a>
    </div>
    <p>Cloudflare launched WARP five years ago, and we’ve come a long way since. This introduction of MASQUE to Zero Trust WARP is a big step, one that will immediately deliver the benefits noted above. But there will be more — we believe MASQUE opens up new opportunities to leverage the capabilities of QUIC and HTTP/3 to build innovative <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solutions</a>. And we’re also continuing to work on other new capabilities for our Zero Trust customers. Cloudflare is committed to continuing our mission to help build a better Internet, one that is more private and secure, scalable, reliable, and fast. And if you would like to join us in this exciting journey, check out our <a href="https://www.cloudflare.com/careers/jobs/">open positions</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <guid isPermaLink="false">5sDoFBGGZJbT4D9pftVhXY</guid>
            <dc:creator>Dan Hall</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 2023 Annual Founders’ Letter]]></title>
            <link>https://blog.cloudflare.com/cloudflares-annual-founders-letter-2023/</link>
            <pubDate>Wed, 27 Sep 2023 13:00:25 GMT</pubDate>
            <description><![CDATA[ Cloudflare is officially a teenager. We launched on September 27, 2010. Today we celebrate our thirteenth birthday ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67OiVANFpoXiW5HSigsJXf/daf80a65e1bcb4c51943f2377bd7cff4/Founders--Letter-2.png" />
            
            </figure><p>Cloudflare is officially a teenager. We launched on September 27, 2010. Today we celebrate our thirteenth birthday. As is our tradition, we use the week of our birthday to launch products that we think of as our gift back to the Internet. More on some of the incredible announcements in a second, but we wanted to start by talking about something more fundamental: our identity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fdonv6sU0NR22ONAvY8Nf/3a6a1d778beedf089e3693770f4489cc/Untitled-2.png" />
            
            </figure><p>Like many kids, it took us a while to fully understand who we are. We chafed at being put in boxes. People would describe Cloudflare as a security company, and we'd say, "That's not all we do." They'd say we were a network, and we'd object that we were so much more. Worst of all, they'd sometimes call us a "CDN," and we'd remind them that caching is a part of any sensibly designed system, but it shouldn't be a feature unto itself. Thank you very much.</p><p>And so, yesterday, the day before our thirteenth birthday, we announced to the world finally what we realized we are: a connectivity cloud.</p>
    <div>
      <h3>The connectivity cloud</h3>
      <a href="#the-connectivity-cloud">
        
      </a>
    </div>
    <p>What does that mean? "Connectivity" means we measure ourselves by connecting people and things together. Our job isn't to be the final destination for your data, but to help it move and flow. Any application, any data, anyone, anywhere, anytime — that's the essence of connectivity, and that’s always been the promise of the Internet.</p><p>"Cloud" means the batteries are included. It scales with you. It’s programmable. Has consistent security built in. It’s intelligent and learns from your usage and others' and optimizes for outcomes better than you ever could on your own.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vtrLo5x2vMruQ6lphoUTm/9545282c61e0dd10d19830401c10c481/Untitled--1--1.png" />
            
            </figure><p>Our connectivity cloud is worth contrasting against some other clouds. The so-called hyperscale public clouds are, in many ways, the opposite. They optimize for hoarding your data. Locking it in. Making it difficult to move. They are captivity clouds. And, while they may be great for some things, their full potential will only truly be unlocked for customers when combined with a connectivity cloud that lets you mix and match the best of each of their features.</p>
    <div>
      <h3>Enabling the future</h3>
      <a href="#enabling-the-future">
        
      </a>
    </div>
    <p>That's what we're seeing from the hottest startups these days. Many of the leading AI companies are using Cloudflare's connectivity cloud to move their training data to wherever there's excess GPU capacity. We estimate that across the AI startup ecosystem, Cloudflare is the most commonly used cloud provider. Because, if you're building the future, you know connectivity and the agility of the cloud are key.</p><p>We've spent the last year listening to our AI customers and trying to understand what the future of AI will look like and how we can better help them build it. Today, we're releasing a series of products and features borne of those conversations and opening incredible new opportunities.</p><p>The biggest opportunity in <a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/">AI</a> is inference. Inference is what happens when you type a prompt to write a poem about your love of connectivity clouds into ChatGPT and, seconds later, get a coherent response. Or when you run a search for a picture of your passport on your phone, and it immediately pulls it up.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hZTnf3ox43UTTLSCoQYoi/b0958157538422c72ca13764af98c06e/Untitled--2--1.png" />
            
            </figure><p>The models that power those modern miracles take significant time to generate — a process called training. Once trained though, they can have new data fed through them over and over to generate valuable new output.</p>
    <div>
      <h3>Where inference happens</h3>
      <a href="#where-inference-happens">
        
      </a>
    </div>
    <p>Before today, those models could run in two places. The first was the end user's device — like in the case of the search for “passport” in the photos on your phone. When that's possible it's great. It's fast. Your private data stays local. And it works even when there's no network access. But it's also challenging. Models are big and the storage on your phone or other local device is limited. Moreover, putting the fastest GPU resources to process these models in your phone makes the phone expensive and burns precious battery resources.</p><p>The alternative has been the centralized public cloud. This is what’s used for a big model like OpenAI’s GPT-4, which runs services like ChatGPT. But that has its own challenges. Today, nearly all the GPU resources for AI are deployed in the US — a fact that rightfully troubles the rest of the world. As AI queries get more personal, sending them all to some centralized cloud is a potential security and data locality disaster waiting to happen. Moreover, it's inherently slow and less efficient and therefore more costly than running the inference locally.</p>
    <div>
      <h3>A third place for inference</h3>
      <a href="#a-third-place-for-inference">
        
      </a>
    </div>
    <p>Running on the device is too small. Running on the centralized public cloud is too far. It’s like the story of “Goldilocks and the Three Bears”: the right answer is somewhere in between. That's why today we're excited to be rolling out modern GPU resources across Cloudflare's global connectivity cloud. The third place for AI inference. Not too small. Not too far. The perfect step in between. By the end of the year, you'll be able to run AI models in more than 100 cities in 40+ countries where Cloudflare operates. By the end of 2024, we plan to have inference-tuned GPUs deployed in nearly every city that makes up Cloudflare's global network and within milliseconds of nearly every device connected to the Internet worldwide.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fVvmxz6QyAagRfc7jnKlL/c5ee84b4149ace4a7d041fb34211892a/Untitled--3--1.png" />
            
            </figure><p>(A brief shout out for the Cloudflare team members who are, as of this moment, literally dragging suitcases full of NVIDIA GPU cards around the world and installing them in the servers that make up our network worldwide. It takes a lot of atoms to move all the bits that we do, and it takes intrepid people spanning the globe to update our network to facilitate these new capabilities.)</p><p>Running AI in a connectivity cloud like Cloudflare gives you the best of both worlds: nearly boundless resources running locally near any device connected to the Internet. And we've made it flexible to run whatever models a developer creates, easy to use without needing a dev ops team, and inexpensive to run where you only pay for when we're doing inference work for you.</p><p>To make this tangible, think about a Cloudflare customer that makes consumer wearable devices. They make devices that need to be smart but also affordable and have the longest possible battery life. As explorers rely on them literally to navigate out of harrowing conditions, tradeoffs aren't an option. That's why, when they heard about Cloudflare Workers AI, they immediately knew it was something they needed to try. The promise is powerful devices that are still affordable and have great battery life while still respecting users’ privacy and security.</p><p>They are one of the limited set of customers we gave an early sneak peek to, all of whom immediately started running off ideas of what they could do next and clamoring to get more access. We feel like we’ve seen it and are here to report: the not-so-distant future is super cool.</p>
    <div>
      <h3>The spirit of helping build a better Internet</h3>
      <a href="#the-spirit-of-helping-build-a-better-internet">
        
      </a>
    </div>
    <p>Over the years we've announced several things on our birthday that have gone on to change the future of the Internet. On our <a href="/introducing-cloudflares-automatic-ipv6-gatewa/">first birthday</a>, we announced an IPv6 gateway that has helped the Internet scale past its early protocol decisions. On our <a href="/introducing-universal-ssl/">fourth birthday</a>, we announced that we were making encryption free and doubled the size of the encrypted web in a day. On our <a href="/code-everywhere-cloudflare-workers/">seventh birthday</a>, we launched the Workers platform that has revolutionized how modern applications are developed. <a href="/introducing-r2-object-storage/">On our eleventh birthday</a> we announced R2 and further freed data from the captivity clouds.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Zu0yp9b8e6WiwtsfH8GFM/607ec11c1efc0ffae173bcbc762957ee/Untitled--4--1.png" />
            
            </figure><p>This year, as we celebrate Cloudflare's thirteenth birthday, we set out to have as dramatic an impact on AI development as we have in other areas we've focused on in the past: providing tools that bring affordable, powerful AI inference to anyone, anywhere on Earth.</p>
    <div>
      <h3>Closing with a poem</h3>
      <a href="#closing-with-a-poem">
        
      </a>
    </div>
    <p>And so, to commemorate that, let's close with a poem about the connectivity cloud. This one was generated using a tiny bit of code and Meta's Llama 2 <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">LLM model</a> running on Cloudflare's Workers AI platform in a data center not far from where we're writing this…</p><p><code><i>Prompt: “Write a poem that talks about the connectivity cloud and contrast it with the captivity cloud”</i></code></p><p>The connectivity cloud, a wondrous sight to see,</p><p>Where data flows and knowledge is set free,</p><p>A place where minds meet and ideas take flight,</p><p>A world of endless possibility in sight.</p><p></p><p>But then there's the captivity cloud, a darker place,</p><p>Where thoughts are trapped and minds are in disgrace,</p><p>A prison of the mind, where freedom's lost,</p><p>A world of endless sorrow and cost.</p><p></p><p>So let us choose the connectivity cloud,</p><p>And let our minds be free and allowed,</p><p>To soar and glide and reach for the sky,</p><p>And never be held back, but always fly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aZzQ7Vw4G69UGP0rGcepY/91eb15cde2fd66f58e36a4306c039ff6/Screenshot-2023-09-27-at-09.54.04.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SI9dzaeXO0iBoPxQNDdh5/4230708aaf9a2d74dac2688aa0cd150a/Untitled--5-.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Founders' Letter]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">3lnlO41gonF28Yk7CXzzno</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Michelle Zatlyn</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making home Internet faster has little to do with “speed”]]></title>
            <link>https://blog.cloudflare.com/making-home-internet-faster/</link>
            <pubDate>Tue, 18 Apr 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ The speed of an Internet connection is more about decreasing real-world latency than adding underutilized bandwidth ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lEfazasknt17nyfB1EKGP/2322dbc487af3a096a1f5d3206edaf84/image4-17.png" />
            
            </figure><p>More than ten years ago, researchers at Google <a href="https://docs.google.com/a/chromium.org/viewer?a=v&amp;pid=sites&amp;srcid=Y2hyb21pdW0ub3JnfGRldnxneDoxMzcyOWI1N2I4YzI3NzE2">published</a> a paper with the seemingly heretical title “More Bandwidth Doesn’t Matter (much)”. We <a href="/the-bandwidth-of-a-boeing-747-and-its-impact/">published</a> our own blog showing it is faster to fly 1TB of data from San Francisco to London than it is to upload it on a 100 Mbps connection. Unfortunately, things haven’t changed much. When you make purchasing decisions about home Internet plans, you probably consider the bandwidth of the connection when evaluating Internet performance. More bandwidth is faster speed, or so the marketing goes. In this post, we’ll use real-world data to show both bandwidth and – spoiler alert! – latency impact the speed of an Internet connection. By the end, we think you’ll understand why Cloudflare is so laser <a href="/network-performance-update-developer-week/">focused</a> on <a href="/last-mile-insights/">reducing</a> <a href="/new-cities-april-2022-edition/">latency</a> <a href="/network-performance-update-cloudflare-one-week-june-2022/">everywhere</a> we can find it.</p><p>The grand summary of the blog that follows is this:</p><ul><li><p>There are many ways to evaluate network performance.</p></li><li><p>Performance “goodness” depends on the application -- a good number for one application can be of zero benefit to a different application.</p></li><li><p>“Speed” numbers can be misleading, not least because any single metric cannot accurately describe how all applications will perform.</p></li></ul><p>To better understand these ideas, we should define bandwidth and latency. Bandwidth is the amount of data that can be transmitted at any single time. It’s the maximum throughput, or capacity, of the communications link between two servers that want to exchange data. 
The “bottleneck” is the place in the network where the connection is constrained by the amount of bandwidth available. Usually this is in the “last mile”, either the wire that connects a home, or the modem or router in the home itself.</p><p>If the Internet is an information superhighway, bandwidth is the number of lanes on the road. The wider the road, the more traffic can fit on the highway at any time. Bandwidth is useful for downloading large files like operating system updates and big game updates. We use bandwidth when streaming video, though probably less than you think. Netflix <a href="https://help.netflix.com/en/node/306">recommends</a> 15 Mbps of bandwidth to watch a stream in 4K/Ultra HD. A 1 Gbps connection could stream more than 60 Netflix shows in 4K at the same time!</p><p>Latency, on the other hand, is the time it takes data to move through the Internet. To extend our superhighway analogy, latency is the speed at which vehicles move on the highway. If traffic is moving quickly, you’ll get to your destination faster. Latency is measured in the number of milliseconds that it takes a packet of data to travel between a client (such as your laptop computer) and a server. In practice, we have to measure latency as the <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trip time (RTT)</a> between client and server because every device has its own independent clock, so it’s hard to measure latency in just one direction. If you’re practicing tennis against a wall, round-trip latency is the time the ball was in the air. On the Internet’s fibre optic “backbone”, data <a href="https://www.blog.adva.com/en/speed-light-fiber-first-building-block-low-latency-trading-infrastructure#:~:text=The%20refractive%20index%20of%20light,1.467%20%3D%20124%2C188%20miles%20per%20second.">travels</a> at almost 200,000 kilometers per second as it bounces off the glass on the inside of optical wires.
That’s fast!</p><p>Low-latency connections are important for gaming, where tiny bits of data, such as the change in position of players in a game, need to reach another computer quickly. And increasingly, we’re becoming aware of high latency when it makes our live video conferencing choppy and unpleasant.</p><p>While we can’t make light travel through glass much faster, we can <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">improve latency</a> by moving the content closer to users, shortening the distance data needs to travel. That’s the effect of our presence in more than <a href="https://www.cloudflare.com/network/">285 cities</a> globally: when you’re on the Internet superhighway trying to reach Cloudflare, we want to be just off the next exit.</p><p>The terms bandwidth, capacity, and maximum throughput are slightly <a href="https://en.wikipedia.org/wiki/Bandwidth_(computing)">different</a> from each other, but close enough in their meaning to be <a href="https://en.wikipedia.org/wiki/Network_performance">interchangeable</a>. Confusingly, “speed” has come to mean bandwidth when talking about Internet plans, but “speed” gives no indication of the latency between your devices and the servers they connect to. More on this later. For now, we don’t use the Internet only to play games, nor only to watch streaming video. We do those and more, and we visit a lot of normal web pages in between.</p><p>In the 2010 <a href="https://docs.google.com/a/chromium.org/viewer?a=v&amp;pid=sites&amp;srcid=Y2hyb21pdW0ub3JnfGRldnxneDoxMzcyOWI1N2I4YzI3NzE2">paper</a> from Google, the author simulated loading web pages while varying the throughput and latency of the connection. The finding was that above about 5 Mbps, the page doesn’t load much faster. Increasing bandwidth from 1 Mbps to 2 Mbps is almost a 40 percent improvement in page load time.
From 5 Mbps to 6 Mbps is less than a 5 percent improvement.</p><p>However, something interesting happened when varying the latency (the Round Trip Time, or RTT): there was a linear and proportional improvement in page load times. For every 20 milliseconds of reduced latency, the page load time improved by about 10%.</p><p>Let’s see what this looks like in real life with empirical data. Below is a chart from an excellent recent <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4178804">paper</a> by two researchers from MIT. Using data from the FCC’s <a href="https://www.fcc.gov/general/measuring-broadband-america">Measuring Broadband America</a> program, these researchers produced a chart, summarized below, showing results similar to the 2010 simulation. Though the point of diminishing returns to more bandwidth has moved higher – to about 20 Mbps – the overall trend was exactly the same.</p>
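<p>The bandwidth side of this is easy to sanity-check with arithmetic, including the fly-a-terabyte claim from the top of the post (the flight time is an approximation):</p>

```python
# Time to push a given number of bytes through a link of a given bandwidth.
def transfer_hours(total_bytes: float, mbps: float) -> float:
    bits = total_bytes * 8
    return bits / (mbps * 1e6) / 3600

one_tb = 1e12                         # decimal terabyte, in bytes
upload = transfer_hours(one_tb, 100)  # about 22.2 hours at 100 Mbps
flight_sfo_lhr = 11                   # rough San Francisco-to-London flight time, in hours
```

<p>At 100 Mbps the upload takes roughly twice the flight; even at 1 Gbps it would still take over two hours.</p>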
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wcZyRWYVXlEd04AvuO820/0ffb9e3162238efc829dac630b0caa0a/download--5-.png" />
            
            </figure><p>We repeated this analysis with a focus on latency using our own Cloudflare data. The results are summarized in the next chart, showing a familiar pattern. For every 200 milliseconds of latency we can save, we cut the page load time by over 1 second. That relationship holds whether the latency is 950 milliseconds or 50 milliseconds.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UyB3HCO8RA5EWTWF8jFTt/7ce08383bc1d67d3215eb76e96bc21ba/download-1.png" />
            
            </figure><p>There are a few reasons latency matters in the set of transactions needed to load pages. When you connect to a website, the first thing that your browser does is establish a secure connection, to authenticate the website and ensure your data is encrypted. The protocols to do this are TCP and TLS, or <a href="/the-road-to-quic/">QUIC</a> (which is encrypted by default). The number of message exchanges each needs to establish a secure connection varies, but one aspect of the establishment phase is common to all of them: latency matters most.</p>
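A back-of-the-envelope sketch of those message exchanges (the round-trip counts are the common textbook values for each protocol stack; this deliberately ignores accelerators like TCP Fast Open, TLS session resumption, and 0-RTT, all of which can shave round trips off):

```python
# Round trips needed before the first HTTP request can be sent.
# Simplified sketch: resumption and 0-RTT modes, which reduce these
# counts, are deliberately ignored.
HANDSHAKE_RTTS = {
    "TCP + TLS 1.2": 3,  # 1 RTT for TCP, 2 for the TLS 1.2 handshake
    "TCP + TLS 1.3": 2,  # 1 RTT for TCP, 1 for the TLS 1.3 handshake
    "QUIC":          1,  # transport and cryptographic handshake combined
}

def setup_ms(protocol, rtt_ms):
    """Time spent on connection setup alone, before any page data flows."""
    return HANDSHAKE_RTTS[protocol] * rtt_ms

# On a 100 ms path, setup alone costs 100-300 ms depending on the
# stack; on a 10 ms path, 10-30 ms. Bandwidth never enters into it.
setup_cost = {proto: setup_ms(proto, 100) for proto in HANDSHAKE_RTTS}
```

Notice that the setup cost is pure latency: a gigabit connection with a 100 ms RTT pays exactly the same handshake penalty as a 5 Mbps one.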
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A4Y3IJhLCfCqYMwjAjhst/1b9e0aedcb9054942bf2f256e1ecbf86/download--1--1.png" />
            
            </figure><p>On top of that, when we load a webpage after we establish encryption and verify website authority, we might be asking the browser to <a href="https://www.webpagetest.org/result/221107_BiDcB1_ERZ/2/details/#waterfall_view_step1">load</a> hundreds of different files across dozens of different domains. Some of these files can be loaded in parallel, but others need to be loaded sequentially. As the browser races to compile all these different files, it’s the speed at which it can get to the server and back that determines how fast it can put the page together. The files are often quite small, but there are a lot of them.</p><p>The chart below shows the beginning of what the browser does when it loads cnn.com. First is the connection handshake phase, followed by a 301 redirect to <a href="http://www.cnn.com">www.cnn.com</a>, which requires a completely new connection handshake before the browser can load the main HTML page in step two. Only then, more than 1 second into the load, does it learn about all the JavaScript files it requires in order to render the page. Files 3-19 are requested mostly on the same connection but are not served until after the HTML file has been delivered in full. Files 8, 9, and 10 are requested over separate connections (all costing handshakes). Files 20-27 are all blocked on earlier files and similarly need new connections. They can’t start until the browser has the previous file back from the server and executes it. There are 650 assets in this page load, and the blocking happens all the way through the page load. Here’s why this matters: better latency makes every file load faster, which in turn unblocks other files faster, and so on.</p>
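That blocking behavior is a critical-path problem: no matter how much bandwidth is available, a file can’t even be requested until everything it depends on has arrived and executed. A minimal sketch (the dependency graph and file names here are hypothetical, invented for illustration):

```python
# Sketch: the depth of a page's dependency chain sets a latency floor.
# The graph below is invented for illustration.

def critical_path_rtts(deps, resource):
    """Sequential request/response rounds needed to fetch `resource`:
    one round for itself plus the deepest chain of dependencies."""
    if not deps.get(resource):
        return 1
    return 1 + max(critical_path_rtts(deps, d) for d in deps[resource])

deps = {
    "index.html": [],               # fetched first
    "app.js":     ["index.html"],   # discovered inside the HTML
    "widget.js":  ["app.js"],       # discovered only after app.js runs
    "style.css":  ["index.html"],   # fetched in parallel with app.js
}

# widget.js needs 3 serialized rounds: at 100 ms RTT that is at least
# 300 ms of waiting that no amount of extra bandwidth can remove.
depth = critical_path_rtts(deps, "widget.js")
```

Cutting the RTT shrinks every level of that chain at once, which is exactly why better latency compounds through a page load.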
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4VkNcalwdnqthmWSvdepwK/370b582e9a5ebf1b5dcb93c147be3987/download--2--1.png" />
            
            </figure><p>The protocols will use whatever bandwidth is available, but a transfer often completes before that bandwidth is fully consumed. It’s no wonder then that adding more bandwidth doesn’t speed up the page load, but better latency does. While developments like <a href="/early-hints/">Early Hints</a> help this by informing browsers of dependencies earlier, allowing them to pre-connect to servers or pre-fetch resources that don’t need to be strictly ordered, this is still a problem for many websites on the Internet today.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59WkhTSfM4ThiaTl2GKYg5/0711ffe95378b01b113d78ce7f6f8dc0/download--3--1.png" />
            
            </figure><p>Recently, Internet researchers have turned their attention to using our understanding of the relationship between throughput and latency to improve Internet Quality of Experience (QoE). A <a href="https://www.bitag.org/latency-explained.php">paper</a> from the Broadband Internet Technical Advisory Group (BITAG) summarizes:</p><blockquote><p>But we now recognize that it is not just greater throughput that matters, but also consistently low latency. Unfortunately, the way that we’ve historically understood and characterized latency was flawed, and our latency measurements and metrics were not aligned with end-user QoE.</p></blockquote><p>Confusing matters further, there is a difference between latency on an idle Internet connection and latency measured in working conditions when many connections share the network resources, which we call “working latency” or “<a href="https://www.ietf.org/archive/id/draft-cpaasch-ippm-responsiveness-00.html">responsiveness</a>”. Since responsiveness is what the user experiences as the speed of their Internet connection, it’s important to understand and measure this particular latency.</p><p>An Internet connection can suffer from poor responsiveness (even if it has good idle latency) when data is delayed in buffers. If you download a large file, for example an operating system update, the server sending the file might send it with higher throughput than the Internet connection can accept. That’s ok. Extra bits of the file will sit in a buffer until it’s their turn to go through the funnel. Adding extra lanes to the highway allows more cars to pass through, and is a good strategy if we aren’t particularly concerned with the speed of the traffic.</p><p>Say, for example, Christabel is watching a stream of the news while on a video meeting. When Christabel starts watching the video, her browser fetches a bunch of content and stores it in various buffers on the way from the content host to the browser. 
Those same buffers also contain data packets pertaining to the video meeting Christabel is currently in. If the data generated as part of a video conference sits in the same buffer as the video files, the video files will fill up the buffer and cause delay for the video meeting packets as well. The larger the buffers, the longer the wait for video conference packets.</p>
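The arithmetic behind that waiting is straightforward: a packet that arrives behind a full buffer must wait for everything ahead of it to drain through the link first. A sketch with illustrative numbers (the buffer size and link rate below are assumptions, not measurements):

```python
# Queueing delay added by data already sitting in a buffer: everything
# queued ahead must drain through the link before our packet gets its
# turn. The buffer size and link rate are illustrative assumptions.

def queueing_delay_ms(buffered_bytes, link_mbps):
    bits_ahead = buffered_bytes * 8
    link_bps = link_mbps * 1_000_000
    return bits_ahead / link_bps * 1000

# 1 MB of buffered video on a 10 Mbps link delays the next
# video-meeting packet by roughly 800 ms.
delay_ms = queueing_delay_ms(1_000_000, 10)
```

Roughly 800 milliseconds of added delay is far beyond what a real-time call can tolerate, even though the connection’s idle latency might be excellent.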
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZyNLHmYFSZ3TK5ggfORHe/1b645133b8e42bccb0642fb409cb23cf/download--4--1.png" />
            
            </figure>
    <div>
      <h3>Cloudflare is helping to make “speed” meaningful</h3>
      <a href="#cloudflare-is-helping-to-make-speed-meaningful">
        
      </a>
    </div>
    <p>To help users understand the strengths and weaknesses of their connection, we recently added <a href="https://developers.cloudflare.com/fundamentals/speed/aim/">Aggregated Internet Measurement (AIM)</a> scores to our own <a href="https://speed.cloudflare.com">“Speed” Test</a>. These scores remove the technical metrics and give users a real-world, plain-English understanding of what their connection will be good at, and where it might struggle. We’d also like to collect more data from our speed test to help track Page Load Times (PLT) and see how they correlate with reductions in working latency. You’ll start seeing those numbers on our speed test soon!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/439M08KpRs8HFYWbYuRAqf/767e90e4080ea76191716a33eb0151c8/download--6-.png" />
            
            </figure><p>We all use our Internet connections in slightly different ways, but we share the desire for our connections to be as fast as possible. As more and more services move into the cloud – word documents, music, websites, communications, etc – the speed at which we can access those services becomes critical. While bandwidth plays a part, the latency of the connection – the real Internet “speed” – is more important.</p><p>At Cloudflare, we’re working every day to help build a more performant Internet. Want to help? Apply for one of our open engineering roles <a href="https://www.cloudflare.com/careers/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <guid isPermaLink="false">5aV1I3MBF818MwHBwaCds8</guid>
            <dc:creator>Mike Conlow</dc:creator>
        </item>
        <item>
            <title><![CDATA[Closing out 2022 with our latest Impact Report]]></title>
            <link>https://blog.cloudflare.com/impact-report-2022/</link>
            <pubDate>Fri, 16 Dec 2022 20:23:31 GMT</pubDate>
            <description><![CDATA[ Our Impact Report is an annual summary highlighting how we are trying to build a better Internet and the progress we are making on our environmental, social, and governance priorities. ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4INlvXaIlgBKNVFsHKwa6z/c6659bb457e8dd63591d2bb8ec8a2710/image1-47.png" />
            
            </figure><p>To conclude <a href="https://www.cloudflare.com/impact-week/">Impact Week</a>, which has been filled with announcements about new initiatives and features that we are thrilled about, today we are publishing our <a href="https://www.cloudflare.com/impact-report-2022">2022 Impact Report</a>.</p><p>In short, the Impact Report is an annual summary highlighting how we are helping build a better Internet and the progress we are making on our environmental, social, and governance priorities. It is where we showcase successes from Cloudflare Impact programs, celebrate awards and recognitions, and explain our approach to fundamental values like transparency and privacy.</p><p>We believe that a better Internet is principled, for everyone, and sustainable; these are the three themes around which we constructed the report. The Impact Report also serves as our repository for disclosures consistent with our commitments for the Global Reporting Initiative (GRI), Sustainability Accounting Standards Board (SASB), and UN Global Compact (UNGC).</p><p>Check out the <a href="https://www.cloudflare.com/impact-report-2022">full report</a> to:</p><ul><li><p>Explore how we are expanding the value and scope of our Cloudflare Impact programs</p></li><li><p>Review our latest diversity statistics — and our newest employee resource group</p></li><li><p>Understand how we are supporting humanitarian and human rights causes</p></li><li><p>Read quick summaries of Impact Week announcements</p></li><li><p>Examine how we calculate and validate emissions data</p></li></ul><p>As fantastic as 2022 has been for scaling up Cloudflare Impact and making strides toward a better Internet, we are aiming even higher in 2023. 
To keep up with developments throughout the year, follow us on <a href="https://twitter.com/Cloudflare">Twitter</a> and <a href="https://www.linkedin.com/company/cloudflare/">LinkedIn</a>, and keep an eye out for updates on our <a href="https://www.cloudflare.com/impact/">Cloudflare Impact</a> page.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Sustainability]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">4MxmSbIovB3RILQylXXIlg</guid>
            <dc:creator>Andie Goodwin</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare advocates for a better Internet]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-advocates-for-a-better-internet/</link>
            <pubDate>Fri, 16 Dec 2022 14:02:00 GMT</pubDate>
            <description><![CDATA[ In this blog we outline how we advocate, across the many jurisdictions where we operate, for a better Internet, in our engagement with governments and regulators. ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5VXOtnACll7BXUt7oxnH60/ff93abe008352a6315343a68cd634064/Advocating-for-a-Better-Internet.png" />
            
            </figure><p>We mean a lot of things when we talk about helping to build a better Internet. Sometimes, it’s about democratizing technologies that were previously only available to the wealthiest and most technologically savvy companies; sometimes it’s about protecting the most vulnerable groups from cyber attacks and online persecution. And the Internet does not exist in a vacuum.</p><p>As a global company, we see the way that the future of the Internet is affected by governments, regulations, and people. If we want to help build a better Internet, we have to make sure that we are in the room, sharing Cloudflare’s perspective in the many places where important conversations about the Internet are happening. And that is why we believe strongly in the value of public policy.</p><p>We thought this week would be a great opportunity to share Cloudflare’s principles and our theories behind policy engagement. Because at its core, a public policy approach needs to reflect who the company is through its actions and rhetoric. And as a company, we believe there is real value in helping governments understand how companies work, and helping our employees understand how governments and lawmakers work. Especially now, during a time in which many jurisdictions are passing far-reaching laws that shape the future of the Internet, from laws on content moderation, to new and more demanding regulations on cybersecurity.</p>
    <div>
      <h3>Principled, Curious, Transparent</h3>
      <a href="#principled-curious-transparent">
        
      </a>
    </div>
    <p>At Cloudflare, we have three core company values: we are Principled, Curious, and Transparent. By principled, we mean thoughtful, consistent, and long-term oriented about what the right course of action is. By curious, we mean taking on big challenges and understanding the why and how behind things. Finally, by transparent, we mean being clear on why and how we decide to do things both internally and externally.</p><p>Our approach to public policy aims to integrate these three values into our engagement with stakeholders. We are thoughtful when choosing the right issues to prioritize, and are consistent once we have chosen to take a position on a particular topic. We are curious about the important policy conversations that governments and institutions around the world are having about the future of the Internet, and want to understand the different points of view in that debate. And we aim to be as transparent as possible when talking about our policy stances, by, for example, writing blogs, submitting comments to public consultations, or participating in conversations with policymakers and our peers in the industry. And, for instance with this blog, we also aim to be transparent about our actual advocacy efforts.</p>
    <div>
      <h3>What makes Cloudflare different?</h3>
      <a href="#what-makes-cloudflare-different">
        
      </a>
    </div>
    <p>With approximately 20 percent of websites using our service, including those who use our free tier, Cloudflare protects a wide variety of customers from cyberattack. Our business model relies on economies of scale, and customers choosing to add products and services to our entry-level cybersecurity protections. This means our policy perspective can be broad: we are advocating for a better Internet for our customers who are Fortune 1000 companies, as well as for individual developers with hobby blogs or small business websites. It also means that our perspective is distinct: we have a business model that is unique, and therefore a perspective that often isn’t represented by others.</p>
    <div>
      <h3>Strategy</h3>
      <a href="#strategy">
        
      </a>
    </div>
    <p>We are not naive: we do not believe that a growing company can command the same attention as some of the Internet giants, or has the capacity to engage on as many issues as those bigger companies. So how do we prioritize? What’s our rule of thumb on how and when we engage?</p><p>Our starting point is to think about the policy developments that have the largest impact on our own activities. Which issues could force us to change our model? Cause significant (financial) impact? Skew incentives for stronger cybersecurity? Then we do the exercise again, this time, thinking about whether our perspective on that policy issue is dramatically different from those of other companies in the industry. Is it important to us, but we share the same perspective as other cybersecurity, infrastructure, or cloud companies? We pass. For example, while changing corporate tax rates could have a significant financial impact on our business, we don’t exactly have a unique perspective on that. So that’s off the list. But privacy? There we think we have a distinct perspective, as a company that practices privacy by design, and supports and develops standards that help ensure privacy on the Internet. And crucially: we think privacy will be critical to the future of the Internet. So on public policy ideas related to privacy we engage. And then there is our unique vantage point, derived from our global network. This often gives us important insight and data, which we can use to educate policymakers on relevant issues.</p>
    <div>
      <h3>Our engagement channels</h3>
      <a href="#our-engagement-channels">
        
      </a>
    </div>
    <p>Our Public Policy team includes people who have worked in government, law firms and the tech industry before they joined Cloudflare. The informal networks, professional relationships, and expertise that they have built over the course of their careers are instrumental in ensuring that Cloudflare is involved in important policy conversations about the Internet. We do not have a Political Action Committee, and we do not make political contributions.</p><p>As mentioned, we try to focus on the issues where we can make a difference, where we have a unique interest, perspective and expertise. Nonetheless, there are many policies and regulations that could affect not only us at Cloudflare, but the entire Internet ecosystem. In order to track policy developments worldwide, and ensure that we are able to share information, we are members of a number of associations and coalitions.</p><p>Some of these advocacy groups represent a particular industry, such as software companies, or US based technology firms, and engage with lawmakers on a wide variety of relevant policy issues for their particular sector. Other groups, in contrast, focus their advocacy on a more specific policy issue.</p><p>In addition to formal trade association memberships, we will occasionally join coalitions of companies or civil society organizations assembled for particular advocacy purposes. For example, we periodically engage with the Stronger Internet coalition, to share information about policies around encryption, privacy, and free expression around the world.</p><p>It almost goes without saying that, given our commitment to transparency as a company and entirely in line with our own ethics code and legal compliance, we fully comply with all relevant rules around advocacy in jurisdictions across the world. You can also find us in transparency registers of governmental entities, where these exist. 
Because we want to be transparent about how we advocate for a better Internet, today we have published an overview of the organizations we work with on our <a href="https://www.cloudflare.com/trade-association-memberships/">website</a>.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">5cfCFupYkEzqzs8LQyOYbg</guid>
            <dc:creator>Christiaan Smits</dc:creator>
            <dc:creator>Zaid Zaid</dc:creator>
            <dc:creator>Carly Ramsey</dc:creator>
        </item>
        <item>
            <title><![CDATA[The unintended consequences of blocking IP addresses]]></title>
            <link>https://blog.cloudflare.com/consequences-of-ip-blocking/</link>
            <pubDate>Fri, 16 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ A discussion about IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online. ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XRs4hep0bF2DPlSc0NJPe/9c4f7666cc9570882d0b18fea2823124/image1-53.png" />
            
            </figure><p>In late August 2022, Cloudflare’s customer support team began to receive complaints about sites on our network being down in Austria. Our team immediately went into action to try to identify the source of what looked from the outside like a partial Internet outage in Austria. We quickly realized that it was an issue with local Austrian Internet Service Providers.</p><p>But the service disruption wasn’t the result of a technical problem. As we later learned from <a href="https://www.derstandard.de/story/2000138619757/ueberzogene-netzsperre-sorgt-fuer-probleme-im-oesterreichischen-internet">media reports</a>, what we were seeing was the result of a court order. Without any notice to Cloudflare, an Austrian court had ordered Austrian Internet Service Providers (ISPs) to block 11 of Cloudflare’s IP addresses.</p><p>In an attempt to block 14 websites that copyright holders argued were violating copyright, the court-ordered IP block rendered thousands of websites inaccessible to ordinary Internet users in Austria over a two-day period. What did the thousands of other sites do wrong? Nothing. They were a temporary casualty of the failure to build legal remedies and systems that reflect the Internet’s actual architecture.</p><p>Today, we are going to dive into a discussion of IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online.</p>
    <div>
      <h2>Collateral effects, large and small</h2>
      <a href="#collateral-effects-large-and-small">
        
      </a>
    </div>
    <p>The craziest thing is that this type of blocking happens on a regular basis, all around the world. But unless that blocking happens at the scale of what happened in Austria, or someone decides to highlight it, it is typically invisible to the outside world. Even Cloudflare, with deep technical expertise and understanding about how blocking works, can’t routinely see when an IP address is blocked.</p><p>For Internet users, it’s even more opaque. They generally don’t know why they can’t connect to a particular website, where the connection problem is coming from, or how to address it. They simply know they cannot access the site they were trying to visit. And that can make it challenging to document when sites have become inaccessible because of IP address blocking.</p><p>Blocking practices are also widespread. In their Freedom on the Net report, Freedom House recently <a href="https://freedomhouse.org/report/freedom-net/2022/key-internet-controls">reported</a> that 40 out of the 70 countries that they examined - which vary from countries like Russia, Iran and Egypt to Western democracies like the United Kingdom and Germany - did some form of website blocking. Although the report doesn’t delve into exactly how those countries block, many of them use forms of IP blocking, with the same kind of potential effects for a partial Internet shutdown that we saw in Austria.</p>
The European Information Society Institute concluded that IP blocking led to “<i>collateral website blocking on a massive scale</i>” and noted that as of June 28, 2017, “6,522,629 Internet resources had been blocked in Russia, of which 6,335,850 – or 97% – had been blocked collaterally, that is to say, without legal justification.”</p><p>In the UK, overbroad blocking prompted the non-profit Open Rights Group to create the website <a href="https://www.blocked.org.uk/">Blocked.org.uk</a>. The website has a tool enabling users and site owners to report on overblocking and request that ISPs remove blocks. The group also has hundreds of individual stories about the effect of blocking on those whose websites were inappropriately blocked, from charities to small business owners. Although it’s not always clear what blocking methods are being used, the fact that the site is necessary at all conveys the amount of overblocking. Imagine a dressmaker, watchmaker or car dealer looking to advertise their services and potentially gain new customers with their website. That doesn’t work if local users can’t access the site.</p><p>One reaction might be, “Well, just make sure there are no restricted sites sharing an address with unrestricted sites.” But as we’ll discuss in more detail, this ignores the large difference between the number of possible <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain names</a> and the number of available IP addresses, and runs counter to the very technical specifications that empower the Internet. Moreover, the definitions of restricted and unrestricted differ across nations, communities, and organizations. Even if it were possible to know all the restrictions, the designs of the protocols -- of the Internet, itself -- mean that it is simply infeasible, if not impossible, to satisfy every agency’s constraints.</p>
    <div>
      <h2>Legal and human rights concerns</h2>
      <a href="#legal-and-human-rights-concerns">
        
      </a>
    </div>
    <p>Overblocking websites is not only a problem for users; it has legal implications. Because of the effect it can have on ordinary citizens looking to exercise their rights online, government entities (both courts and regulatory bodies) have a legal obligation to make sure that their orders are necessary and proportionate, and don’t unnecessarily affect those who are not contributing to the harm.</p><p>It would be hard to imagine, for example, that a court in response to alleged wrongdoing would blindly issue a search warrant or an order based solely on a street address without caring if that address was for a single family home, a six-unit condo building, or a high rise with hundreds of separate units. But those sorts of practices with IP addresses appear to be rampant.</p><p>In 2020, the European Court of Human Rights (ECHR) - the court overseeing the implementation of the Council of Europe’s European Convention on Human Rights - considered a case involving a website that was blocked in Russia not because it had been targeted by the Russian government, but because it shared an IP address with a blocked website. The website owner brought suit over the block. The ECHR concluded that the indiscriminate blocking was impermissible, ruling that the block on the lawful content of the site “<i>amounts to arbitrary interference with the rights of owners of such websites</i>.” In other words, the ECHR ruled that it was improper for a government to issue orders that resulted in the blocking of sites that were not targeted.</p>
    <div>
      <h2>Using Internet infrastructure to address content challenges</h2>
      <a href="#using-internet-infrastructure-to-address-content-challenges">
        
      </a>
    </div>
    <p>Ordinary Internet users don’t think a lot about how the content they are trying to access online is delivered to them. They assume that when they type a domain name into their browser, the content will automatically pop up. And if it doesn’t, they tend to assume the website itself is having problems unless their entire Internet connection seems to be broken. But those basic assumptions ignore the reality that connections to a website are often used to limit access to content online.</p><p>Why do countries block connections to websites? Maybe they want to prevent their own citizens from accessing what they believe to be illegal content - like online gambling or explicit material - that is permissible elsewhere in the world. Maybe they want to prevent the viewing of a foreign news source that they believe to be primarily disinformation. Or maybe they want to support copyright holders seeking to block access to a website to limit viewing of content that they believe infringes their intellectual property.</p><p>To be clear, <b>blocking access is not the same thing as removing content from the Internet</b>. There are a variety of legal obligations and authorities designed to permit actual removal of illegal content. Indeed, the legal expectation in many countries is that blocking is a matter of last resort, after attempts have been made to remove content at the source.</p><p>Blocking just prevents certain viewers - those whose Internet access depends on the ISP that is doing the blocking - from being able to access websites. The site itself continues to exist online and is accessible by everyone else. But when the content originates from a different place and can’t be easily removed, a country may see blocking as its best or only approach.</p><p>We recognize the concerns that sometimes drive countries to implement blocking. 
But fundamentally, we believe it’s important for users to know when the websites they are trying to access have been blocked, and, to the extent possible, who has blocked them from view and why. And it’s critical that any restrictions on content should be as limited as possible to address the harm, to avoid infringing on the rights of others.</p><p>Brute force IP address blocking doesn’t allow for those things. It’s fully opaque to Internet users. The practice has unintended, unavoidable consequences on other content. And the very fabric of the Internet means that there is no good way to identify what other websites might be affected either before or during an IP block.</p><p>To understand what happened in Austria and what happens in many other countries around the world that seek to block content with the bluntness of IP addresses, we have to understand what is going on behind the scenes. That means diving into some technical details.</p>
    <div>
      <h2>Identity is attached to names, never addresses</h2>
      <a href="#identity-is-attached-to-names-never-addresses">
        
      </a>
    </div>
    <p>Before we even get started describing the technical realities of blocking, it’s important to stress that the first and best option to deal with content is at the source. A website owner or hosting provider has the option of removing content at a granular level, without having to take down an entire website. On the more technical side, a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain name registrar</a> or registry can potentially withdraw a domain name, and therefore a website, from the Internet altogether.</p><p>But how do you block access to a website, if for whatever reason the content owner or content source is unable or unwilling to remove it from the Internet?  There are only three possible control points.</p><p>The first is via the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a>, which translates domain names into IP addresses so that the site can be found. Instead of returning a valid IP address for a domain name, the DNS resolver could lie and respond with a code, NXDOMAIN, meaning that “there is no such name.” A better approach would be to use one of the honest error numbers <a href="https://datatracker.ietf.org/doc/rfc8914/">standardized in 2020</a>, including error 15 for blocked, error 16 for censored, 17 for filtered, or 18 for prohibited, although these are not widely used currently.</p><p>Interestingly, the precision and effectiveness of DNS as a control point depends on whether the DNS resolver is private or public. Private or ‘internal’ DNS resolvers are operated by ISPs and enterprise environments for their own known clients, which means that operators can be precise in applying content restrictions. By contrast, that level of precision is unavailable to open or public resolvers, not least because routing and addressing is global and ever-changing on the Internet map, and in stark contrast to addresses and routes on a fixed postal or street map. 
For example, private DNS resolvers may be able to block access to websites within specified geographic regions with at least some level of accuracy in a way that public DNS resolvers cannot, which becomes profoundly important given the disparate (and inconsistent) blocking regimes around the world.</p><p>The second approach is to block individual connection requests to a restricted domain name. When a user or client wants to visit a website, a connection is initiated from the client to a server <i>name</i>, i.e. the domain name. If a network or on-path device is able to observe the server name, then the connection can be terminated. Unlike DNS, there is no mechanism to communicate to the user that access to the server name was blocked, or why.</p><p>The third approach is to block access to an IP address where the domain name can be found. This is a bit like blocking the delivery of all mail to a physical address. Consider, for example, if that address is a skyscraper with its many unrelated and independent occupants. Halting delivery of mail to the address of the skyscraper causes collateral damage by invariably affecting all parties at that address. IP addresses work the same way.</p><p>Notably, the IP address is the only one of the three options that has no attachment to the domain name. The website domain name is not required for routing and delivery of data packets; in fact it is fully ignored. A website can be available on any IP address, or even on many IP addresses, simultaneously. And the set of IP addresses that a website is on can change at any time. 
The set of IP addresses cannot <i>definitively</i> be known by querying DNS, which has been able to return any valid address at any time for any reason, since <a href="https://datatracker.ietf.org/doc/rfc1794/">1995</a>.</p><p>The idea that an address is representative of an identity is anathema to the Internet’s design, because the decoupling of address from name is deeply embedded in the Internet standards and protocols, as is explained next.</p>
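    <p>As an aside, the “honest” DNS error codes discussed earlier are simple enough to sketch. The following illustrative Python (the helper name is ours, not from any resolver library) maps the RFC 8914 blocking-related codes to their meanings:</p>

```python
# Extended DNS Error (EDE) codes from RFC 8914 that signal deliberate
# blocking -- the "honest" alternatives to a misleading NXDOMAIN answer.
BLOCKING_EDE_CODES = {
    15: "Blocked",     # blocked by the server operator's own policy
    16: "Censored",    # blocked due to a requirement imposed by an outside entity
    17: "Filtered",    # blocked at the request of the client (e.g. opt-in filtering)
    18: "Prohibited",  # the client is not authorized to use this server
}

def explain_dns_error(ede_code):
    """Translate an EDE code into a human-readable blocking reason."""
    return BLOCKING_EDE_CODES.get(ede_code, "not a blocking-related code")

print(explain_dns_error(16))  # Censored
```

    <p>A resolver that attaches one of these codes to its response tells the user both that a block occurred and why, which NXDOMAIN cannot.</p>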
    <div>
      <h2>The Internet is a set of protocols, not a policy or perspective</h2>
      <a href="#the-internet-is-a-set-of-protocols-not-a-policy-or-perspective">
        
      </a>
    </div>
    <p>Many people still incorrectly assume that an IP address represents a single website. We’ve previously <a href="/addressing-agility/">stated</a> that the association between names and addresses is understandable given that the earliest connected components of the Internet appeared as one computer, one interface, one address, and one name. This one-to-one association was an artifact of the ecosystem in which the Internet Protocol was deployed, and satisfied the needs of the time.</p><p>Despite the one-to-one naming practice of the early Internet, it has always been possible to assign more than one name to a server (or ‘host’). For example, a server was (and is still) often configured with names to reflect its service offerings such as <code>mail.example.com</code> and <code>www.example.com</code>, but these shared a base domain name.  There were few reasons to have completely different domain names until the need to colocate completely different websites onto a single server. That practice was made easier in 1997 by the <b>Host</b> header in <a href="https://datatracker.ietf.org/doc/rfc2068/">HTTP/1.1</a>, a feature preserved by the SNI field in a <a href="https://datatracker.ietf.org/doc/rfc3546/">TLS extension</a> in 2003.</p><p>Throughout these changes, the Internet Protocol and, separately, the DNS protocol, have not only kept pace, but have remained fundamentally unchanged. They are the very reason that the Internet has been able to scale and evolve, because they are about addresses, reachability, and arbitrary name to IP address relationships.</p><p>The designs of IP and DNS are also entirely independent, which only reinforces that names are separate from addresses. A closer inspection of the protocols’ design elements illuminates the misperceptions of policies that lead to today's common practice of <a href="https://www.cloudflare.com/learning/access-management/what-is-access-control/">controlling access</a> to content by blocking IP addresses.</p>
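    <p>The co-location that the <b>Host</b> header enables is easy to see on the wire. This minimal sketch (both hostnames are placeholders) builds two HTTP/1.1 requests that could be sent to the very same server IP address, differing only in the Host header:</p>

```python
# Two HTTP/1.1 requests bound for the same server IP address. Only the
# Host header tells the server which co-located website is wanted.
# (Both hostnames are placeholders, not real sites.)
def build_request(host, path="/"):
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

req_a = build_request("www.example.com")
req_b = build_request("shop.example.net")
# The request lines are identical; only the Host headers differ --
# which is exactly what lets one IP address serve many websites.
```

    <p>SNI plays the equivalent role during the TLS handshake, naming the desired site before any HTTP bytes are exchanged.</p>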
    <div>
      <h3>By design, IP is for reachability and nothing else</h3>
      <a href="#by-design-ip-is-for-reachability-and-nothing-else">
        
      </a>
    </div>
    <p>Much like large public civil engineering projects rely on building codes and best practice, the Internet is built using a set of <i>open</i> standards and specifications informed by experience and agreed by international consensus. The Internet standards that connect hardware and applications are published by the Internet Engineering Task Force (<a href="https://www.ietf.org/">IETF</a>) in the form of “Requests for Comment” or <a href="https://www.ietf.org/standards/rfcs/">RFCs</a> -- so named not to suggest incompleteness, but to reflect that standards must be able to evolve with knowledge and experience. The IETF and its RFCs are cemented in the very fabric of communications; the first, RFC 1, was published in 1969. The Internet Protocol (IP) specification reached <a href="https://datatracker.ietf.org/doc/rfc791/">RFC status</a> in 1981.</p><p>Alongside the standards organizations, the Internet’s success has been helped by a core idea known as the end-to-end (e2e) principle, <a href="https://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf">codified</a> also in 1981, based on years of trial and error <a href="https://en.wikipedia.org/wiki/End-to-end_principle">experience</a>. The end-to-end principle is a powerful abstraction that, despite taking many forms, manifests a core notion of the Internet Protocol specification: the network’s only responsibility is to establish reachability, and every other possible feature has a cost or a risk.</p><p>The idea of “reachability” in the Internet Protocol is also enshrined in the design of IP addresses themselves. Looking at the Internet Protocol specification, <a href="https://www.rfc-editor.org/rfc/rfc791">RFC 791</a>, the following excerpt from Section 2.3 is explicit about IP addresses having no association with names, interfaces, or anything else:</p>
            <pre><code>Addressing

    A distinction is made between names, addresses, and routes [4].   A
    name indicates what we seek.  An address indicates where it is.  A
    route indicates how to get there.  The internet protocol deals
    primarily with addresses.  It is the task of higher level (i.e.,
    host-to-host or application) protocols to make the mapping from
    names to addresses.   The internet module maps internet addresses to
    local net addresses.  It is the task of lower level (i.e., local net
    or gateways) procedures to make the mapping from local net addresses
    to routes.
                            [ RFC 791, 1981 ]</code></pre>
            <p>Just like postal addresses for skyscrapers in the physical world, IP addresses are no more than street addresses written on a piece of paper. And just like a street address on paper, one can never be confident about the entities or organizations that exist behind an IP address. In a network like Cloudflare’s, any single IP address represents <a href="/cloudflare-architecture-and-how-bpf-eats-the-world/">thousands of servers</a>, and can host even more websites and services -- in some cases numbering into the <a href="/addressing-agility/">millions</a> -- expressly because the Internet Protocol is designed to enable it.</p><p>Here’s an interesting question: could we, or any content service provider, ensure that every IP address maps to one and only one name? The answer is an unequivocal <b>no</b>, and here, too, the reason is protocol design -- in this case, DNS.</p>
    <div>
      <h3>The number of names in DNS always exceeds the available addresses</h3>
      <a href="#the-number-of-names-in-dns-always-exceeds-the-available-addresses">
        
      </a>
    </div>
    <p>A one-to-one relationship between names and addresses is impossible given the Internet specifications, for the same reasons that it is infeasible in the physical world. Ignore for a moment that people and organizations can change addresses. Fundamentally, the number of people and organizations on the planet exceeds the number of postal addresses. We not only want, but <i>need</i>, the Internet to accommodate more names than addresses.</p><p>The difference in magnitude between names and addresses is also codified in the specifications. IPv4 addresses are 32 bits, and IPv6 addresses are 128 bits. A domain name that can be queried by DNS may be as long as 253 octets, or 2,024 bits (from Section 2.3.4 in <a href="https://datatracker.ietf.org/doc/rfc1035/">RFC 1035</a>, published 1987). The table below helps to put those differences into perspective:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6owhjh505J8A9SrL5gznHG/499625dd20a849da4d72727fedd2da8d/Screenshot-2022-12-16-at-13.02.04.png" />
            
            </figure><p>On November 15, 2022, the United Nations announced the population of the Earth surpassed eight billion people. Intuitively, we know that there cannot be anywhere near as many postal addresses. The number of possible names, on the planet and similarly on the Internet, does and must exceed the number of available addresses.</p>
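            <p>The magnitudes above can be checked with a few lines of arithmetic (Python shown for illustration):</p>

```python
# Address and name space sizes, straight from the protocol specifications.
ipv4_addresses = 2 ** 32    # 32-bit addresses (RFC 791)
ipv6_addresses = 2 ** 128   # 128-bit addresses
max_name_bits = 253 * 8     # a DNS name is up to 253 octets (RFC 1035)

print(ipv4_addresses)  # 4294967296
print(max_name_bits)   # 2024
# The space of possible names dwarfs even the IPv6 address space:
print(2 ** max_name_bits > ipv6_addresses)  # True
```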
    <div>
      <h2>The proof is in the <s>pudding</s> names!</h2>
      <a href="#the-proof-is-in-the-pudding-names">
        
      </a>
    </div>
    <p>Now that those two principles from the international standards are understood - that IP addresses and domain names serve distinct purposes, and that there is no one-to-one relationship between the two - an examination of a recent case of content blocking by IP address can show why the practice is problematic. Take, for example, the IP blocking incident in Austria in late August 2022. The goal was to restrict access to 14 target domains by blocking 11 IP addresses (source: RTR.Telekom. Post via the <a href="https://web.archive.org/web/20220828220559/http://netzsperre.liwest.at/">Internet Archive</a>) -- the mismatch between those two numbers should have been a warning flag that IP blocking might not have the desired effect.</p><p>Analogies and international standards may explain the reasons that IP blocking should be avoided, but we can see the scale of the problem by looking at Internet-scale data. To better understand and explain the severity of IP blocking, we decided to generate a global view of domain names and IP addresses (thanks are due to a PhD research intern, Sudheesh Singanamalla, for the effort). In September 2022, we used the authoritative zone files for the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">top-level domains (TLDs)</a> .com, .net, .info, and <a href="https://www.cloudflare.com/application-services/products/registrar/buy-org-domains/">.org</a>, together with top-1M website lists, to find a total of 255,315,270 unique names. We then queried DNS from each of five regions and recorded the set of IP addresses returned. The table below summarizes our findings:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5YeOt63WPvv2WY1hYcyNWt/15fad8223e223da6ea17157ad335294f/image3-23.png" />
            
            </figure><p>The table above makes clear that it takes no more than 10.7 million addresses to reach 255,315,270 names from any region on the planet, and the total set of IP addresses for those names from everywhere is about 16 million -- the ratio of names to IP addresses is nearly 24x in Europe and 16x globally.</p><p>There is one more worthwhile detail about the numbers above: the IP addresses are the combined totals of both IPv4 and IPv6 addresses, meaning that far fewer addresses are needed to reach all 255M websites.</p><p>We’ve also inspected the data a few different ways to find some interesting observations. For example, the figure below shows the cumulative distribution (CDF) of the proportion of websites that can be visited with each additional IP address. On the y-axis is the proportion of websites that can be reached given some number of IP addresses. On the x-axis, the 16M IP addresses are ranked from the most domains on the left to the least on the right. Note that any IP address in this set is a response from DNS and so must have at least one domain name, while the most heavily shared addresses in the set host domains numbering in the 8-digit millions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7r9Dj9LCN9K2w3vWyqRTKR/efef1916d727b42ab0baa6ff772b6ada/image2-37.png" />
            
            </figure><p>By looking at the CDF there are a few eye-watering observations:</p><ul><li><p>Fewer than 10 IP addresses are needed to reach 20%, or approximately 51 million, of the domains in the set;</p></li><li><p>100 IPs are enough to reach almost 50% of domains;</p></li><li><p>1000 IPs are enough to reach 60% of domains;</p></li><li><p>10,000 IPs are enough to reach 80%, or about 204 million, of the domains.</p></li></ul><p>In fact, fewer than half of the 16 million addresses in the dataset -- 7.1M (43.7%) -- had exactly one name. On this ‘one’ point we must be additionally clear: we are unable to ascertain whether there was only one name and no others on those addresses, because there are many more domain names than those contained in .com, .org, .info, and .net -- there might very well be other names on those addresses.</p><p>In addition to serving a number of domains, an IP address may also change over time for any of those domains. Changing IP addresses periodically can help improve the security, performance, and reliability of websites. One common example in use by many operators is load balancing. This means DNS queries may return different IP addresses over time, or in different places, for the same websites. This is a further, and separate, reason why blocking based on IP addresses will not serve its intended purpose over time.</p><p>Ultimately, <b>there is no reliable way to know the number of domains on an IP address</b> without inspecting all names in the DNS, from every location on the planet, at every moment in time -- an entirely infeasible proposition.</p><p>Any action on an IP address must, by the very definitions of the protocols that rule and empower the Internet, be expected to have collateral effects.</p>
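            <p>For readers who want to reproduce the shape of such a CDF, here is a toy sketch in Python. It assumes, for simplicity, that every domain is counted on a single IP address -- unlike the real dataset, where a domain can appear on many addresses -- and the data is made up:</p>

```python
def reach_cdf(domains_per_ip):
    """Toy version of the CDF above: the cumulative fraction of domains
    reachable as IP addresses are added, most-shared address first.
    (Assumes each domain sits on exactly one IP, unlike the real data.)"""
    counts = sorted(domains_per_ip.values(), reverse=True)
    total = sum(counts)
    cdf, running = [], 0
    for count in counts:
        running += count
        cdf.append(running / total)
    return cdf

# Hypothetical data: one heavily shared address and two lightly shared ones.
toy = {"203.0.113.1": 1_000_000, "203.0.113.2": 50, "203.0.113.3": 1}
print(reach_cdf(toy))  # the first IP alone already covers >99.99% of domains
```

            <p>Even this toy example shows why blocking the single most-shared address would be catastrophic: it carries nearly every domain in the set.</p>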
    <div>
      <h2>Lack of transparency with IP blocking</h2>
      <a href="#lack-of-transparency-with-ip-blocking">
        
      </a>
    </div>
    <p>So if we have to expect that the blocking of an IP address will have collateral effects, and it’s generally agreed that it’s inappropriate or even legally impermissible to overblock by blocking IP addresses that have multiple domains on them, why does it still happen? That’s hard to know for sure, so we can only speculate. Sometimes it reflects a lack of technical understanding about the possible effects, particularly from entities like judges who are not technologists. Sometimes governments just ignore the collateral damage - as they do with Internet shutdowns - because they see the blocking as in their interest. And when there is collateral damage, it’s not usually obvious to the outside world, so there can be very little external pressure to have it addressed.</p><p>It’s worth stressing that point. When an IP is blocked, a user just sees a failed connection. They don’t know why the connection failed, or who caused it to fail. On the other side, the server acting on behalf of the website doesn’t even know it’s been blocked until it starts getting complaints about the fact that it is unavailable. There is virtually no transparency or accountability for the overblocking. And it can be challenging, if not impossible, for a website owner to challenge a block or seek redress for being inappropriately blocked.</p><p>Some governments, including <a href="https://www.rtr.at/TKP/was_wir_tun/telekommunikation/weitere-regulierungsthemen/netzneutralitaet/nn_blockings.de.html">Austria</a>, do publish active block lists, which is an important step for transparency. But for all the reasons we’ve discussed, publishing an IP address does not reveal all the sites that may have been blocked unintentionally. And it doesn’t give those affected a means to challenge the overblocking. 
Again, returning to the physical world, it’s hard to imagine a court order on a skyscraper that wouldn’t be posted on the door, yet we often seem to skip such due process and notice requirements in virtual space.</p><p>We think talking about the problematic consequences of IP blocking is more important than ever as an increasing number of countries push to block content online. Unfortunately, ISPs often use IP blocks to implement those requirements. It may be that some ISPs are newer or less well resourced than their larger counterparts, but larger ISPs engage in the practice too, understandably so, because IP blocking takes the least effort and is readily available in most equipment.</p><p>And as more and more domains are included on the same number of IP addresses, the problem is only going to get worse.</p>
    <div>
      <h2>Next steps</h2>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>So what can we do?</p><p>We believe the first step is to improve transparency around the use of IP blocking. Although we’re not aware of any comprehensive way to document the collateral damage caused by IP blocking, we believe there are steps we can take to expand awareness of the practice. We are committed to working on new initiatives that highlight those insights, as we’ve done with the Cloudflare Radar Outage Center.</p><p>We also recognize that this is a whole-Internet problem, and therefore has to be part of a broader effort. The significant likelihood that blocking by IP address will restrict access to a whole series of unrelated (and untargeted) domains should make it a non-starter for everyone. That’s why we’re engaging with civil society partners and like-minded companies, asking them to lend their voices in challenging the use of IP blocking as a way of addressing content problems, and to point out collateral damage when they see it.</p><p>To be clear, to address the challenges of illegal content online, countries need legal mechanisms that enable the removal or restriction of content in a rights-respecting way. We believe that addressing the content at the source is almost always the best and the required first step. Laws like the EU’s new Digital Services Act or the Digital Millennium Copyright Act provide tools that can be used to address illegal content at the source, while respecting important due process principles. Governments should focus on building and applying legal mechanisms in ways that least affect other people’s rights, consistent with human rights expectations.</p><p>Very simply, these needs cannot be met by blocking IP addresses.</p><p>We’ll continue to look for new ways to talk about network activity and disruption, particularly when it results in unnecessary limitations on access. Check out <a href="https://radar.cloudflare.com/">Cloudflare Radar</a> for more insights about what we see online.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">5W7SrYRBnpDHBDnLNBVqBI</guid>
            <dc:creator>Alissa Starzak</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Helping build a safer Internet by measuring BGP RPKI Route Origin Validation]]></title>
            <link>https://blog.cloudflare.com/rpki-updates-data/</link>
            <pubDate>Fri, 16 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Is BGP safe yet? If the question needs asking, then it isn't. But how far the Internet is from this goal is what we set out to answer. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VWVhVnz5Xbv2u1jm48KeJ/dd52aaf9426c64b5d2b68a0b7651cb93/image7-7.png" />
            
            </figure><p>The <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">Border Gateway Protocol</a> (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn't originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the <a href="https://en.wikipedia.org/wiki/Resource_Public_Key_Infrastructure">Resource Public Key Infrastructure</a> (RPKI) framework, but can we declare it to be safe yet?</p><p>If the question needs asking, you might suspect we can't. There is a shortage of reliable data on how much of the Internet is protected from preventable routing problems. Today, we’re releasing a new method to measure exactly that: what percentage of Internet users are protected by their Internet Service Provider from these issues. We find that there is a long way to go before the Internet is protected from routing problems, though it varies dramatically by country.</p>
    <div>
      <h3>Why RPKI is necessary to secure Internet routing</h3>
      <a href="#why-rpki-is-necessary-to-secure-internet-routing">
        
      </a>
    </div>
    <p>The Internet is a network of independently-managed networks, called <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">Autonomous Systems (ASes)</a>. To achieve global reachability, ASes interconnect with each other and determine the feasible paths to a given destination IP address by exchanging routing information using BGP. BGP enables routers with only local network visibility to construct end-to-end paths based on the arbitrary preferences of each administrative entity that operates that equipment. Typically, Internet traffic between a user and a destination traverses multiple AS networks using paths constructed by BGP routers.</p><p>BGP, however, lacks built-in security mechanisms to protect the integrity of the exchanged routing information and to provide authentication and authorization of the advertised IP address space. Because of this, AS operators must implicitly trust that the routing information exchanged through BGP is accurate. As a result, the Internet is vulnerable to the injection of bogus routing information, which cannot be mitigated by security measures at the client or server level of the network.</p><p>An adversary with access to a BGP router can inject fraudulent routes into the routing system, which can be used to execute an array of attacks, including:</p><ul><li><p>Denial-of-Service (DoS) through traffic blackholing or redirection,</p></li><li><p>Impersonation attacks to eavesdrop on communications,</p></li><li><p>Machine-in-the-Middle exploits to modify the exchanged data, and subvert reputation-based filtering systems.</p></li></ul><p>Additionally, local misconfigurations and fat-finger errors can be propagated well beyond the source of the error and cause major disruption across the Internet.</p><p>Such an incident happened on <a href="/how-verizon-and-a-bgp-optimizer-knocked-large-parts-of-the-internet-offline-today/">June 24, 2019</a>. 
Millions of users were unable to access Cloudflare address space when a regional ISP in Pennsylvania accidentally advertised routes to Cloudflare through their capacity-limited network. This was effectively the Internet equivalent of routing an entire freeway through a neighborhood street.</p><p>Traffic misdirections like these, either unintentional or intentional, are not uncommon. The Internet Society’s <a href="https://www.manrs.org/">MANRS</a> (Mutually Agreed Norms for Routing Security) initiative estimated that in 2020 alone there were <a href="https://www.manrs.org/2021/03/a-regional-look-into-bgp-incidents-in-2020/">over 3,000 route leaks and hijacks</a>, and new occurrences can be <a href="/route-leak-detection-with-cloudflare-radar/">observed every day through Cloudflare Radar.</a></p><p>The most prominent proposals to secure BGP routing, standardized by the <a href="https://www.ietf.org/about/introduction/">IETF</a> focus on validating the origin of the advertised routes using <a href="https://en.wikipedia.org/wiki/Resource_Public_Key_Infrastructure">Resource Public Key Infrastructure</a> (RPKI) and verifying the integrity of the paths with <a href="https://en.wikipedia.org/wiki/BGPsec">BGPsec</a>. Specifically, RPKI (defined in <a href="https://www.rfc-editor.org/rfc/rfc7115.html">RFC 7115</a>) relies on a <a href="https://en.wikipedia.org/wiki/Public_key_infrastructure">Public Key Infrastructure</a> to validate that an AS advertising a route to a destination (an IP address space) is the legitimate owner of those IP addresses.</p><p>RPKI has been defined for a long time but lacks adoption. It requires network operators to cryptographically sign their prefixes, and routing networks to perform an RPKI Route Origin Validation (ROV) on their routers. This is a two-step operation that requires coordination and participation from many actors to be effective.</p>
    <div>
      <h3>The two phases of RPKI adoption: signing origins and validating origins</h3>
      <a href="#the-two-phases-of-rpki-adoption-signing-origins-and-validating-origins">
        
      </a>
    </div>
    <p>RPKI has two phases of deployment: first, an AS that wants to protect its own IP prefixes can cryptographically sign Route Origin Authorization (ROA) records, thereby attesting that it is the legitimate origin of that signed IP space. Second, an AS can avoid selecting invalid routes by performing Route Origin Validation (ROV, defined in <a href="https://www.rfc-editor.org/rfc/rfc6483">RFC 6483</a>).</p><p>With ROV, a BGP route received from a neighbor is validated against the available RPKI records. A route that is valid or missing from RPKI is selected, while a route with RPKI records found to be invalid is typically rejected, thus preventing the use and propagation of hijacked and misconfigured routes.</p><p>One issue with RPKI is that implementing ROA is meaningful only if other ASes implement ROV, and vice versa. Therefore, securing BGP routing requires a united effort, and a lack of broader adoption disincentivizes ASes from committing the resources to validate their own routes. Conversely, increasing RPKI adoption can lead to network effects and accelerate RPKI deployment. Projects like MANRS and Cloudflare’s <a href="https://isbgpsafeyet.com/">isbgpsafeyet.com</a> are promoting good Internet citizenship among network operators, and make the benefits of RPKI deployment known to the Internet. You can check whether your own ISP is being a good Internet citizen by testing it on <a href="https://isbgpsafeyet.com/">isbgpsafeyet.com</a>.</p><p>Measuring the extent to which both ROA (signing of addresses by the network that controls them) and ROV (filtering of invalid routes by ISPs) have been implemented is important for evaluating the impact of these initiatives, developing situational awareness, and predicting the impact of future misconfigurations or attacks.</p><p>Measuring ROAs is straightforward since ROA data is <a href="https://ftp.ripe.net/rpki/">readily available</a> from RPKI repositories. 
Querying RPKI repositories for publicly routed IP prefixes (e.g. prefixes visible in the <a href="http://www.routeviews.org/">RouteViews</a> and <a href="https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris">RIPE RIS</a> routing tables) allows us to estimate the percentage of addresses covered by ROA objects. Currently, there are 393,344 IPv4 and 86,306 IPv6 ROAs in the global RPKI system, covering about 40% of the globally routed prefix-AS origin pairs<sup>1</sup>.</p><p>Measuring ROV, however, is significantly more challenging given it is configured inside the BGP routers of each AS, not accessible by anyone other than each router’s administrator.</p>
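    <p>The origin-validation outcome described above -- valid, invalid, or not found in RPKI -- can be sketched in a few lines. This is only an illustrative toy, with made-up prefixes and AS numbers, not a production validator:</p>

```python
from ipaddress import ip_network

def rov_state(prefix, origin_asn, roas):
    """Classify a BGP announcement against a set of ROAs.

    roas: list of (roa_prefix, max_length, asn) tuples.
    Returns 'valid', 'invalid', or 'unknown' (no covering ROA).
    """
    ann = ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa = ip_network(roa_prefix)
        # A ROA covers the announcement if the announced prefix is equal
        # to, or more specific than, the ROA prefix.
        if ann.version == roa.version and ann.subnet_of(roa):
            covered = True
            if asn == origin_asn and ann.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "unknown"

# Hypothetical ROA: 192.0.2.0/24, max length /24, authorized origin AS64500.
roas = [("192.0.2.0/24", 24, 64500)]
print(rov_state("192.0.2.0/24", 64500, roas))     # valid
print(rov_state("192.0.2.0/25", 64500, roas))     # invalid: more specific than max length
print(rov_state("192.0.2.0/24", 64511, roas))     # invalid: wrong origin AS
print(rov_state("198.51.100.0/24", 64500, roas))  # unknown: no covering ROA
```

    <p>An ROV-deploying router rejects the 'invalid' cases while still accepting 'unknown' routes, which is why signing ROAs and validating them are complementary steps.</p>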
    <div>
      <h3>Measuring ROV deployment</h3>
      <a href="#measuring-rov-deployment">
        
      </a>
    </div>
    <p>Although we do not have direct access to the configuration of everyone’s BGP routers, it is possible to infer the use of ROV by comparing the reachability of RPKI-valid and RPKI-invalid prefixes from measurement points within an AS<sup>2</sup>.</p><p>Consider the following toy topology as an example, where an RPKI-invalid origin is advertised through AS0 to AS1 and AS2. If AS1 filters and rejects RPKI-invalid routes, a user behind AS1 would not be able to connect to that origin. By contrast, if AS2 does not reject RPKI invalids, a user behind AS2 would be able to connect to that origin.</p><p>While occasionally a user may be unable to access an origin due to transient network issues, if multiple users act as vantage points for a measurement system, we would be able to collect a large number of data points to infer which ASes deploy ROV.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ix5pgVzjgMlL7BugvGJDD/aff6d6eaf101da010a24fa8e7908b106/1-1.png" />
            
            </figure><p>If, in the figure above, AS0 filters invalid RPKI routes, then vantage points in both AS1 and AS2 would be unable to connect to the RPKI-invalid origin, making it hard to distinguish whether ROV is deployed at the ASes of our vantage points or in an AS along the path. One way to mitigate this limitation is to announce the RPKI-invalid origin from multiple locations of an anycast network, taking advantage of its direct interconnections to the measurement vantage points, as shown in the figure below. As a result, an AS that does not itself deploy ROV is less likely to observe the benefits of upstream ASes using ROV, and we would be able to accurately infer ROV deployment per AS<sup>3</sup>.</p><p><i>Note that it’s also important that the IP address of the RPKI-invalid origin should not be covered by a less specific prefix for which there is a valid or unknown RPKI route, otherwise even if an AS filters invalid RPKI routes its users would still be able to find a route to that IP.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HDnUbqxvRQ3DbhArqJsMg/a21bfe1ef026a4aa615ac21f759d3f3f/2-1.png" />
            
            </figure><p>The measurement technique described here is the one implemented by Cloudflare’s <a href="https://isbgpsafeyet.com">isbgpsafeyet.com</a> website, allowing end users to assess whether or not their ISPs have deployed BGP ROV.</p><p>The <a href="https://isbgpsafeyet.com/">isbgpsafeyet.com</a> website itself doesn't submit any data back to Cloudflare, but recently we started measuring whether end users’ browsers can successfully connect to invalid RPKI origins when ROV is present. We use the same mechanism as is used for <a href="/network-performance-update-developer-week/">global performance data</a><sup>4</sup>. In particular, every measurement session (an individual end user at some point in time) attempts a request to both valid.rpki.cloudflare.com, which should always succeed as it’s RPKI-valid, and invalid.rpki.cloudflare.com, which is RPKI-invalid and should fail when the user’s ISP uses ROV.</p><p>This allows us to have continuous and up-to-date measurements from hundreds of thousands of browsers on a daily basis, and develop a greater understanding of the state of ROV deployment.</p>
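            <p>The per-session inference boils down to a simple decision rule. In this illustrative sketch (the names and the data are ours, not Cloudflare’s production code), a session is conclusive only when the control probe to the RPKI-valid origin succeeds:</p>

```python
def classify_session(valid_ok, invalid_ok):
    """Infer ROV from a single session's two probe results.

    valid_ok:   did the request to the RPKI-valid origin succeed?
    invalid_ok: did the request to the RPKI-invalid origin succeed?
    """
    if not valid_ok:
        return "inconclusive"  # control probe failed: likely a network issue
    return "unprotected" if invalid_ok else "protected"

# Hypothetical sessions, each a (valid_ok, invalid_ok) pair.
sessions = [(True, False), (True, True), (False, False), (True, False)]
labels = [classify_session(v, i) for v, i in sessions]
print(labels.count("protected"), "of", len(labels), "sessions look protected")
# 2 of 4 sessions look protected
```

            <p>Aggregating many such sessions per AS is what lets transient failures wash out and genuine ROV deployment stand out.</p>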
    <div>
      <h3>The state of global ROV deployment</h3>
      <a href="#the-state-of-global-rov-deployment">
        
      </a>
    </div>
    <p>The figure below shows the raw number of ROV probe requests per hour during October 2022 to <i>valid.rpki.cloudflare.com</i> and <i>invalid.rpki.cloudflare.com</i>. In total, we observed 69.7 million successful probes from 41,531 ASNs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16AWpr0oNevsWJynpNN1v4/51885d284ed9360416c915a935f19d6d/3-1.png" />
            
            </figure><p>Based on <a href="https://labs.apnic.net/?p=526">APNIC's estimates</a> on the number of end users per ASN, our weighted<sup>5</sup> analysis covers 96.5% of the world's Internet population. As expected, the number of requests follows a diurnal pattern which reflects established user behavior in daily and weekly Internet activity<sup>6</sup>.</p><p>We can also see that the number of successful requests to <i>valid.rpki.cloudflare.com</i> (<b><i>gray line</i></b>) closely follows the number of sessions that issued at least one request (<b><i>blue line</i></b>), which works as a smoke test for the correctness of our measurements.</p><p>As we don't store the IP addresses that contribute measurements, we don’t have any way to count individual clients, and large spikes in the data may introduce unwanted bias. We account for that by detecting those instants and excluding them.</p>
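<p>The spike-exclusion step is not detailed in the post; a plausible sketch uses a robust outlier rule over the hourly request counts — here, median plus a multiple of the median absolute deviation (MAD). The function name and the threshold rule are hypothetical, not Cloudflare's actual criterion.</p>

```python
import statistics

def exclude_spikes(hourly_counts, k=5.0):
    """Drop hourly buckets whose count exceeds median + k * MAD.
    Hypothetical criterion; the post does not specify the exact rule used."""
    med = statistics.median(hourly_counts)
    mad = statistics.median(abs(c - med) for c in hourly_counts)
    threshold = med + k * max(mad, 1.0)  # guard against a zero MAD on flat series
    return [c for c in hourly_counts if c <= threshold]
```

<p>An hour with an anomalous burst of probes well above the typical level would be dropped before computing adoption rates, limiting the bias a single large source of requests could introduce when individual clients cannot be counted.</p>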
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rou45RM7Y0NdF2opTgGZE/0e708f6e801926147cd498a355751b21/4-1.png" />
            
            </figure><p>Overall, we estimate that out of the four billion Internet users, <b>only 261 million (6.5%) are protected by BGP Route Origin Validation</b>, but the true state of global ROV deployment is more subtle than this.</p><p>The following map shows the fraction of dropped RPKI-invalid requests from ASes with over 200 probes over the month of October. It depicts how far along each country is in adopting ROV but doesn’t necessarily represent the fraction of protected users in each country, as we will discover.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UzNnhgQaIpcQktYHwNwpf/b6a389c9ed94e49456c75aa0f8689264/5-1.png" />
            
            </figure><p>Sweden and Bolivia appear to be the countries with the highest level of adoption (over 80%), while only a few other countries have crossed the 50% mark (e.g. Finland, Denmark, Chad, Greece, the United States).</p><p>ROV adoption may be driven by a few ASes hosting large user populations, or by many ASes hosting small user populations. To understand such disparities, the map below plots the contrast between overall adoption in a country (as in the previous map) and median adoption over the individual ASes within that country. Countries with stronger reds have relatively few ASes deploying ROV with high impact, while countries with stronger blues have more ASes deploying ROV but with lower impact per AS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6m2lvwtrbDEzDObM6NfW5W/2371a8bdb5f7ed5ed4981103d4aad4c5/6-1.png" />
            
            </figure><p>In the Netherlands, Denmark, Switzerland, and the United States, adoption appears mostly driven by their larger ASes, while in Greece and Yemen it’s the smaller ones that are adopting ROV.</p><p>The following histogram summarizes the worldwide level of adoption for the 6,765 ASes covered by the previous two maps.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5zg7hScwuknrlGP0z76zvx/6fa811c1f9f92f50397ec64267b0af73/7.png" />
            
            </figure><p>Most ASes either don’t validate at all or have close to 100% adoption, which is what we’d intuitively expect. However, it's interesting to observe that small numbers of ASes appear all across the scale. ASes that drop only a fraction of RPKI-invalid requests may either implement ROV partially (on some, but not all, of their BGP routers), or appear to drop RPKI invalids due to ROV deployment by other ASes in their upstream paths.</p><p>To estimate the number of users protected by ROV, we only considered ASes with an observed adoption above <b>95%</b>, as an AS with an incomplete deployment still leaves its users vulnerable to route leaks from its BGP peers.</p><p>If we take the previous histogram and summarize by the number of users behind each AS, the green bar on the right corresponds to the <b>261 million</b> users currently protected by ROV according to the above criteria (686 ASes).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NAtYPcWbDWsiBiMfX56es/40a59d3253b0cba83c6e1bde2bbf83a6/8.png" />
            
            </figure><p>Looking back at the country adoption map, one would perhaps expect the number of protected users to be larger. But worldwide, ROV deployments are still mostly partial, missing the larger ASes, or both. This becomes even clearer when compared with the next map, which plots just the fraction of fully protected users.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kAM05eOwaPusgkvSIASoB/e86386a84c161919b3bdc9018145eb1c/9.png" />
            
            </figure><p>To wrap up our analysis, we look at two world economies chosen for their contrasting, almost symmetrical, stages of deployment: the United States and the European Union.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3DShkKyva4l7rm5qC7gQnP/2ba1802f7d305815450bc1b9df372abb/10.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Q0ipIYrllZ63MXSRYEh59/bc25c31c0de0b71c157f909e5eb8522f/11.png" />
            
            </figure><p>In the United States, 111 ASes with comprehensive ROV deployments protect 112 million Internet users. Conversely, more than twice as many ASes across the European Union have fully deployed ROV, yet they cover only half as many users. This is reasonably explained by end user ASes being more likely to operate within a single country than to span several.</p>
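<p>The "comprehensive deployment" criterion used throughout — counting an AS only when its observed RPKI-invalid drop rate exceeds 95%, then summing the estimated users behind the qualifying ASes — can be sketched as a simple aggregation. The data layout, field names, and sample figures below are illustrative, not the measured dataset.</p>

```python
def protected_users(per_asn, threshold=0.95):
    """per_asn maps ASN -> (invalid_dropped, invalid_total, est_users).
    Return (total protected users, list of qualifying ASNs)."""
    qualifying = [
        asn
        for asn, (dropped, total, _users) in per_asn.items()
        if total > 0 and dropped / total > threshold
    ]
    return sum(per_asn[asn][2] for asn in qualifying), qualifying

# Illustrative figures only (not the measured data):
sample = {
    64500: (990, 1000, 2_000_000),  # ~99% drop rate -> comprehensive ROV
    64501: (400, 1000, 5_000_000),  # partial deployment; users still exposed
    64502: (0, 1000, 1_000_000),    # no filtering observed
}
```

<p>Under this criterion, only AS64500 above would qualify: a partial deployment such as AS64501's still leaves its users exposed to leaks from its BGP peers, so its population is not counted.</p>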
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Probe requests were performed from end user browsers and very few measurements were collected from transit providers (which have few end users, if any). Also, paths between end user ASes and Cloudflare are often very short (a nice outcome of our extensive peering) and don't traverse upper-tier networks that they would otherwise use to reach the rest of the Internet.</p><p>In other words, the methodology used focuses on ROV adoption by <b>end user networks</b> (e.g. ISPs) and isn’t meant to reflect the eventual effect of indirect validation from (perhaps validating) upper-tier transit networks. While indirect validation may limit the "blast radius" of (malicious or accidental) route leaks, it still leaves non-validating ASes vulnerable to leaks coming from their peers.</p><p>As with indirect validation, an AS remains vulnerable until its ROV deployment reaches a sufficient level of completion. We chose to only consider AS deployments above 95% as truly comprehensive, and <a href="https://radar.cloudflare.com">Cloudflare Radar</a> will soon begin using this threshold to track ROV adoption worldwide, as part of our mission to help build a better Internet.</p><p>When considering only comprehensive ROV deployments, some countries such as Denmark, Greece, Switzerland, Sweden, or Australia, already show an effective coverage above 50% of their respective Internet populations, with others like the Netherlands or the United States slightly above 40%, mostly driven by few large ASes rather than many smaller ones.</p><p>Worldwide we observe a very low effective coverage of just <b>6.5%</b> over the measured ASes, corresponding to <b>261 million</b> end users currently safe from (malicious and accidental) route leaks, which means there’s still a long way to go before we can declare BGP to be safe.</p><p>......</p><p><sup>1</sup><a href="https://rpki.cloudflare.com/">https://rpki.cloudflare.com/</a></p><p><sup>2</sup>Gilad, Yossi, Avichai Cohen, Amir Herzberg, Michael 
Schapira, and Haya Shulman. "Are we there yet? On RPKI's deployment and security." Cryptology ePrint Archive (2016).</p><p><sup>3</sup>Geoff Huston. "Measuring ROAs and ROV". <a href="https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/">https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/</a></p><p><sup>4</sup>Measurements are issued stochastically when users encounter 1xxx error pages from default (non-customer) configurations.</p><p><sup>5</sup>Probe requests are weighted by AS size as calculated from Cloudflare's <a href="https://radar.cloudflare.com/">worldwide HTTP traffic</a>.</p><p><sup>6</sup>Quan, Lin, John Heidemann, and Yuri Pradkin. "When the Internet sleeps: Correlating diurnal networks with external factors." In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 87-100. 2014.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Routing Security]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">dMGl1iwWVn3YZWRTxIzgV</guid>
            <dc:creator>Carlos Rodrigues</dc:creator>
            <dc:creator>Vasilis Giotsas</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cloudflare's Third Party Code of Conduct]]></title>
            <link>https://blog.cloudflare.com/introducing-cloudflares-third-party-code-of-conduct/</link>
            <pubDate>Fri, 16 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ We are excited to share our Third Party Code of Conduct, specifically formulated with our suppliers, resellers and other partners in mind. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jLdGRaLNMOig2HJ7AavJr/1f6d5e31c07fb5e0db8713d8cca1c93c/image1-51.png" />
            
            </figure><p>Cloudflare is on a mission to help build a better Internet, and we are committed to doing this with ethics and integrity in everything that we do. This commitment extends beyond our own actions, to third parties acting on our behalf. Cloudflare has the same expectations of ethics and integrity of our suppliers, resellers, and other partners as we do of ourselves.</p>
    <div>
      <h3>Our new code of conduct for third parties</h3>
      <a href="#our-new-code-of-conduct-for-third-parties">
        
      </a>
    </div>
    <p>We first shared publicly our <a href="https://cloudflare.net/files/doc_downloads/governance/2020/12/Code-of-Business-Conduct-and-Ethics-(Amended-10.27.20).pdf">Code of Business Conduct and Ethics</a> during Cloudflare’s <a href="https://www.cloudflare.com/press-releases/2019/cloudflare-announces-pricing-of-initial-public-offering/">initial public offering</a> in September 2019. All Cloudflare employees take legal training as part of their onboarding process, as well as an annual refresher course, which includes the topics covered in our Code, and they sign an acknowledgement of our Code and related policies as well.</p><p>While our Code of Business Conduct and Ethics applies to all directors, officers and employees of Cloudflare, it has not extended to third parties. Today, we are excited to share our <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/284hiWkCYNc49GQpAeBvGN/e137cdac96d1c4cd403c6b525831d284/Third_Party_Code_of_Conduct.pdf">Third Party Code of Conduct</a>, specifically formulated with our suppliers, resellers, and other partners in mind. It covers such topics as:</p><ul><li><p>Human Rights</p></li><li><p>Fair Labor</p></li><li><p>Environmental Sustainability</p></li><li><p>Anti-Bribery and Anti-Corruption</p></li><li><p>Trade Compliance</p></li><li><p>Anti-Competition</p></li><li><p>Conflicts of Interest</p></li><li><p>Data Privacy and Security</p></li><li><p>Government Contracting</p></li></ul>
    <div>
      <h3>But why have another Code?</h3>
      <a href="#but-why-have-another-code">
        
      </a>
    </div>
    <p>We work with a wide range of third parties in all parts of the world, including countries with a high risk of corruption, potential for political unrest, and also countries that are just not governed by the same laws that we may see as standard in the United States. We wanted a Third Party Code of Conduct that serves as a statement of Cloudflare’s core values and commitments, and a call for third parties who share the same values.</p><p>The following are just a few examples of how we want to ensure our third parties act with ethics and integrity on our behalf, even when we aren’t watching:</p><p>We want to ensure that the servers and other equipment in our supply chain are sourced responsibly, from manufacturers who respect human rights — free of forced or child labor, with environmental <a href="/extending-the-life-of-hardware/">sustainability at the forefront</a>.</p><p>We want to provide our products and services to customers based on the quality of Cloudflare, not because a third party reseller may bribe a customer to enter into an agreement.</p><p>We want to ensure there are no conflicts of interest with our third parties that might give someone an unfair advantage.</p><p>As a government contractor, we want to ensure that we do not have telecommunications or video surveillance equipment, systems, or services from prohibited parties in our supply chain to protect national security interests.</p><p>Having a Third Party Code of Conduct is also an industry standard. As Cloudflare garners an increasing number of Fortune 500 and other enterprise customers, we find ourselves reviewing and committing to their Third Party Codes of Conduct as well.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Our Third Party Code of Conduct is not meant to replace our <a href="https://www.cloudflare.com/terms/">terms of service</a> or other contractual agreements. Rather, it is meant to supplement them, highlighting Cloudflare’s ethical commitments and encouraging our suppliers, resellers, and other partners to commit to the same. We will be cascading this new Code to all existing third parties, and include it at onboarding for all new third parties going forward. A violation of the Code, or any contractual agreements between Cloudflare and our third parties, may result in termination of the relationship.</p><p>This Third Party Code of Conduct is only one facet of Cloudflare’s third party due diligence program, and it complements the other work that Cloudflare does in this area. Cloudflare rigorously screens and vets our suppliers and partners at onboarding, and we continue to routinely monitor and audit them over time. We are always looking for ways to communicate with, educate, and learn from our third parties as well.</p>
    <div>
      <h3>Join our mission</h3>
      <a href="#join-our-mission">
        
      </a>
    </div>
    <p>Are you a supplier, reseller or other partner who shares these values of ethics and integrity? Come work with us and join Cloudflare on its mission to help build a better, more ethical Internet.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Better Internet]]></category>
            <guid isPermaLink="false">3w3R9nO283WpNBfKHejFG2</guid>
            <dc:creator>Andria Jones</dc:creator>
        </item>
    </channel>
</rss>