
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 07 May 2026 19:41:30 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How Cloudflare responded to the “Copy Fail” Linux vulnerability]]></title>
            <link>https://blog.cloudflare.com/copy-fail-linux-vulnerability-mitigation/</link>
            <pubDate>Thu, 07 May 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ When a critical Linux kernel privilege escalation was publicly disclosed, Cloudflare's security and engineering teams detected, investigated, and mitigated the threat across our global fleet, confirming zero customer impact and no malicious exploitation. ]]></description>
            <content:encoded><![CDATA[ <p>On April 29, 2026, a Linux kernel local privilege escalation vulnerability was publicly disclosed under the name "Copy Fail" (<a href="https://security-tracker.debian.org/tracker/CVE-2026-31431"><u>CVE-2026-31431</u></a>). Cloudflare’s Security and Engineering teams began assessing the vulnerability as soon as it was disclosed. We reviewed the exploit technique, evaluated exposure across our infrastructure, and validated that our existing behavioral detections could identify the exploit pattern within minutes. </p><p><b>There was no impact to the Cloudflare environment, no customer data was at risk, and no services were disrupted at any point.</b> Read on to learn how our preparedness paid off. </p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    
    <div>
      <h4>Our Linux kernel release process</h4>
      <a href="#our-linux-kernel-release-process">
        
      </a>
    </div>
    <p>Cloudflare operates a global Linux server infrastructure at an immense scale, with data centers located <a href="https://www.cloudflare.com/network/"><u>across 330 cities</u></a>. We maintain a custom Linux kernel build based on the community's Long-Term Support (LTS) versions to manage updates effectively at this volume. At any given time, we may run multiple LTS versions from different series, such as 6.12 or 6.18, which benefit from extended update periods.</p><p>The community regularly merges and releases security and stability updates, which trigger an automated job to generate a new internal kernel build approximately every week. These builds undergo testing in our staging data centers to ensure stability before a global rollout. Following a successful release, the Edge Reboot Release (ERR) pipeline manages a systematic update and reboot of the edge infrastructure on a four-week cycle. Our control plane infrastructure typically adopts the most recent kernel, with reboots scheduled according to specific workload requirements.</p><p>By the time a CVE becomes public knowledge, the necessary fix has typically been integrated into stable Linux LTS releases for several weeks. Our established procedures ensure that we have already deployed these patches.</p><p>At the time of the "Copy Fail" disclosure, the majority of our infrastructure was running the 6.12 LTS version, while a subset of machines had begun transitioning to the newer 6.18 LTS release.</p>
    <div>
      <h3>About the Copy Fail vulnerability</h3>
      <a href="#about-the-copy-fail-vulnerability">
        
      </a>
    </div>
    <p>It helps to understand the vulnerability before getting to the response story. A comprehensive write-up can be found in the original <a href="https://xint.io/blog/copy-fail-linux-distributions"><u>Xint Code disclosure</u></a> post.</p>
    <div>
      <h4>AF_ALG and the kernel crypto API</h4>
      <a href="#af_alg-and-the-kernel-crypto-api">
        
      </a>
    </div>
    <p>The Linux kernel's internal crypto API manages functions like kTLS and IPsec. Userspace programs access this via the <code>AF_ALG</code> socket family, allowing unprivileged processes to request encryption or decryption. The <code>algif_aead</code> module facilitates this for Authenticated Encryption with Associated Data (AEAD) ciphers.</p><p>An unprivileged program follows these steps:</p><ol><li><p>Opens an <code>AF_ALG</code> socket and binds to an AEAD template.</p></li><li><p>Sets a key and accepts a request socket.</p></li><li><p>Submits input via <code>sendmsg()</code> or <code>splice()</code>.</p></li><li><p>Executes the operation using <code>recvmsg()</code>.</p></li></ol><p>The <code>splice()</code> system call is critical here, as it moves data by passing page cache references.</p>
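    <p>The same bind/accept/submit/receive choreography can be exercised from Python, which exposes <code>AF_ALG</code> directly on Linux. The sketch below uses a hash transform rather than an AEAD cipher to keep key setup out of the picture, and it falls back to <code>None</code> where <code>AF_ALG</code> is unavailable (non-Linux systems, or kernels built without the userspace crypto API):</p>

```python
import hashlib
import socket


def afalg_sha256(data: bytes):
    """Hash `data` via the kernel crypto API using the AF_ALG flow
    described above: bind to a transform, accept a request socket,
    submit input, then read back the result."""
    if not hasattr(socket, "AF_ALG"):
        return None  # AF_ALG is Linux-only
    try:
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET) as alg:
            alg.bind(("hash", "sha256"))   # steps 1-2: bind to a transform
            op, _ = alg.accept()           # obtain a request socket
            with op:
                op.sendall(data)           # step 3: submit input
                return op.recv(32)         # step 4: run the op, read output
    except OSError:
        return None  # kernel lacks the userspace crypto API


digest = afalg_sha256(b"hello")
if digest is not None:
    assert digest == hashlib.sha256(b"hello").digest()
```

    <p>The AEAD path additionally sets a key via <code>setsockopt</code> with <code>ALG_SET_KEY</code> and passes the operation type and IV through <code>sendmsg_afalg()</code>, but the socket choreography is the same one the exploit abuses.</p>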
    <div>
      <h4>Memory mechanics: page cache and in-place crypto</h4>
      <a href="#memory-mechanics-page-cache-and-in-place-crypto">
        
      </a>
    </div>
    <p>The <b>page cache</b> is a shared system cache for file contents. Modifying a page belonging to a setuid binary effectively edits that program for all users until the page is evicted.</p><p>The crypto API utilizes <b>scatterlists</b>, which are structures linking various memory pages. In 2017, <code>algif_aead</code> was optimized for <i>in-place</i> operations, chaining destination and reference pages together. This design lacked enforcement to prevent algorithms from writing past intended boundaries.</p>
    <div>
      <h4>The vulnerability: out-of-bounds write</h4>
      <a href="#the-vulnerability-out-of-bounds-write">
        
      </a>
    </div>
    <p>When the user executes <code>recvmsg()</code>, the <code>authencesn</code> wrapper in the kernel performs a 4-byte write past the legitimate output region:</p>
            <pre><code>scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 1);
</code></pre>
    <p>By using <code>splice()</code>, an attacker can chain a target file's page cache pages to the scatterlist. The out-of-bounds write then taints the cached file, allowing an attacker to control which file is modified, the offset, and the specific 4 bytes written. This means the attacker can manipulate the following with this exploit:</p><ul><li><p>File: Any readable file.</p></li><li><p>Offset: Tunable via <code>assoclen</code> and splice parameters.</p></li><li><p>Value: Controlled via AAD bytes 4–7 in <code>sendmsg()</code>.</p></li></ul>
    <div>
      <h4>The exploit, step by step</h4>
      <a href="#the-exploit-step-by-step">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uqfOddH7biQaTjtgQOist/5c16c08bf3e5ce2f9030d6f98d2403cb/BLOG-3308_2.png" />
          </figure><p>The default exploit targets <code>/usr/bin/su</code>, a setuid-root binary present on essentially every distribution.</p><ol><li><p><b>Cache Reference:</b> Open <code>/usr/bin/su</code> as <code>O_RDONLY</code> and <code>read()</code> to populate the page cache. Use <code>splice()</code> on the file descriptor to pass these page cache references into the crypto scatterlist.</p></li><li><p><b>Setup:</b> Create an <code>AF_ALG</code> socket, <code>bind()</code> to <code>authencesn(hmac(sha256),cbc(aes))</code>, set a key, and accept a request socket without needing privileges.</p></li><li><p><b>Write Construction:</b> For each 4-byte shellcode chunk:</p><ul><li><p><code>sendmsg()</code> with AAD bytes 4–7 containing the shellcode.</p></li><li><p><code>splice()</code> the binary into a pipe, then into the <code>AF_ALG</code> socket so <code>assoclen + cryptlen</code> targets the desired <code>.text</code> offset.</p></li></ul></li><li><p><b>Trigger:</b> <code>recvmsg()</code> initiates decryption. <code>authencesn</code> writes its scratch data to the target offset of <code>/usr/bin/su</code> in the page cache. Although the function returns <code>-EBADMSG</code>, the 4-byte write is now in the global page cache.</p></li><li><p><b>Execution:</b> Running <code>execve("/usr/bin/su")</code> loads the tainted page cache. Since the binary is setuid-root, the injected shellcode executes with <b>root</b> privileges.</p></li></ol><p>The upstream fix (<a href="https://github.com/torvalds/linux/commit/a664bf3d603d"><u>commit a664bf3d603d</u></a>) reverts the 2017 in-place optimization, removing the exploit.</p>
    <div>
      <h3>How we responded </h3>
      <a href="#how-we-responded">
        
      </a>
    </div>
    <p>When the vulnerability was disclosed, many workstreams started in parallel:</p><ul><li><p><b>Mapping the blast radius:</b> Our security team worked with kernel engineers to determine which kernel versions were vulnerable and assess the potential exposure.</p></li><li><p><b>Validating coverage:</b> Security reviewed the exploit technique and confirmed that our existing behavioral detections could identify the exploit pattern during authorized internal validation.</p></li><li><p><b>Proactive threat hunting:</b> Security began searching for signs that the vulnerability had been exploited before it was publicly known, going back 48 hours in our fleet-wide logs.</p></li><li><p><b>Engineering a mitigation:</b> Kernel engineers began building a runtime mitigation that would protect the fleet without breaking production services.</p></li><li><p><b>Continuing software updates:</b> Our engineering teams worked on delivering an updated Linux kernel, which required carefully rebooting and rolling it out across our servers.</p></li></ul><p>There was no customer impact at any point during this response.</p>
    <div>
      <h4>Validating detection coverage</h4>
      <a href="#validating-detection-coverage">
        
      </a>
    </div>
    <p>One of the first things our security team did was confirm that our existing endpoint detection would catch this exploit. Our servers run behavioral detection that continuously monitors process execution patterns. It doesn't rely on knowing about specific vulnerabilities; it watches for anomalous behavior across the fleet.</p><p>When our engineers validated the vulnerability internally as part of the response, the detection platform flagged it within minutes. The system linked the entire execution chain—starting at the script interpreter, moving through the kernel’s cryptographic subsystem, and ending at the privilege escalation binary—flagging it as malicious based on fleet-wide behavioral patterns.</p><p>This happened without a signature update, without a rule change, and without human intervention. Our behavioral detection coverage existed before we wrote any custom logic for this particular Copy Fail exploit, which meant we had coverage before writing a vulnerability-specific rule.</p>
    <div>
      <h4>Hunting for exploitation</h4>
      <a href="#hunting-for-exploitation">
        
      </a>
    </div>
    <p>While our engineering team moved to a more targeted mitigation, our security investigation had been running since disclosure. This is our standard procedure for any critical vulnerability.</p><p>Our security team operates on a simple principle for critical vulnerabilities: assume compromise until you can prove otherwise. The investigation started from the assumption that exploitation could have occurred before the vulnerability was public, and we worked systematically to either confirm or rule it out.</p><p>The exploit leaves a distinctive trace in kernel logs when it runs. We searched for that trace across our centralized logging infrastructure, covering 48 hours before the vulnerability was publicly disclosed. If someone had exploited this before the world knew about it, we would have seen it.</p><p>We pulled access logs for affected systems and reconstructed who connected, when, and what commands they ran. This gave us a complete forensic picture of interactive activity on potentially affected infrastructure.</p><p>We checked that system binaries had not been tampered with, validated cryptographic hashes against known-good package manifests, looked for persistence mechanisms, and audited network connections for anything unusual. Everything was clean. </p>
    <div>
      <h2>Incident timeline and impact</h2>
      <a href="#incident-timeline-and-impact">
        
      </a>
    </div>
    
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Time (UTC)</span></th>
    <th><span>Event</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>2026-04-29 16:00</span></td>
    <td><span>Copy Fail publicly disclosed.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-29 ~21:00</span></td>
    <td><span>Security and Engineering teams began assessing fleet exposure and mitigation options before full declaration of the Incident Response process.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-29 22:52</span></td>
    <td><span>Security confirmed existing behavioral detection covered the Copy Fail exploit pattern. During authorized internal validation, detection flagged the activity within minutes.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-29 23:01</span></td>
    <td><span>Existing behavioral detection generated a high-severity alert for exploit-like activity, confirming detection coverage for the technique.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-29 (evening)</span></td>
    <td><span>First mitigation attempt pushed to our staging datacenter. The deployment process surfaced a dependency conflict; the mitigation was rolled back. No production systems were affected.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-29 (overnight)</span></td>
    <td><span>Engineering drafted bpf-lsm mitigation program.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-30 03:14</span></td>
    <td><span>Security incident declared to drive cross-functional collaboration and urgency. Security performed fleetwide threat hunting of historical data to confirm that no malicious activity was present on Cloudflare systems.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-30 (morning)</span></td>
    <td><span>Engineering tested the bpf-lsm mitigation program and made it production-ready.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-30 14:25</span></td>
    <td><span>Engineering incident declared to coordinate mitigation program and Linux patch rollout. </span></td>
  </tr>
  <tr>
    <td><span>2026-04-30 ~17:00</span></td>
    <td><span>Decision made: ship a patched build of the previous LTS line through reboot automation; do not accelerate the new LTS; lean on bpf-lsm in the meantime.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-30 (afternoon)</span></td>
    <td><span>Visibility pipeline (eBPF tracing of AF_ALG socket usage) deployed fleet-wide. Gives a complete picture of all legitimate AF_ALG users.</span></td>
  </tr>
  <tr>
    <td><span>2026-04-30 (evening)</span></td>
    <td><span>bpf-lsm mitigation program rolled out behind a separate gate to fully mitigate the fleet. End-to-end verification on a previously-vulnerable test node confirms the exploit no longer works.</span></td>
  </tr>
  <tr>
    <td><span>2026-05-04 (morning)</span></td>
    <td><span>Reboot automation resumed at normal pace with the patched kernel.</span></td>
  </tr>
  <tr>
    <td><span>2026-05-04 onward</span></td>
    <td><span>Servers that had already passed through reboot automation earlier in the week manually rebooted to pick up the patched kernel. Unpatched servers update per our normal reboot automation.</span></td>
  </tr>
</tbody></table></div>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LI0k0FJgbLKzOkSEtYPwW/997234a334b3694c63417fad5810b679/BLOG-3308_3.png" />
          </figure><p>This graph shows the progress of the mitigation program as it rolled out across our infrastructure.</p>
    <div>
      <h3>How did we mitigate it?</h3>
      <a href="#how-did-we-mitigate-it">
        
      </a>
    </div>
    <p>Because of the long timeframe involved in deploying a patched Linux kernel, we also pursued mitigating this exploit without a reboot.</p>
    <div>
      <h4>Removing the module</h4>
      <a href="#removing-the-module">
        
      </a>
    </div>
    <p>The bug was in the <code>algif_aead</code> kernel module. Therefore, the simple fix was to remove this module and disallow it from being reloaded.</p><p>This is exactly the mitigation recommended in the <a href="https://copy.fail/"><u>Copy Fail</u></a> write-up from the security researchers who identified the vulnerability:</p>
            <pre><code>echo "install algif_aead /bin/false" &gt; /etc/modprobe.d/disable-algif.conf
rmmod algif_aead 2&gt;/dev/null || true</code></pre>
            <p>Unfortunately, removing the module would have impacted software that leverages the kernel crypto API, so we had to find a more surgical mitigation.</p>
    <div>
      <h4>bpf-lsm</h4>
      <a href="#bpf-lsm">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VYFKst8aUkkHCaEdb8yQ4/91ad9a855d6185bccd179fcf092ed636/BLOG-3308_4.png" />
          </figure><p>We’d already developed and deployed a tool for exactly this scenario: <a href="https://blog.cloudflare.com/live-patch-security-vulnerabilities-with-ebpf-lsm/"><u>bpf-lsm</u></a>. Instead of removing the module, this tool leaves it loaded for legitimate users and uses a BPF Linux Security Module program to deny the <code>socket_bind</code> LSM hook for everyone else. This completely blocks the front door for any exploit.</p><p>A draft of the eBPF program was put together overnight. Team members picked it up the following morning, ran validations, and made it production-ready. The program is fairly straightforward. On every <code>socket_bind</code> call:</p><ol><li><p>If the socket family is not <code>AF_ALG</code>, allow the call through unchanged.</p></li><li><p>If the family is <code>AF_ALG</code>, check the calling binary's path against an allow-list of the binaries we know to be legitimate users.</p></li><li><p>If the binary is on the allow-list, allow the bind. Otherwise, deny it.</p></li></ol><p>To verify the mitigation on a given machine without exploiting it, the <a href="https://copy.fail/"><u>Copy Fail</u></a> write-up gives a one-liner:</p>
            <pre><code>python3 -c 'import socket; s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0); s.bind(("aead","authencesn(hmac(sha256),cbc(aes))"));'</code></pre>
            <p>On a mitigated machine you get <code>PermissionError: [Errno 1] Operation not permitted</code> (or <code>FileNotFoundError</code>, depending on which mitigation is active) instead of a successful bind.</p>
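    <p>The verdict logic of the three steps above is compact enough to sketch in plain Python. This is not the actual BPF C program, and the allow-listed path below is a made-up placeholder, but the control flow mirrors what the <code>socket_bind</code> hook does:</p>

```python
import socket

# AF_ALG is Linux-only in the socket module; 38 is its value on Linux.
AF_ALG = getattr(socket, "AF_ALG", 38)

# Hypothetical allow-list; the real one names the single internal
# service confirmed to be a legitimate AF_ALG user.
AF_ALG_ALLOWLIST = {"/usr/bin/example-crypto-service"}

EPERM = 1  # LSM hooks deny by returning a negative errno


def socket_bind_hook(family: int, binary_path: str) -> int:
    """Mirror of the bpf-lsm socket_bind decision:
    return 0 to allow the bind, -EPERM to deny it."""
    if family != AF_ALG:
        return 0                # step 1: not AF_ALG, allow unchanged
    if binary_path in AF_ALG_ALLOWLIST:
        return 0                # step 3: allow-listed binary, allow
    return -EPERM               # step 3: everyone else is denied
```

    <p>In the real program the calling binary is resolved from the current task in kernel context, and the hook's negative return value is what surfaces to userspace as the <code>Operation not permitted</code> error shown above.</p>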
    <div>
      <h4>Rolling it out</h4>
      <a href="#rolling-it-out">
        
      </a>
    </div>
    <p><b>Before enabling enforcement, we verified that our known internal service was the sole legitimate </b><code><b>AF_ALG</b></code><b> user to avoid accidental outages.</b> We used <a href="https://github.com/cloudflare/ebpf_exporter"><code>prometheus-ebpf-exporter</code></a> to hook the <code>socket()</code> syscall and track <code>AF_ALG</code> usage per binary across the fleet. This required no kernel changes and provided aggregate data from hundreds of thousands of servers within hours. Results confirmed the identified service was indeed the only legitimate user.</p><p>The bpf-lsm rollout was therefore deliberately staged in two steps:</p><ol><li><p><b>Get visibility first.</b> Push the ebpf-exporter config gated by Salt. Confirm at the metric layer that the known service is effectively the only thing creating <code>AF_ALG</code> sockets.</p></li><li><p><b>Then enforce.</b> Push the bpf-lsm program behind a separate enforcement gate.</p></li></ol><p>In parallel, the upstream backport for our majority LTS line became available, and our internal automation built a patched kernel against it.</p><p>We started testing the patched kernel in our staging data centers as soon as possible, then resumed the longer reboot process to fully patch our fleet.</p>
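    <p>The gate between those two steps amounts to a subset check: enforcement is safe only if every binary observed creating <code>AF_ALG</code> sockets is already on the allow-list. A minimal sketch of that check, with invented per-binary counts standing in for the real fleet-aggregated exporter metrics:</p>

```python
def safe_to_enforce(observed_counts: dict, allowlist: set) -> bool:
    """Return True only when every binary seen binding AF_ALG sockets
    is on the allow-list, i.e. enabling enforcement breaks nothing."""
    observed = {binary for binary, count in observed_counts.items() if count > 0}
    return observed <= allowlist


# Invented metrics; the path is a placeholder for the one known service.
allowlist = {"/usr/bin/example-crypto-service"}
assert safe_to_enforce({"/usr/bin/example-crypto-service": 120431}, allowlist)
assert not safe_to_enforce({"/usr/bin/unknown-binary": 3}, allowlist)
```

    <p>Only once this check held across the fleet was the separate enforcement gate opened.</p>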
    <div>
      <h2>Remediation and follow-up steps</h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>While we were prepared for this scenario, at Cloudflare we’re always learning and improving. Key areas we identified for improvement:</p><ul><li><p><b>Better visibility into kernel-API dependencies.</b> We will review kernel-subsystem usage across production services so that we can continue to quickly mitigate exploits without service disruption.</p></li><li><p><b>Better runtime mitigation.</b> bpf-lsm is a valuable tool for mitigations, but we want to make it even better, including faster deployments, better playbooks, and improved logging and visibility.</p></li><li><p><b>Reduce the attack surface of the Linux kernel.</b> Review and audit our kernel configuration, and proactively identify unused modules or features so that we can remove them from our build entirely.</p></li></ul>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The "Copy Fail" vulnerability presented a unique challenge for us. Despite our practice of deploying Linux patch updates every two weeks, we remained vulnerable because a month-old mainline fix had yet to be backported to our primary kernel line. Even so, we were still able to roll out patched kernels within hours of the backport's release. In the interim, bpf-lsm provided a surgical, no-reboot mitigation that secured our fleet. While our initial attempt to disable the problematic module failed, it did so safely within our internal staging environment rather than production, allowing us to identify this dependency.</p><p>By the end of the rollout, every machine in our fleet was protected by either a patched kernel or a bpf-lsm program denying the vulnerable code path to non-allow-listed binaries. There was no customer impact at any point during this incident, and we have committed to the follow-up work above to make our response faster and our visibility better the next time something like this lands. Responsible disclosure works, in-kernel visibility tooling pays off in moments exactly like this one, and bpf-lsm continues to be one of the most useful primitives we have for runtime kernel mitigation.</p><p>At Cloudflare, critical vulnerability response is a coordinated effort across Security, Engineering, Product, and many other teams. Special thanks to Ali Adnan, Ivan Babrou, Frederik Baetens, Curtis Bray, Piers Cornwell, Everton Didone Foscarini, Rob Dinh, Elle Dougherty, Kevin Flansburg, Matt Fleming, Kimberley Hall, Brandon Harris, Jerry Ho, Oxana Kharitonova, Marek Kroemeke, Fred Lawler, James Munson, Nafeez Nazer, Walead Parviz, Miguel Pato, Evan Pratten, Josh Seba, June Slater, Ryan Timken, Michael Wolf, Jianxin Zeng and everyone else who contributed to the investigation, mitigation, and remediation of Copy Fail.
We'd also like to thank the Linux upstream maintainers and Copy Fail researchers whose work helped make a rapid response possible.</p> ]]></content:encoded>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Incident Response]]></category>
            <category><![CDATA[Kernel]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Mitigation]]></category>
            <category><![CDATA[eBPF]]></category>
            <guid isPermaLink="false">7JN0oOT8V9YgCD6JFW92my</guid>
            <dc:creator>Chris J Arges</dc:creator>
            <dc:creator>Sourov Zaman</dc:creator>
            <dc:creator>Rian Islam</dc:creator>
        </item>
    </channel>
</rss>