
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 23:03:48 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Investigating multi-vector attacks in Log Explorer]]></title>
            <link>https://blog.cloudflare.com/investigating-multi-vector-attacks-in-log-explorer/</link>
            <pubDate>Tue, 10 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Log Explorer customers can now identify and investigate multi-vector attacks. Log Explorer supports 14 additional Cloudflare datasets, enabling users to have a 360-degree view of their network. ]]></description>
            <content:encoded><![CDATA[ <p>In the world of cybersecurity, a single data point is rarely the whole story. Modern attackers don’t just knock on the front door; they probe your APIs, flood your network with "noise" to distract your team, and attempt to slide through applications and servers using stolen credentials.</p><p>To stop these multi-vector attacks, you need the full picture. By using Cloudflare Log Explorer to conduct security forensics, you get 360-degree visibility through the integration of 14 new datasets, covering the full surface of Cloudflare’s Application Services and Cloudflare One product portfolios. By correlating telemetry from application-layer HTTP requests, network-layer DDoS and Firewall logs, and Zero Trust Access events, security analysts can significantly reduce Mean Time to Detect (MTTD) and effectively unmask sophisticated, multi-layered attacks.</p><p>Read on to learn more about how Log Explorer gives security teams the ultimate landscape for rapid, deep-dive forensics.</p>
    <div>
      <h2>The flight recorder for your entire stack</h2>
      <a href="#the-flight-recorder-for-your-entire-stack">
        
      </a>
    </div>
    <p>The contemporary digital landscape requires deep, correlated telemetry to defend against adversaries using multiple attack vectors. Raw logs serve as the "flight recorder" for an application, capturing every single interaction, attack attempt, and performance bottleneck. And because Cloudflare sits at the edge, between your users and your servers, all of these events are logged before the requests even reach your infrastructure. </p><p>Cloudflare Log Explorer centralizes these logs into a unified interface for rapid investigation.</p>
    <div>
      <h3>Log Types Supported</h3>
      <a href="#log-types-supported">
        
      </a>
    </div>
    
    <div>
      <h4>Zone-Scoped Logs</h4>
      <a href="#zone-scoped-logs">
        
      </a>
    </div>
<p><i>Focus: Website traffic, security events, and edge performance.</i></p><table><tr><td><p><b>HTTP Requests</b></p></td><td><p>As the most comprehensive dataset, it serves as the "primary record" of all application-layer traffic, enabling the reconstruction of session activity, exploit attempts, and bot patterns.</p></td></tr><tr><td><p><b>Firewall Events</b></p></td><td><p>Provides critical evidence of blocked or challenged threats, allowing analysts to identify the specific WAF rules, IP reputations, or custom filters that intercepted an attack.</p></td></tr><tr><td><p><b>DNS Logs</b></p></td><td><p>Identify cache poisoning attempts, domain hijacking, and infrastructure-level reconnaissance by tracking every query resolved at the authoritative edge.</p></td></tr><tr><td><p><b>NEL (Network Error Logging) Reports</b></p></td><td><p>Distinguish between a coordinated Layer 7 DDoS attack and legitimate network connectivity issues by tracking client-side browser errors.</p></td></tr><tr><td><p><b>Spectrum Events</b></p></td><td><p>For non-web applications, these logs provide visibility into L4 traffic (TCP/UDP), helping to identify anomalies or brute-force attacks against protocols like SSH, RDP, or custom gaming traffic.</p></td></tr><tr><td><p><b>Page Shield</b></p></td><td><p>Track and audit unauthorized changes to your site's client-side environment, such as loaded JavaScript and outbound connections.</p></td></tr><tr><td><p><b>Zaraz Events</b></p></td><td><p>Examine how third-party tools and trackers are interacting with user data, which is vital for auditing privacy compliance and detecting unauthorized script behaviors.</p></td></tr></table>
    <div>
      <h4>Account-Scoped Logs</h4>
      <a href="#account-scoped-logs">
        
      </a>
    </div>
<p><i>Focus: Internal security, Zero Trust, administrative changes, and network activity.</i></p><table><tr><td><p><b>Access Requests</b></p></td><td><p>Tracks identity-based authentication events to determine which users accessed specific internal applications and whether those attempts were authorized.</p></td></tr><tr><td><p><b>Audit Logs</b></p></td><td><p>Provides a trail of configuration changes within the Cloudflare dashboard to identify unauthorized administrative actions or modifications.</p></td></tr><tr><td><p><b>CASB Findings</b></p></td><td><p>Identifies security misconfigurations and data risks within SaaS applications (like Google Drive or Microsoft 365) to prevent unauthorized data exposure.</p></td></tr><tr><td><p><b>Magic Transit / IPSec Logs</b></p></td><td><p>Helps network engineers perform network-level (L3) monitoring, such as reviewing tunnel health and viewing BGP routing changes.</p></td></tr><tr><td><p><b>Browser Isolation Logs</b></p></td><td><p>Tracks user actions <i>inside</i> an isolated browser session (e.g., copy-paste, print, or file uploads) to prevent data leaks on untrusted sites.</p></td></tr><tr><td><p><b>Device Posture Results</b></p></td><td><p>Details the security health and compliance status of devices connecting to your network, helping to identify compromised or non-compliant endpoints.</p></td></tr><tr><td><p><b>DEX Application Tests</b></p></td><td><p>Monitors application performance from the user's perspective, which can help distinguish between a security-related outage and a standard performance degradation.</p></td></tr><tr><td><p><b>DEX Device State Events</b></p></td><td><p>Provides telemetry on the physical state of user devices, useful for correlating hardware or OS-level anomalies with potential security incidents.</p></td></tr><tr><td><p><b>DNS Firewall Logs</b></p></td><td><p>Tracks DNS queries filtered through the DNS Firewall to identify communication with known malicious domains or command-and-control
(C2) servers.</p></td></tr><tr><td><p><b>Email Security Alerts</b></p></td><td><p>Logs malicious email activity and phishing attempts detected at the gateway to trace the origin of email-based entry vectors.</p></td></tr><tr><td><p><b>Gateway DNS</b></p></td><td><p>Monitors every DNS query made by users on your network to identify shadow IT, malware callbacks, or domain-generation algorithms (DGAs).</p></td></tr><tr><td><p><b>Gateway HTTP</b></p></td><td><p>Provides full visibility into encrypted and unencrypted web traffic to detect hidden payloads, malicious file downloads, or unauthorized SaaS usage.</p></td></tr><tr><td><p><b>Gateway Network</b></p></td><td><p>Tracks L3/L4 network traffic (non-HTTP) to identify unauthorized port usage, protocol anomalies, or lateral movement within the network.</p></td></tr><tr><td><p><b>IPSec Logs</b></p></td><td><p>Monitors the status and traffic of encrypted site-to-site tunnels to ensure the integrity and availability of secure network connections.</p></td></tr><tr><td><p><b>Magic IDS Detections</b></p></td><td><p>Surfaces matches against intrusion detection signatures to alert investigators to known exploit patterns or malware behavior traversing the network.</p></td></tr><tr><td><p><b>Network Analytics Logs</b></p></td><td><p>Provides high-level visibility into packet-level data to identify volumetric DDoS attacks or unusual traffic spikes targeting specific infrastructure.</p></td></tr><tr><td><p><b>Sinkhole HTTP Logs</b></p></td><td><p>Captures traffic directed to "sinkholed" IP addresses to confirm which internal devices are attempting to communicate with known botnet infrastructure.</p></td></tr><tr><td><p><b>WARP Config Changes</b></p></td><td><p>Tracks modifications to the WARP client settings on end-user devices to ensure that security agents haven't been tampered with or disabled.</p></td></tr><tr><td><p><b>WARP Toggle Changes</b></p></td><td><p>Specifically logs when users enable or disable their secure 
connectivity, helping to identify periods where a device may have been unprotected.</p></td></tr><tr><td><p><b>Zero Trust Network Session Logs</b></p></td><td><p>Logs the duration and status of authenticated user sessions to map out the complete lifecycle of a user's access within the protected perimeter.</p></td></tr></table>
    <div>
      <h2>Log Explorer can identify malicious activity at every stage</h2>
      <a href="#log-explorer-can-identify-malicious-activity-at-every-stage">
        
      </a>
    </div>
    <p>Get granular application layer visibility with <b>HTTP Requests</b>, <b>Firewall Events</b>, and <b>DNS logs</b> to see exactly how traffic is hitting your public-facing properties. Track internal movement with <b>Access Requests</b>, <b>Gateway logs</b>, and <b>Audit logs</b>: if a credential is compromised, you’ll see where the attacker went. Use <b>Magic IDS</b> and <b>Network Analytics logs</b> to spot volumetric attacks and "East-West" lateral movement within your private network.</p>
    <div>
      <h3>Identify the reconnaissance</h3>
      <a href="#identify-the-reconnaissance">
        
      </a>
    </div>
    <p>Attackers use scanners and other tools to look for entry points, hidden directories, or software vulnerabilities. To identify this, using Log Explorer, you can query <code>http_requests</code> for any <code>EdgeResponseStatus</code> codes of 401, 403, or 404 coming from a single IP, or requests to sensitive paths (e.g. <code>/.env</code>, <code>/.git</code>, <code>/wp-admin</code>). </p><p>Additionally, <code>magic_ids_detections</code> logs can also be used to identify scanning at the network layer. These logs provide packet-level visibility into threats targeting your network. Unlike standard HTTP logs, these logs focus on <b>signature-based detections</b> at the network and transport layers (IP, TCP, UDP). Query to discover cases where a single <code>SourceIP</code> is triggering multiple unique detections across a wide range of <code>DestinationPort</code> values in a short timeframe. Magic IDS signatures can specifically flag activities like Nmap scans or SYN stealth scans.</p>
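    <p>To make the application-layer hunt concrete, here is a hedged sketch of such a query; the <code>date</code> value and the path list are illustrative examples, not a complete set of sensitive paths:</p>
            <pre><code>SELECT ClientIP, ClientRequestPath, EdgeResponseStatus
FROM http_requests
WHERE date = '2026-02-22'
  AND (EdgeResponseStatus IN (401, 403, 404)
       OR ClientRequestPath IN ('/.env', '/.git', '/wp-admin'))
LIMIT 100</code></pre>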
    <div>
      <h3>Check for diversions</h3>
      <a href="#check-for-diversions">
        
      </a>
    </div>
    <p>While the attacker is conducting reconnaissance, they may attempt to disguise this with a simultaneous network flood. Pivot to <code>network_analytics_logs</code> to see if a volumetric attack is being used as a smokescreen.</p>
    <div>
      <h3>Identify the approach </h3>
      <a href="#identify-the-approach">
        
      </a>
    </div>
    <p>Once attackers identify a potential vulnerability, they begin to craft their weapon. The attacker sends malicious payloads (e.g. SQL injection or large/corrupt file uploads) to confirm the vulnerability. Review <code>http_requests</code> and/or <code>fw_events</code> to identify any Cloudflare detection tools that have triggered. Cloudflare logs security signals in these datasets to easily identify requests with malicious payloads using fields such as <code>WAFAttackScore</code>, <code>WAFSQLiAttackScore</code>, <code>FraudAttack</code>, <code>ContentScanJobResults</code>, and several more. Review <a href="https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/zone/http_requests/"><u>our documentation</u></a> to get a full understanding of these fields. The <code>fw_events</code> logs can be used to determine whether these requests made it past Cloudflare’s defenses by examining the <code>action</code>, <code>source</code>, and <code>ruleID</code> fields. Cloudflare’s managed rules block many of these payloads by default. Review the Application Security Overview to confirm that your application is protected.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zpFguYrnbOPwyASGQCqZK/63f398acce2226e453a5eea1cc749241/image3.png" />
          </figure><p><sup><i>Showing the Managed rules Insight that displays on Security Overview if the current zone does not have Managed Rules enabled</i></sup></p>
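    <p>For example, a sketch that filters on the attack score fields (lower scores indicate a higher likelihood that a request is malicious; the cutoff of 20 is illustrative, not a recommendation):</p>
            <pre><code>SELECT RayID, ClientIP, WAFAttackScore, WAFSQLiAttackScore
FROM http_requests
WHERE date = '2026-02-22'
  AND WAFAttackScore &lt; 20
LIMIT 100</code></pre>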
    <div>
      <h3>Audit the identity</h3>
      <a href="#audit-the-identity">
        
      </a>
    </div>
    <p>Did that suspicious IP manage to log in? Use the <code>ClientIP</code> to search <code>access_requests</code>. If you see a "<code>Decision: Allow</code>" for a sensitive internal app, you know you have a compromised account.</p>
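    <p>A sketch of that pivot, following the same placeholder convention as the other examples in this post:</p>
            <pre><code>SELECT Email, Country, Allowed
FROM access_requests
WHERE date = '2026-02-22'
  AND IPAddress = 'INSERT_SUSPICIOUS_IP_HERE'
  AND Allowed = true</code></pre>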
    <div>
      <h3>Stop the leak (data exfiltration)</h3>
      <a href="#stop-the-leak-data-exfiltration">
        
      </a>
    </div>
    <p>Attackers sometimes use DNS tunneling to bypass firewalls by encoding sensitive data (like passwords or SSH keys) into DNS queries. Instead of a normal request like <code>google.com</code>, the logs will show long, encoded strings. Look for an unusually high volume of queries for unique, long, and high-entropy subdomains by examining these fields:</p><ul><li><p><code>QueryName</code>: look for strings like <code>h3ldo293js92.example.com</code>.</p></li><li><p><code>QueryType</code>: tunnels often use <code>TXT</code>, <code>CNAME</code>, or <code>NULL</code> records to carry the payload.</p></li><li><p><code>ClientIP</code>: identify whether a single internal host is generating thousands of these unique requests.</p></li></ul><p>Additionally, attackers may attempt to leak sensitive data by hiding it within non-standard protocols or by using common protocols (like DNS or ICMP) in unusual ways to bypass standard firewalls. Discover this by querying the <code>magic_ids_detections</code> logs to look for signatures that flag protocol anomalies, such as "ICMP tunneling" or "DNS tunneling" detections in the <code>SignatureMessage</code>.</p><p>Whether you are investigating a zero-day vulnerability or tracking a sophisticated botnet, the data you need is now at your fingertips.</p>
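    <p>To make the DNS tunneling hunt concrete, here is a hedged sketch; the dataset name <code>dns_logs</code>, the availability of <code>LENGTH</code> and aggregate functions, and the 50-character threshold are all assumptions to verify against the dataset documentation:</p>
            <pre><code>SELECT ClientIP, COUNT(*) AS long_queries
FROM dns_logs
WHERE date = '2026-02-22'
  AND LENGTH(QueryName) &gt; 50
GROUP BY ClientIP
ORDER BY long_queries DESC
LIMIT 100</code></pre>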
    <div>
      <h2>Correlate across datasets</h2>
      <a href="#correlate-across-datasets">
        
      </a>
    </div>
    <p>Investigate malicious activity across multiple datasets by pivoting between multiple concurrent searches. With the new Tabs feature in Log Explorer, you can work with multiple queries simultaneously: switch between tabs to query different datasets, or pivot and refine a query by filtering on your results.</p><div>
  
</div>
<p>When you correlate data across multiple Cloudflare log sources, you can detect sophisticated multi-stage attacks that appear benign when viewed in isolation. This cross-dataset analysis allows you to see the full attack chain from reconnaissance to exfiltration.</p>
    <div>
      <h3>Session hijacking (token theft)</h3>
      <a href="#session-hijacking-token-theft">
        
      </a>
    </div>
    <p><b>Scenario:</b> A user authenticates via Cloudflare Access, but their subsequent HTTP_request traffic looks like a bot.</p><p><b>Step 1:</b> Identify high-risk sessions in <code>http_requests</code>.</p>
            <pre><code>SELECT RayID, ClientIP, ClientRequestUserAgent, BotScore
FROM http_requests
WHERE date = '2026-02-22' 
  AND BotScore &lt; 20 
LIMIT 100</code></pre>
            <p><b>Step 2:</b> Copy the <code>RayID</code> and search <code>access_requests</code> to see which user account is associated with that suspicious bot activity.</p>
            <pre><code>
SELECT Email, IPAddress, Allowed
FROM access_requests
WHERE date = '2026-02-22' 
  AND RayID = 'INSERT_RAY_ID_HERE'</code></pre>
            
    <div>
      <h3>Post-phishing C2 beaconing</h3>
      <a href="#post-phishing-c2-beaconing">
        
      </a>
    </div>
    <p><b>Scenario:</b> An employee clicked a link in a phishing email, compromising their workstation. The workstation sends a DNS query for a known malicious domain, then immediately triggers an IDS alert.</p><p><b>Step 1:</b> Find phishing attacks by examining <code>email_security_alerts</code> for violations. </p>
            <pre><code>SELECT Timestamp, Threatcategories, To, Alertreason
FROM email_security_alerts
WHERE date = '2026-02-22' 
  AND Threatcategories LIKE '%phishing%'</code></pre>
            <p><b>Step 2:</b> Use Access logs to correlate the user’s email (To) to their IP Address.</p>
<pre><code>SELECT Email, IPAddress
FROM access_requests
WHERE date = '2026-02-22'
  AND Email = 'INSERT_EMAIL_FROM_PREVIOUS_QUERY'</code></pre>
            <p><b>Step 3:</b> Find internal IPs querying a specific malicious domain in <code>gateway_dns</code> logs.</p>
            <pre><code>
SELECT SrcIP, QueryName, DstIP
FROM gateway_dns
WHERE date = '2026-02-22' 
  AND SrcIP = 'INSERT_IP_FROM_PREVIOUS_QUERY'
  AND QueryName LIKE '%malicious_domain_name%'</code></pre>
            
    <div>
      <h3>Lateral movement (Access → network probing)</h3>
      <a href="#lateral-movement-access-network-probing">
        
      </a>
    </div>
    <p><b>Scenario:</b> A user logs in via Zero Trust and then tries to scan the internal network.</p><p><b>Step 1:</b> Find successful logins from unexpected locations in <code>access_requests</code>.</p>
            <pre><code>SELECT IPAddress, Email, Country
FROM access_requests
WHERE date = '2026-02-22' 
  AND Allowed = true 
  AND Country != 'US' -- Replace with your HQ country</code></pre>
            <p><b>Step 2:</b> Check if that <code>IPAddress</code> is triggering network-level signatures in <code>magic_ids_detections</code>.</p>
            <pre><code>SELECT SignatureMessage, DestinationIP, Protocol
FROM magic_ids_detections
WHERE date = '2026-02-22' 
  AND SourceIP = 'INSERT_IP_ADDRESS_HERE'</code></pre>
            
    <div>
      <h3>Opening doors for more data </h3>
      <a href="#opening-doors-for-more-data">
        
      </a>
    </div>
    <p>From the beginning, Log Explorer was designed with extensibility in mind. Every dataset schema is defined using JSON Schema, a widely adopted standard for describing the structure and types of JSON data. This design decision has enabled us to easily expand beyond HTTP Requests and Firewall Events to the full breadth of Cloudflare's telemetry. The same schema-driven approach that powered our initial datasets scaled naturally to accommodate Zero Trust logs, network analytics, email security alerts, and everything in between.</p><p>More importantly, this standardization opens the door to ingesting data beyond Cloudflare's native telemetry. Because our ingestion pipeline is schema-driven rather than hard-coded, we're positioned to accept any structured data that can be expressed in JSON format. For security teams managing hybrid environments, this means Log Explorer could eventually serve as a single pane of glass, correlating Cloudflare's edge telemetry with logs from third-party sources, all queryable through the same SQL interface. While today's release focuses on completing coverage of Cloudflare's product portfolio, the architectural groundwork is laid for a future where customers can bring their own data sources with custom schemas.</p>
    <div>
      <h3>Faster data, faster response: architectural upgrades</h3>
      <a href="#faster-data-faster-response-architectural-upgrades">
        
      </a>
    </div>
    <p>To investigate a multi-vector attack effectively, timing is everything. A delay of even a few minutes in the log availability can be the difference between proactive defense and reactive damage control.</p><p>That is why we have optimized our ingestion for better speed and resilience. By increasing concurrency in one part of our ingestion path, we have eliminated bottlenecks that could cause “noisy neighbor” issues, ensuring that one client’s data surge doesn’t slow down another’s visibility. This architectural work has reduced our P99 ingestion latency by approximately 55%, and our P50 by 25%, cutting the time it takes for an event at the edge to become available for your SQL queries.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41M2eWP0BwrQFSZW4GzZbV/7a6139354abb561aba17e77d83beb17a/image4.png" />
          </figure><p><sup><i>Grafana chart displaying the drop in ingest latency after architectural upgrades</i></sup></p>
    <div>
      <h2>Follow along for more updates</h2>
      <a href="#follow-along-for-more-updates">
        
      </a>
    </div>
    <p>We're just getting started. We're actively working on even more powerful features to further enhance your experience with Log Explorer, including the ability to run these detection queries on a custom defined schedule. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JIOu9PXDwVAVcmbgq456q/1eace4b5d38bb705e82442a4ee8045dc/Scheduled_Queries_List.png" />
          </figure><p><sup><i>Design mockup of upcoming Log Explorer Scheduled Queries feature</i></sup></p><p><a href="https://blog.cloudflare.com/"><u>Subscribe to the blog</u></a> and keep an eye out for more Log Explorer updates soon in our <a href="https://developers.cloudflare.com/changelog/product/log-explorer/"><u>Change Log</u></a>. </p>
    <div>
      <h2>Get access to Log Explorer</h2>
      <a href="#get-access-to-log-explorer">
        
      </a>
    </div>
    <p>To get access to Log Explorer, you can purchase it self-serve directly from the dashboard; contract customers can reach out for a <a href="https://www.cloudflare.com/application-services/products/log-explorer/"><u>consultation</u></a> or contact their account manager. Additionally, you can read more in our <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>Developer Documentation</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">1hirraqs3droftHovXp1G6</guid>
            <dc:creator>Jen Sells</dc:creator>
            <dc:creator>Claudio Jolowicz</dc:creator>
            <dc:creator>Nico Gutierrez</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare enables native monitoring and forensics with Log Explorer and custom dashboards]]></title>
            <link>https://blog.cloudflare.com/monitoring-and-forensics/</link>
            <pubDate>Tue, 18 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we are excited to announce support for Zero Trust datasets, and custom dashboards where customers can monitor critical metrics for suspicious or unusual activity.  ]]></description>
            <content:encoded><![CDATA[ <p>In 2024, we <a href="https://blog.cloudflare.com/log-explorer/"><u>announced Log Explorer</u></a>, giving customers the ability to store and query their HTTP and security event logs natively within the Cloudflare network. Today, we are excited to announce that Log Explorer now supports logs from our Zero Trust product suite. In addition, customers can create custom dashboards to monitor suspicious or unusual activity.</p><p>Every day, Cloudflare detects and protects customers against billions of threats, including DDoS attacks, bots, web application exploits, and more. SOC analysts, who are charged with keeping their companies safe from the growing spectre of Internet threats, may want to investigate these threats to gain additional insights on attacker behavior and protect against future attacks. Log Explorer, by collecting logs from various Cloudflare products, provides a single starting point for investigations. As a result, analysts can avoid forwarding logs to other tools, maximizing productivity and minimizing costs. Further, analysts can monitor signals specific to their organizations using custom dashboards.</p>
    <div>
      <h2>Zero Trust dataset support in Log Explorer</h2>
      <a href="#zero-trust-dataset-support-in-log-explorer">
        
      </a>
    </div>
    <p>Log Explorer stores your Cloudflare logs for a 30-day retention period so that you can analyze them natively and in a single interface within the Cloudflare Dashboard. Cloudflare log data is diverse, reflecting the breadth of capabilities available. For example, HTTP requests contain information about the client such as their IP address, request method, <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>, request paths, and TLS versions used. Additionally, Cloudflare’s Application Security <a href="https://developers.cloudflare.com/waf/detections/"><u>WAF Detections</u></a> enrich these HTTP request logs with additional context, such as the <a href="https://developers.cloudflare.com/waf/detections/attack-score/"><u>WAF attack score</u></a>, to identify threats.</p><p>Today we are announcing that seven additional Cloudflare product datasets are now available in Log Explorer. These seven datasets are the logs generated from our Zero Trust product suite, and include logs from <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/access_requests/"><u>Access</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/"><u>Gateway DNS</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/"><u>Gateway HTTP</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/"><u>Gateway Network</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/casb_findings/"><u>CASB</u></a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/zero_trust_network_sessions/"><u>Zero Trust Network Session</u></a>, and <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/device_posture_results/"><u>Device Posture Results</u></a>. Read on for examples of how to use these logs to identify common threats.</p>
    <div>
      <h3>Investigating unauthorized access</h3>
      <a href="#investigating-unauthorized-access">
        
      </a>
    </div>
    <p>By reviewing Access logs and HTTP request logs, we can reveal attempts to access resources or systems without proper permissions, including brute force password attacks, indicating potential security breaches or malicious activity.</p><p>Below, we filter Access Logs on the <code>Allowed</code> field, to see activity related to unauthorized access.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2piOIdnNz9OWskJqrZJfcf/f88673fc184c23de493920661020e7b3/access_requests.png" />
          </figure><p>By then reviewing the HTTP logs for the requests identified in the previous query, we can assess if bot networks are the source of unauthorized activity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4b38nYNdpLbmHFt0BHkapa/88e1acf82d8bbc257a7cbbe102cbd723/http_requests.png" />
          </figure><p>With this information, you can craft targeted <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>Custom Rules</u></a> to block the offending traffic. </p>
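    <p>For readers who prefer SQL over the dashboard filters shown above, the same check on the <code>Allowed</code> field can be sketched as a Log Explorer query (the date is illustrative):</p>
            <pre><code>SELECT Email, IPAddress, Country
FROM access_requests
WHERE date = '2025-03-18'
  AND Allowed = false
LIMIT 100</code></pre>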
    <div>
      <h3>Detecting malware</h3>
      <a href="#detecting-malware">
        
      </a>
    </div>
    <p>Cloudflare's <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Web Gateway</u></a> can track which websites users are accessing, allowing administrators to identify and block access to malicious or inappropriate sites. These logs can be used to detect if a user’s machine or account is compromised by malware attacks. When reviewing logs, this may become apparent when we look for records that show a rapid succession of attempts to browse known malicious sites, such as hostnames that have long strings of seemingly random characters that hide their true destination. In this example, we can query logs looking for requests to a spoofed YouTube URL.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Nkm4udjUw9tmzPk0Fk1eK/524dc1a6d4070a1f6cc9478e09b67ffd/gateway_requests.png" />
          </figure>
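    <p>As a hedged SQL sketch of that hunt against the <code>gateway_http</code> dataset (both the <code>URL</code> field name and the spoofed domain below are illustrative assumptions; check the schema for your account):</p>
            <pre><code>SELECT SrcIP, URL
FROM gateway_http
WHERE date = '2025-03-18'
  AND URL LIKE '%y0utube%'
LIMIT 100</code></pre>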
    <div>
      <h2>Monitoring what matters using custom dashboards</h2>
      <a href="#monitoring-what-matters-using-custom-dashboards">
        
      </a>
    </div>
    <p>Security monitoring is not one size fits all. For instance, companies in the retail or financial industries worry about fraud, while every company is concerned about exfiltration of sensitive data like trade secrets. And any form of personally identifiable information (PII) is a target for data breaches or ransomware attacks.</p><p>While log exploration helps you react to threats, our new custom dashboards allow you to define the specific metrics you need in order to monitor threats you are concerned about. </p><p>Getting started is easy, with the ability to create a chart using natural language. A natural language interface is integrated into the chart create/edit experience, enabling you to describe in your own words the chart you want to create. Similar to the <a href="https://blog.cloudflare.com/security-analytics-ai-assistant/"><u>AI Assistant</u></a> we announced during Security Week 2024, the prompt translates your language to the appropriate chart configuration, which can then be added to a new or existing custom dashboard.</p><ul><li><p><b>Use a prompt</b>: Enter a query like “Compare status code ranges over time”. The AI model decides the most appropriate visualization and constructs your chart configuration.</p></li><li><p><b>Customize your chart</b>: Select the chart elements manually, including the chart type, title, dataset to query, metrics, and filters. This option gives you full control over your chart’s structure. </p></li></ul><div>
  
</div>
<br /><p><sup><i>Video shows entering a natural language description of desired metric “compare status code ranges over time”, preview chart shown is a time series grouped by error code ranges, selects “add chart” to save to dashboard.</i></sup></p><p>For more help getting started, we have some pre-built templates that you can use for monitoring specific uses. Available templates currently include: </p><ul><li><p><b>Bot monitoring</b>: Identify automated traffic accessing your website</p></li><li><p><b>API Security:</b> Monitor the data transfer and exceptions of API endpoints within your application</p></li><li><p><b>API Performance</b>: See timing data for API endpoints in your application, along with error rates</p></li><li><p><b>Account Takeover:</b> View login attempts, usage of leaked credentials, and identify account takeover attacks</p></li><li><p><b>Performance Monitoring</b>: Identify slow hosts and paths on your origin server, and view <a href="https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/"><u>time to first byte (TTFB)</u></a> metrics over time</p></li></ul><p>Templates provide a good starting point, and once you create your dashboard, you can add or remove individual charts using the same natural language chart creator. </p><div>
  
</div>
<br /><p><sup><i>The video shows editing a chart on an existing dashboard and rearranging individual charts via drag and drop.</i></sup></p>
    <div>
      <h3>Example use cases</h3>
      <a href="#example-use-cases">
        
      </a>
    </div>
    <p>Custom dashboards can be used to monitor for suspicious activity, or to keep an eye on performance and errors for your domains. Let’s explore some examples of suspicious activity that we can monitor using custom dashboards.</p><p>Take, for example, our use case from above: investigating unauthorized access. With custom dashboards, you can create a dashboard using the <b>Account takeover</b> template to monitor for suspicious login activity related to your domain.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72KBaEdr0bEn4SNwKOfPfJ/e28997b94630cf856d3924e9ba443063/image7.png" />
          </figure><p>As another example, spikes in requests or errors are common indicators that something is wrong, and they can sometimes be signals of suspicious activity. With the Performance Monitoring template, you can view origin response time and time to first byte metrics as well as monitor for common errors. For example, in this chart, the spikes in 404 errors could be an indication of an unauthorized scan of your endpoints.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3krBxVm8dB5pr5XEoHnVtK/44f682436c3d5a63baa1105987347433/image1.jpg" />
          </figure>
    <div>
      <h3>Seamlessly integrated into the Cloudflare platform</h3>
      <a href="#seamlessly-integrated-into-the-cloudflare-platform">
        
      </a>
    </div>
    <p>When using custom dashboards, if you observe a traffic pattern or spike in errors that you would like to investigate, you can click the “View in Security Analytics” button to drill down further into the data and craft custom WAF rules to mitigate the threat.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XfvQ24bvDmnNKeInyA8eU/e96798a72e55fa454439f8b85197e02b/image2.png" />
          </figure><p>These tools, seamlessly integrated into the Cloudflare platform, enable users to discover, investigate, and mitigate threats all in one place, reducing time to resolution and overall cost of ownership by eliminating the need to forward logs to third-party security analysis tools. And because it is a native part of Cloudflare, you can immediately use the data from your investigation to craft targeted rules that will block these threats.</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Stay tuned as we continue to develop more capabilities in the areas of <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability and forensics</a>, with additional features including:</p><ul><li><p><b>Custom alerts</b>: Create alerts based on specific metrics or anomalies</p></li><li><p><b>Scheduled query detections</b>: Craft log queries and run them on a schedule to detect malicious activity</p></li><li><p><b>More integration</b>: Further streamline the journey from detection to investigation to mitigation across the full Cloudflare platform</p></li></ul>
    <div>
      <h2>How to get it</h2>
      <a href="#how-to-get-it">
        
      </a>
    </div>
    <p>Current Log Explorer beta users have immediate access to the new custom dashboards feature. Pricing will be announced, and these features made available to everyone, during Q2 2025. Until then, they remain available at no cost.</p><p>Let us know if you are interested in joining our Beta program by completing <a href="https://www.cloudflare.com/lp/log-explorer/"><u>this form</u></a>, and a member of our team will contact you.</p>
 ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">76XBFojN0mhfyCoz6VRe1G</guid>
            <dc:creator>Jen Sells</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare incident on November 14, 2024, resulting in lost logs]]></title>
            <link>https://blog.cloudflare.com/cloudflare-incident-on-november-14-2024-resulting-in-lost-logs/</link>
            <pubDate>Tue, 26 Nov 2024 16:00:00 GMT</pubDate>
            <description><![CDATA[ On November 14, 2024, Cloudflare experienced a Cloudflare Logs outage, impacting the majority of customers using these products. During the ~3.5 hours that these services were impacted, about 55% of the logs we normally send to customers were not sent and were lost. The details of what went wrong and why are interesting both for customers and practitioners. ]]></description>
            <content:encoded><![CDATA[ <p>On November 14, 2024, Cloudflare experienced an incident which impacted the majority of customers using <a href="https://developers.cloudflare.com/logs"><u>Cloudflare Logs</u></a>. During the roughly 3.5 hours that these services were impacted, about 55% of the logs we normally send to customers were not sent and were lost. We’re very sorry this happened, and we are working to ensure that a similar issue doesn't happen again.</p><p>This blog post explains what happened and what we’re doing to prevent recurrences. Also, the systems involved and the particular class of failure we experienced will hopefully be of interest to engineering teams beyond those specifically using these products.</p><p>Failures within systems at scale are inevitable, and it’s essential that subsystems protect themselves from failures in other parts of the larger system to prevent cascades. In this case, a misconfiguration in one part of the system caused a cascading overload in another part of the system, which was itself misconfigured. Had it been properly configured, it could have prevented the loss of logs.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare’s network is a globally distributed system enabling and supporting a wide variety of services. Every part of this system generates event logs that contain detailed metadata about what’s happening with our systems around the world. For example, an event log is generated for every request to Cloudflare’s CDN. <a href="https://developers.cloudflare.com/logs"><u>Cloudflare Logs</u></a> makes these event logs available to customers, who use them in a number of ways, including compliance, observability, and accounting.</p><p>On a typical day, Cloudflare sends about 4.5 trillion individual event logs to customers. Although this represents less than 10% of the over 50 trillion total customer event logs processed, it presents unique challenges of scale when building a reliable and fault-tolerant system.</p>
    <div>
      <h2>System architecture</h2>
      <a href="#system-architecture">
        
      </a>
    </div>
    <p>Cloudflare’s network is composed of tens of thousands of individual servers, network hardware components, and specialized software programs located in over 330 cities around the world. Although Cloudflare’s <a href="https://developers.cloudflare.com/logs/edge-log-delivery/"><u>Edge Log Delivery</u></a> product can send customers their event logs directly from each server, most customers opt not to do this because doing so would create significant complication and cost at the receiving end.</p><p>By analogy, imagine the postal service ringing your doorbell once for each letter instead of once for each bundle of letters. With thousands or millions of letters each second, the number of separate transactions that would entail becomes prohibitive.</p><p>Fortunately, we also offer <a href="https://developers.cloudflare.com/logs/about/"><u>Logpush</u></a>, which collects and pushes logs to customers in more predictable file sizes and which scales automatically with usage. To provide this feature, several services work together to collect and push the logs, as illustrated in the diagram below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pRdzUUrMbwG3AWsPncB3w/75146d1e379ccb126d8cd0210a6c12b8/image2.png" />
          </figure>
    <div>
      <h3>Logfwdr</h3>
      <a href="#logfwdr">
        
      </a>
    </div>
    <p>Logfwdr is an internal service written in Golang that accepts event logs from internal services running across Cloudflare’s global network and forwards them in batches to a service called Logreceiver. Logfwdr handles many different types of event logs, and one of its responsibilities is to determine which event logs should be forwarded and where they should be sent based on the type of event log, which customers it represents, and associated rules about where it should be processed. Configuration is provided to Logfwdr to enable it to make these determinations.</p>
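<p>Logfwdr’s forwarding decision can be pictured as a lookup against that configuration. The following Go sketch uses illustrative types and names, not Logfwdr’s actual code:</p>

```go
package main

import "fmt"

// config maps an event type to per-customer forwarding destinations.
// A missing entry means "do not forward". Illustrative only.
type config map[string]map[string]string

// route returns where an event log should be sent, and whether it
// should be forwarded at all, based on the provided configuration.
func route(cfg config, eventType, customer string) (string, bool) {
	dests, ok := cfg[eventType]
	if !ok {
		return "", false
	}
	dest, ok := dests[customer]
	return dest, ok
}

func main() {
	cfg := config{
		"http_request": {"cust-a": "logreceiver-us"},
	}
	if dest, ok := route(cfg, "http_request", "cust-a"); ok {
		fmt.Println("forward to", dest)
	}
	if _, ok := route(cfg, "http_request", "cust-b"); !ok {
		fmt.Println("cust-b has no Logpush job; skip")
	}
}
```

In a sketch like this, the correctness of the whole pipeline rests on the configuration being accurate and present, which is exactly the dependency that mattered in the incident described below.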
    <div>
      <h3>Logreceiver</h3>
      <a href="#logreceiver">
        
      </a>
    </div>
    <p>Logreceiver (also written in Golang) accepts the batches of logs from across Cloudflare’s global network and further sorts them depending on the type of event and its purpose. For Cloudflare Logs, Logreceiver demultiplexes the batches into per-customer batches and forwards them to be buffered by Buftee. Currently, Logreceiver is handling about 45 PB (uncompressed) of customer event logs each day.</p>
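<p>The demultiplexing step amounts to regrouping a mixed batch by customer. A toy Go sketch (the <code>Event</code> shape and field names are illustrative, not Logreceiver’s actual types):</p>

```go
package main

import "fmt"

// Event is an illustrative stand-in for a single customer event log.
type Event struct {
	CustomerID string
	Payload    string
}

// demux splits a mixed batch of events into per-customer batches,
// mirroring how Logreceiver regroups traffic before it is buffered.
func demux(batch []Event) map[string][]Event {
	out := make(map[string][]Event)
	for _, e := range batch {
		out[e.CustomerID] = append(out[e.CustomerID], e)
	}
	return out
}

func main() {
	batch := []Event{
		{"cust-a", "GET /"},
		{"cust-b", "GET /login"},
		{"cust-a", "POST /api"},
	}
	perCustomer := demux(batch)
	fmt.Println(len(perCustomer["cust-a"]), len(perCustomer["cust-b"])) // 2 1
}
```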
    <div>
      <h3>Buftee</h3>
      <a href="#buftee">
        
      </a>
    </div>
    <p>It’s common for data pipelines to include a buffer. Producers and consumers of the data might be operating at different cadences, and parts of the pipeline will experience variances in how quickly they can process information. Using a <a href="https://en.wikipedia.org/wiki/Data_buffer"><u>buffer</u></a> makes it easier to manage these situations, and helps to prevent data loss if downstream consumers are broken. It’s also convenient to have a buffer that supports multiple downstream consumers with different cadences (<a href="https://en.wikipedia.org/wiki/Tee_(command)"><u>like the pipe-fitting function of a tee</u></a>).</p><p>At Cloudflare, we use an internal system called Buftee (written in Golang) to support this combined function. Buftee is a highly distributed system that supports a large number of named “buffers”. It supports operating on named “prefixes” (collections of buffers) as well as multiple representations/resolutions of the same time-indexed dataset. Using Buftee makes it possible for Cloudflare to handle extremely high throughput very efficiently.</p><p>For Cloudflare Logs, Buftee provides buffers for each Logpush job, containing 100% of the logs generated by the zone or account referenced by each job. This means that failure to process one customer’s job will not affect progress on another customer’s job. Handling buffers in this way avoids “head-of-line” blocking and also enables us to encrypt and delete each customer’s data separately if needed.</p><p>Buftee typically handles over 1 million buffers globally. The following is a snapshot of the number of buffers managed by Buftee servers in the period just prior to the incident.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5g4Ev1CX8pGqtpNUY3vZOo/50d8dfa7a1e9c492a537e4822d801625/image5.png" />
          </figure>
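<p>The tee behavior described above can be sketched as an append-only buffer with an independent read offset per consumer, so a slow consumer never holds back a fast one. This toy Go example is only an illustration of the semantics; the real Buftee is a distributed system:</p>

```go
package main

import "fmt"

// Buffer is an illustrative append-only buffer that, like a tee,
// lets multiple consumers read the same stream at their own pace.
type Buffer struct {
	entries []string
	offsets map[string]int // independent read position per consumer
}

func NewBuffer() *Buffer {
	return &Buffer{offsets: make(map[string]int)}
}

func (b *Buffer) Append(entry string) { b.entries = append(b.entries, entry) }

// Read returns all entries the named consumer has not yet seen and
// advances only that consumer's offset; other consumers are unaffected.
func (b *Buffer) Read(consumer string) []string {
	start := b.offsets[consumer]
	b.offsets[consumer] = len(b.entries)
	return b.entries[start:]
}

func main() {
	buf := NewBuffer()
	buf.Append("log1")
	buf.Append("log2")
	fmt.Println(buf.Read("fast-consumer")) // drains both entries
	buf.Append("log3")
	fmt.Println(buf.Read("slow-consumer")) // still sees all three
}
```

Note that even this toy version carries per-consumer bookkeeping for every buffer, which hints at why a sudden 40x increase in buffer count is expensive.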
    <div>
      <h3>Logpush</h3>
      <a href="#logpush">
        
      </a>
    </div>
    <p>Logpush is a Golang service which reads logs from Buftee buffers and pushes the results in batches to various destinations configured by customers. A batch could end up, for example, as a file in R2. Each job has a unique configuration, and only jobs that are active and configured will be pushed. Currently, we push over 600 million such batches each day.</p>
    <div>
      <h2>What happened</h2>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>On November 14, 2024, we made a change to support an additional <a href="https://developers.cloudflare.com/logs/reference/log-fields/#datasets"><u>dataset</u></a> for Logpush. This required adding a new configuration to be provided to Logfwdr so that it would know which customers’ logs to forward for this new stream. Every few minutes, a separate system re-generates the configuration used by Logfwdr to decide which logs need to be forwarded. A bug in this system resulted in a blank configuration being provided to Logfwdr.</p><p>This bug essentially informed Logfwdr that <b>no customers</b> had logs configured to be pushed. The team quickly noticed the mistake and reverted the change in under five minutes.</p><p>Unfortunately, this first mistake triggered a second, latent bug in Logfwdr itself. A failsafe introduced in the early days of this feature, when traffic was much lower, was configured to “fail open”. This failsafe was designed to protect against situations in which this specific Logfwdr configuration was unavailable (as in this case) by transmitting events for <b>all customers</b> instead of just those who had configured a Logpush job. This was intended to prevent the loss of logs, at the expense of sending more logs than strictly necessary when individual hosts could not fetch the configuration due to intermittent networking errors, for example.</p><p>When this failsafe was first introduced, the potential list of customers was far smaller than it is today. Now, even a window of less than five minutes was enough to produce a massive spike in the number of customers whose logs were sent by Logfwdr.</p><p>Even given this massive overload, our systems would have continued to send logs if not for one additional problem. Remember that Buftee creates a separate buffer for each customer with logs to be pushed. When Logfwdr began to send event logs for all customers, Buftee began to create buffers for each one as those logs arrived, and each buffer requires resources as well as the bookkeeping to maintain it. This massive increase, roughly 40 times the usual number of buffers, was not something we had provisioned Buftee clusters to handle. In the lead-up to impact, Buftee was managing 40 million buffers globally, as shown in the figure below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HPwhkvRxiAQtjVbdVxRUN/a1d4aa174961f6a163e884011c8c18ad/image3.png" />
          </figure><p>A temporary misconfiguration lasting just five minutes created a massive overload that took us several hours to fix and recover from. Because our backstops were not properly configured, the underlying systems became so overloaded that we could not interact with them normally. A full reset and restart was required.</p>
    <div>
      <h2>Root causes</h2>
      <a href="#root-causes">
        
      </a>
    </div>
    <p>The bug in the Logfwdr configuration system was easy to fix, but it’s the type of bug that was likely to happen at some point. We had planned for it by designing the original “fail open” behavior. However, we neglected to regularly test that the broader system was capable of handling a fail-open event.</p><p>The bigger failure was that Buftee became unresponsive. Buftee’s purpose is to be a safeguard against bugs like this one. A huge increase in the number of buffers is a failure mode that we had predicted, and we had put mechanisms in place in Buftee to prevent this failure from cascading. Our failure in this case was that we had not configured these mechanisms. Had they been configured correctly, Buftee would not have been overwhelmed.</p><p>It’s like having a seatbelt in a car, yet not fastening it. The seatbelt is there to protect you in case of an accident, but if you don’t actually buckle it up, it’s not going to do its job when you need it. Similarly, while we had the safeguard of Buftee in place, we hadn’t “buckled it up” by configuring the necessary settings. We’re very sorry this happened and are taking steps to prevent a recurrence as described below.</p>
    <div>
      <h2>Going forward</h2>
      <a href="#going-forward">
        
      </a>
    </div>
    <p>We’re creating alerts to ensure that these particular misconfigurations will be impossible to miss, and we are also addressing the specific bug that triggered this incident and adding the associated tests.</p><p>Just as importantly, we accept that mistakes and misconfigurations are inevitable. All our systems at Cloudflare need to respond to these predictably and gracefully. Currently, we conduct regular “cut tests” to ensure that these systems will cope with the loss of a datacenter or a network failure. In the future, we’ll also conduct regular “overload tests” to simulate the kind of cascade that happened in this incident, to ensure that our production systems will handle them gracefully.</p><p>Logpush is a robust and flexible platform for customers who need to integrate their own logging and monitoring systems with Cloudflare. Different Logpush jobs can be deployed to support multiple destinations or, with filtering, multiple subsets of logs.</p>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Log Push]]></category>
            <guid isPermaLink="false">3SNSdDbbVziSrdGxDq4AzH</guid>
            <dc:creator>Jamie Herre</dc:creator>
            <dc:creator>Tom Walwyn</dc:creator>
            <dc:creator>Christian Endres</dc:creator>
            <dc:creator>Gabriele Viglianisi</dc:creator>
            <dc:creator>Mik Kocikowski</dc:creator>
            <dc:creator>Rian van der Merwe</dc:creator>
        </item>
        <item>
            <title><![CDATA[Log Explorer: monitor security events without third-party storage]]></title>
            <link>https://blog.cloudflare.com/log-explorer/</link>
            <pubDate>Fri, 08 Mar 2024 14:05:00 GMT</pubDate>
            <description><![CDATA[ With the combined power of Security Analytics + Log Explorer, security teams can analyze, investigate, and monitor for security attacks natively within Cloudflare, reducing time to resolution and overall cost of ownership for customers by eliminating the need to forward logs to third-party SIEMs ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GhVBYNZAsGZtOfgo8C3VY/42fc180d060574162071cbdd13ad6a88/image6-6.png" />
            
            </figure><p>Today, we are excited to announce beta availability of <a href="https://developers.cloudflare.com/logs/log-explorer/">Log Explorer</a>, which allows you to investigate your HTTP and Security Event logs directly from the Cloudflare Dashboard. Log Explorer is an extension of <a href="/security-analytics">Security Analytics</a>, giving you the ability to review related raw logs. You can analyze, investigate, and monitor for security attacks natively within the Cloudflare Dashboard, reducing time to resolution and overall cost of ownership by eliminating the need to forward logs to third party security analysis tools.</p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>Security Analytics enables you to analyze all of your HTTP traffic in one place, giving you the security lens you need to identify and act upon what matters most: potentially malicious traffic that has not been mitigated. Security Analytics includes built-in views such as top statistics and in-context quick filters on an intuitive page layout that enables rapid exploration and validation.</p><p>In order to power our rich analytics dashboards with fast query performance, we implemented <a href="https://developers.cloudflare.com/analytics/graphql-api/sampling/">data sampling</a> using <a href="/explaining-cloudflares-abr-analytics">Adaptive Bit Rate</a> (ABR) analytics. This is a great fit for providing high-level aggregate views of the data. However, we received feedback from many Security Analytics power users that sometimes they need access to a more granular view of the data — they need logs.</p><p>Logs provide critical visibility into the operations of today's computer systems. Engineers and SOC analysts rely on logs every day to troubleshoot issues, identify and investigate security incidents, and tune the performance, reliability, and <a href="https://www.cloudflare.com/application-services/solutions/">security</a> of their applications and infrastructure. Traditional metrics or monitoring solutions provide aggregated or statistical data that can be used to identify trends. Metrics are wonderful at identifying <i>that</i> an issue happened, but lack the detailed events to help engineers uncover <i>why</i> it happened. Engineers and SOC analysts rely on raw log data to answer questions such as:</p><ul><li><p>What is causing this increase in 403 errors?</p></li><li><p>What data was accessed by this IP address?</p></li><li><p>What was the user experience of this particular user’s session?</p></li></ul><p>Traditionally, these engineers and analysts would stand up a collection of monitoring tools to capture logs and gain this visibility. 
With more organizations using multiple clouds, or a hybrid environment with both cloud and on-premises tools and architecture, it is crucial to have a unified platform to regain visibility into this increasingly complex environment. As more and more companies move towards a cloud-native architecture, we see Cloudflare’s <a href="https://www.cloudflare.com/en-gb/learning/cloud/what-is-a-connectivity-cloud/">connectivity cloud</a> as an integral part of their performance and security strategy.</p><p>Log Explorer provides a lower-cost option for storing and exploring log data within Cloudflare. Until today, we have offered the ability to export logs to expensive third-party tools; now, with Log Explorer, you can quickly and easily explore your log data without leaving the Cloudflare Dashboard.</p>
    <div>
      <h3>Log Explorer Features</h3>
      <a href="#log-explorer-features">
        
      </a>
    </div>
    <p>Whether you're a SOC Engineer investigating potential incidents, or a Compliance Officer with specific log retention requirements, Log Explorer has you covered. It stores your Cloudflare logs for an uncapped and customizable period of time, making them accessible natively within the Cloudflare Dashboard. The supported features include:</p><ul><li><p>Searching through your HTTP Request or Security Event logs</p></li><li><p>Filtering based on any field and a number of standard operators</p></li><li><p>Switching between basic filter mode or SQL query interface</p></li><li><p>Selecting fields to display</p></li><li><p>Viewing log events in tabular format</p></li><li><p>Finding the HTTP request records associated with a Ray ID</p></li></ul>
    <div>
      <h3>Narrow in on unmitigated traffic</h3>
      <a href="#narrow-in-on-unmitigated-traffic">
        
      </a>
    </div>
    <p>As a SOC analyst, your job is to monitor and respond to threats and incidents within your organization’s network. Using Security Analytics, and now with Log Explorer, you can identify anomalies and conduct a forensic investigation all in one place.</p><p>Let’s walk through an example to see this in action:</p><p>On the Security Analytics dashboard, you can see in the Insights panel that there is some traffic that has been tagged as a likely attack, but not mitigated.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Oq3oqY8JXMigK8OJKPFZ4/5d3a8751a56f06f58e96538f1d46a480/Screenshot-2024-03-07-at-20.20.41.png" />
            
            </figure><p>Clicking the filter button narrows in on these requests for further investigation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7sWkjUYz1J0So4nphy4FSs/769d5ebb0b706a073a616b706783030c/image11.jpg" />
            
            </figure><p>In the sampled logs view, you can see that most of these requests are coming from a common client IP address.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gtrP14GKbnB0YeV0ySgL1/6b476ec9d19e255912315eac9730604d/Sampled-logs.png" />
            
            </figure><p>You can also see that Cloudflare has flagged all of these requests as bot traffic. With this information, you can craft a WAF rule to either block all traffic from this IP address, or block all traffic with a bot score lower than 10.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YgFTRD7u3KYh0bbInLylK/f076a70bc09f41d8ace0569bca172b39/Screenshot-2024-03-07-at-20.22.04.png" />
            
            </figure><p>Let’s say that the Compliance Team would like to gather documentation on the scope and impact of this attack. We can dig further into the logs during this time period to see everything that this attacker attempted to access.</p><p>First, we can use Log Explorer to query HTTP requests from the suspect IP address during the time range of the spike seen in Security Analytics.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qsi7UnxjtygCQHnMCIx02/cda0aacf6d783b05c15196f27907c611/Log-Explorer.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YyewPADkPzjofXHXSihyi/e494ca5d4b9a3ad7071d5d3e27f57887/Query-results.png" />
            
            </figure><p>We can also review whether the attacker was able to <a href="https://www.cloudflare.com/learning/security/what-is-data-exfiltration/">exfiltrate</a> data by adding the OriginResponseBytes field and updating the query to show requests with OriginResponseBytes &gt; 0. The results show that no data was exfiltrated.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3TgN2aPwWTDo5niA95TGQx/71143be92550dfea1ce284507fa688ac/No-logs-found.png" />
            
            </figure>
    <div>
      <h3>Find and investigate false positives</h3>
      <a href="#find-and-investigate-false-positives">
        
      </a>
    </div>
    <p>With access to the full logs via Log Explorer, you can now perform a search to find specific requests.</p><p>A 403 error occurs when a user’s request to a particular site is blocked. Cloudflare’s security products use things like <a href="/introducing-ip-lists/">IP reputation</a> and <a href="/stop-attacks-before-they-are-known-making-the-cloudflare-waf-smarter/">WAF attack scores</a> based on ML technologies in order to assess whether a given HTTP request is malicious. This is extremely effective, but sometimes requests are mistakenly flagged as malicious and blocked.</p><p>In these situations, we can now use Log Explorer to identify these requests and why they were blocked, and then adjust the relevant WAF rules accordingly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ecKfF4lRDQSyLCoOgDjTw/6f0872962caff8d719958e6fdfcc8dbc/Log-Explorer-2.png" />
            
            </figure><p>Or, if you are interested in tracking down a specific request by Ray ID, an identifier given to every request that goes through Cloudflare, you can do that via Log Explorer with one query.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NCDOP0axFC3wZ4qs03Jei/86286da2fd9fca3cdb68214c1d0f472a/Log-Explorer-3.png" />
            
            </figure><p>Note that the LIMIT clause is included in the query by default, but it has no impact on Ray ID queries: each Ray ID is unique, so at most one record is returned when filtering on the RayID field.</p>
    <div>
      <h3>How we built Log Explorer</h3>
      <a href="#how-we-built-log-explorer">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3pNBd5iSyVW7aqfJ0JfYDP/89cd0341cc35ad9d86fd8687ba7a9147/How-we-built-Log-Explorer.png" />
            
            </figure><p>With Log Explorer, we have built a long-term, append-only log storage platform on top of <a href="https://www.cloudflare.com/developer-platform/r2/">Cloudflare R2</a>. Log Explorer leverages the <a href="https://databricks.com/wp-content/uploads/2020/08/p975-armbrust.pdf">Delta Lake</a> protocol, an open-source storage framework for building highly performant, <a href="https://en.wikipedia.org/wiki/ACID">ACID</a>-compliant databases atop a cloud object store. In other words, Log Explorer combines a large and cost-effective storage system – <a href="https://www.cloudflare.com/developer-platform/r2/">Cloudflare R2</a> – with the benefits of strong consistency and high performance. Additionally, Log Explorer gives you a SQL interface to your Cloudflare logs.</p><p>Each Log Explorer dataset is stored on a per-customer level, just like Cloudflare D1, so that your data isn't placed with that of other customers. In the future, this single-tenant storage model will give you the flexibility to create your own retention policies and decide in which regions you want to store your data.</p><p>Under the hood, the datasets for each customer are stored as Delta tables in R2 buckets. A <i>Delta table</i> is a storage format that organizes Apache Parquet objects into directories using Hive's partition naming convention. Crucially, Delta tables pair these storage objects with an append-only, checkpointed transaction log. This design allows Log Explorer to support multiple writers with optimistic concurrency.</p><p>Many of the products Cloudflare builds are a direct result of the challenges our own team is looking to address. Log Explorer is a perfect example of this <a href="/tag/dogfooding">culture of dogfooding</a>. Optimistic concurrent writes require atomic updates in the underlying object store, and as a result of our needs, R2 added a PutIfAbsent operation with strong consistency. Thanks, R2! 
The atomic operation sets Log Explorer apart from Delta Lake solutions based on Amazon Web Services’ S3, which incur the operational burden of using an <a href="https://delta.io/blog/2022-05-18-multi-cluster-writes-to-delta-lake-storage-in-s3/">external store</a> for synchronizing writes.</p><p>Log Explorer is written in the Rust programming language using open-source libraries, such as <a href="https://github.com/delta-io/delta-rs">delta-rs</a>, a native Rust implementation of the Delta Lake protocol, and <a href="https://arrow.apache.org/datafusion/">Apache Arrow DataFusion</a>, a very fast, extensible query engine. At Cloudflare, Rust has emerged as a popular choice for new product development due to its safety and performance benefits.</p>
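<p>The optimistic-concurrency commit can be sketched as a retry loop over a conditional write. This toy Go example emulates the semantics with an in-memory store (Log Explorer itself is written in Rust against R2); the key layout follows Delta Lake’s zero-padded <code>_delta_log</code> version naming, and everything else is illustrative:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// store emulates an object store offering a put-if-absent primitive:
// the write succeeds only if no object exists at the key yet.
type store struct {
	mu   sync.Mutex
	data map[string]string
}

func (s *store) putIfAbsent(key, val string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, exists := s.data[key]; exists {
		return false
	}
	s.data[key] = val
	return true
}

// commit appends a transaction-log entry with optimistic concurrency:
// try to claim the next log version; on conflict, advance and retry.
func commit(s *store, version int, entry string) int {
	for {
		key := fmt.Sprintf("_delta_log/%020d.json", version)
		if s.putIfAbsent(key, entry) {
			return version // this writer won this version
		}
		version++ // another writer committed first; retry at next version
	}
}

func main() {
	s := &store{data: make(map[string]string)}
	v1 := commit(s, 0, `{"add": "part-0001.parquet"}`)
	v2 := commit(s, 0, `{"add": "part-0002.parquet"}`) // stale view, retries
	fmt.Println(v1, v2) // 0 1
}
```

Because the conditional write is atomic and strongly consistent, two writers can never both claim the same log version, which is what removes the need for an external synchronization store.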
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We know that application security logs are only part of the puzzle in understanding what’s going on in your environment. Stay tuned for future developments including tighter, more seamless integration between Analytics and Log Explorer, the addition of more datasets including Zero Trust logs, the ability to define custom retention periods, and integrated custom alerting.</p><p>Please use the <a href="https://forms.gle/tvKQDdXmCk98zyV9A">feedback link</a> to let us know how Log Explorer is working for you and what else would help make your job easier.</p>
    <div>
      <h3>How to get it</h3>
      <a href="#how-to-get-it">
        
      </a>
    </div>
    <p>We’d love to hear from you! Let us know if you are interested in joining our Beta program by completing <a href="https://cloudflare.com/lp/log-explorer/">this form</a> and a member of our team will contact you.</p><p>Pricing will be finalized prior to a General Availability (GA) launch.</p><div>
  
</div><p>Tune in for more news, announcements and thought-provoking discussions! Don't miss the full <a href="https://cloudflare.tv/shows/security-week">Security Week hub page</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">3K5UjFarMC09kkM507HshK</guid>
            <dc:creator>Jen Sells</dc:creator>
            <dc:creator>Claudio Jolowicz</dc:creator>
            <dc:creator>Cole MacKenzie</dc:creator>
        </item>
        <item>
            <title><![CDATA[Enhancing security analysis with Cloudflare Zero Trust logs and Elastic SIEM]]></title>
            <link>https://blog.cloudflare.com/enhancing-security-analysis-with-cloudflare-zero-trust-logs-and-elastic-siem/</link>
            <pubDate>Thu, 22 Feb 2024 14:00:26 GMT</pubDate>
            <description><![CDATA[ Today, we are thrilled to announce new Cloudflare Zero Trust dashboards on Elastic. Shared customers using Elastic can now use these pre-built dashboards to store, search, and analyze their Zero Trust logs ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/696ov5uPtgNwN7Qm735ESm/6f88ef27e4cacb8057d6e600fd20d378/image3-7.png" />
            
            </figure><p>Today, we are thrilled to announce new Cloudflare Zero Trust dashboards on Elastic. Shared customers using Elastic can now use these pre-built <a href="https://docs.elastic.co/integrations/cloudflare_logpush#zero-trust-events">dashboards to store, search, and analyze</a> their Zero Trust logs.</p><p>When organizations look to adopt a <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust architecture</a>, there are many components to get right. If products are configured incorrectly, used maliciously, or breached during the rollout, your organization can be exposed to underlying security risks, and without the ability to get insight from your data quickly and efficiently, those risks can go unnoticed.</p><p>As a Cloudflare technology partner, Elastic helps Cloudflare customers find what they need faster, while keeping applications running smoothly and <a href="https://www.cloudflare.com/products/zero-trust/threat-defense/">protecting against cyber threats</a>. “I'm pleased to share our collaboration with Cloudflare, making it even easier to deploy log and analytics dashboards. This partnership combines Elastic's open approach with Cloudflare's practical solutions, offering straightforward tools for enterprise search, observability, and security deployment,” explained Mark Dodds, Chief Revenue Officer at Elastic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kDqbu2kQvUL1P47N6aDMY/8dacf9b75432a900b32cb900f080366a/image5-3.png" />
            
            </figure>
    <div>
      <h2>Value of Zero Trust logs in Elastic</h2>
      <a href="#value-of-zero-trust-logs-in-elastic">
        
      </a>
    </div>
    <p>With this joint solution, we’ve made it easy for customers to seamlessly forward their Zero Trust logs to Elastic via Logpush jobs. This can be achieved directly via a RESTful API or through an intermediary storage solution like AWS S3 or Google Cloud. Additionally, Cloudflare's integration with Elastic has undergone improvements to encompass all categories of Zero Trust logs generated by Cloudflare.</p><p><b>Here are some highlights of what the integration offers:</b></p><ul><li><p><b>Comprehensive Visibility:</b> Integrating Cloudflare Logpush into Elastic provides organizations with a real-time, comprehensive view of events related to Zero Trust. This enables a detailed understanding of who is accessing resources and applications, from where, and at what times. Enhanced visibility helps detect anomalous behavior and potential security threats more effectively, allowing for early response and mitigation.</p></li><li><p><b>Field Normalization:</b> By unifying data from Zero Trust logs in Elastic, it's possible to apply consistent field normalization not only for Zero Trust logs but also for other sources. This simplifies the process of search and analysis, as data is presented in a uniform format. Normalization also facilitates the creation of alerts and the identification of patterns of malicious or unusual activity.</p></li><li><p><b>Efficient Search and Analysis:</b> Elastic provides powerful data search and analysis capabilities. Having Zero Trust logs in Elastic enables quick and precise searching for specific information. This is crucial for investigating security incidents, understanding workflows, and making informed decisions.</p></li><li><p><b>Correlation and Threat Detection:</b> By combining Zero Trust data with other security events and data, Elastic enables deeper and more effective correlation. This is essential for detecting threats that might go unnoticed when analyzing each data source separately.
Correlation aids in pattern identification and the detection of sophisticated attacks.</p></li><li><p><b>Prebuilt Dashboards:</b> The integration provides out-of-the-box dashboards offering a quick start to visualizing key metrics and patterns. These dashboards help security teams visualize the security landscape in a clear and concise manner. The integration not only provides the advantage of prebuilt dashboards designed for Zero Trust datasets but also empowers users to curate their own visualizations.</p></li></ul>
    <div>
      <h2>What’s new on the dashboards</h2>
      <a href="#whats-new-on-the-dashboards">
        
      </a>
    </div>
    <p>One of the main assets of the integration is the out-of-the-box dashboards tailored specifically for each type of Zero Trust log. Let's explore some of these dashboards in more detail to find out how they can help us in terms of visibility.</p>
    <div>
      <h3>Gateway HTTP</h3>
      <a href="#gateway-http">
        
      </a>
    </div>
    <p>This dashboard focuses on HTTP traffic and allows for monitoring and analyzing HTTP requests passing through Cloudflare's <a href="https://www.cloudflare.com/zero-trust/products/gateway/">Secure Web Gateway</a>.</p><p>Here, patterns of traffic can be identified, potential threats detected, and a better understanding gained of how resources are being used within the network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2C5VeJ6U4MfZjn7cmHgAPn/0e2600c2f5cfdd83d9f9713d60454cc0/image2-10.png" />
            
            </figure><p>Every visualization on the dashboard is interactive: the whole dashboard adapts to enabled filters, and filters can be pinned across dashboards for pivoting. For instance, clicking one of the sections of the donut chart showing the different actions automatically applies a filter on that value, and the whole dashboard reorients around it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oHgZ74rXxV1we32WHqsye/ae9d1d99546257b6a6140e0a94947ca8/image1-9.png" />
            
            </figure>
    <div>
      <h3>CASB</h3>
      <a href="#casb">
        
      </a>
    </div>
    <p>Offering a different perspective, the <a href="https://www.cloudflare.com/learning/access-management/what-is-a-casb/">CASB (Cloud Access Security Broker)</a> dashboard provides visibility into the cloud applications your users access. Its visualizations are designed to detect threats effectively, helping with risk management and regulatory compliance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/79LR83kaKlJg7kzZS5ewTq/5e86a9bcf83db0940d14aef082c7fdde/image4-5.png" />
            
            </figure><p>These examples illustrate how dashboards in the integration between Cloudflare and Elastic offer practical and effective data visualization for Zero Trust. They enable us to make data-driven decisions, identify behavioral patterns, and proactively respond to threats. By providing relevant information in a visual and accessible manner, these dashboards strengthen security posture and allow for more efficient risk management in the Zero Trust environment.</p>
    <div>
      <h2>How to get started</h2>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>Setup and deployment are simple. Use the Cloudflare dashboard or API to create Logpush jobs with all fields enabled for each dataset you’d like to ingest on Elastic. There are eight account-scoped datasets available to use today (Access Requests, Audit logs, CASB findings, Gateway logs including DNS, Network, HTTP; Zero Trust Session Logs) that can be ingested into Elastic.</p><p>Set up <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/elastic/">Logpush jobs</a> to your Elastic destination via one of the following methods:</p><ul><li><p><b>HTTP Endpoint mode</b> - Cloudflare pushes logs directly to an HTTP endpoint hosted by your Elastic Agent.</p></li><li><p><b>AWS S3 polling mode</b> - Cloudflare writes data to S3 and Elastic Agent polls the S3 bucket by listing its contents and reading new files.</p></li><li><p><b>AWS S3 SQS mode</b> - Cloudflare writes data to S3, S3 pushes a new object notification to SQS, Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple Agents can be used in this mode.</p></li></ul>
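As a concrete illustration, a Logpush job is created by POSTing a JSON body to the account-scoped Logpush API endpoint. The sketch below only assembles that body; the dataset name, destination string, and field list are illustrative placeholders, not an authoritative parameter list, so consult the Logpush API documentation for the exact options your destination requires.

```python
import json

def build_logpush_job(dataset, destination, fields):
    """Assemble an illustrative Logpush job body. POST it to
    /client/v4/accounts/{account_id}/logpush/jobs (values here are
    placeholders for the sake of the example)."""
    return {
        "name": f"{dataset}-to-elastic",
        "dataset": dataset,
        "destination_conf": destination,
        "logpull_options": "fields=" + ",".join(fields),
        "enabled": True,
    }

# Hypothetical HTTP Endpoint mode destination hosted by an Elastic Agent.
job = build_logpush_job(
    "access_requests",
    "https://elastic-agent.example.com/logpush?header_Authorization=...",
    ["Action", "AppUUID", "CreatedAt", "UserUID"],
)
payload = json.dumps(job)
```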
    <div>
      <h3>Enabling the integration in Elastic</h3>
      <a href="#enabling-the-integration-in-elastic">
        
      </a>
    </div>
    <ol><li><p>In Kibana, go to Management &gt; Integrations.</p></li><li><p>In the integrations search bar, type Cloudflare Logpush.</p></li><li><p>Click the Cloudflare Logpush integration from the search results.</p></li><li><p>Click the Add Cloudflare Logpush button to add the Cloudflare Logpush integration.</p></li><li><p>Enable the integration with the HTTP Endpoint, AWS S3 input, or GCS input.</p></li><li><p>Under the AWS S3 input, there are two types of inputs: using an AWS S3 bucket or using SQS.</p></li><li><p>Configure Cloudflare to send logs to the Elastic Agent.</p></li></ol>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As organizations increasingly adopt a Zero Trust architecture, understanding your organization’s security posture is paramount. The dashboards provide the necessary tools to build a robust security strategy centered on visibility, early detection, and effective threat response. By <a href="https://www.cloudflare.com/learning/security/what-is-siem/">unifying data</a>, normalizing fields, facilitating search, and enabling the creation of custom dashboards, this integration becomes a valuable asset for any cybersecurity team aiming to strengthen their security posture.</p><p>We’re looking forward to continuing to connect Cloudflare customers with our community of technology partners, to help in the adoption of a Zero Trust architecture.</p><p>Explore this new integration today.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[SIEM]]></category>
            <category><![CDATA[Elastic]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">6amHiWxrNpxWRyQhTWFUSu</guid>
            <dc:creator>Corey Mahan</dc:creator>
            <dc:creator>Gavin Chen</dc:creator>
            <dc:creator>Andrew Meyer</dc:creator>
            <dc:creator>Chema Martínez (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[An overview of Cloudflare's logging pipeline]]></title>
            <link>https://blog.cloudflare.com/an-overview-of-cloudflares-logging-pipeline/</link>
            <pubDate>Mon, 08 Jan 2024 14:00:21 GMT</pubDate>
            <description><![CDATA[ In this post, we’re going to go over what that looks like, how we achieve high availability, and how we meet our Service Level Objectives (SLOs) while shipping close to a million log lines per second ]]></description>
            <content:encoded><![CDATA[ <p></p><p>One of the roles of Cloudflare's Observability Platform team is managing the operation, improvement, and maintenance of our internal logging pipelines. These pipelines are used to ship debugging logs from every service across Cloudflare’s infrastructure into a centralised location, allowing our engineers to operate and debug their services in near real time. In this post, we’re going to go over what that looks like, how we achieve high availability, and how we meet our Service Level Objectives (SLOs) while shipping close to a million log lines per second.</p><p>Logging itself is a simple concept. Virtually every programmer has written a <code>Hello, World!</code> program at some point. Printing something to the console like that is logging, whether intentional or not.</p><p>Logging <i>pipelines</i> have been around since the beginning of computing itself. Starting with putting string lines in a file, or simply in memory, our industry quickly outgrew the concept of each machine in the network having its own logs. To centralise logging, and to provide scaling beyond a single machine, we used protocols such as the <a href="https://www.ietf.org/rfc/rfc3164.txt">BSD Syslog Protocol</a> to provide a method for individual machines to send logs over the network to a collector, providing a single pane of glass for logs over an entire set of machines.</p><p>Our logging infrastructure at Cloudflare is a bit more complicated, but still builds on these foundational principles.</p>
    <div>
      <h3>The beginning</h3>
      <a href="#the-beginning">
        
      </a>
    </div>
    <p>Logs at Cloudflare start the same as any other, with a <code>println</code>. Generally, however, systems don’t call <code>println</code> directly; they outsource that logic to a logging library. Systems at Cloudflare use various logging libraries such as Go’s <a href="https://github.com/rs/zerolog">zerolog</a>, C++’s <a href="https://github.com/capnproto/capnproto/blob/7fc185568b4649d1fae97b0791a8f7aa110bfa5d/kjdoc/tour.md">KJ_LOG</a>, or Rust’s <a href="https://docs.rs/log/latest/log/">log</a>; however, anything that can print lines to a program's stdout/stderr streams is compatible with our pipeline. This offers our engineers the greatest flexibility in choosing tools that work for them and their teams.</p><p>Because we use systemd for most of our service management, these stdout/stderr streams are generally piped into systemd-journald, which handles the local machine’s logs. With its <code>RateLimitBurst</code> and <code>RateLimitInterval</code> configurations, this gives us a simple knob to control the output of any given service on a machine. This has given our logging pipeline the colloquial name of the “journal pipeline”; however, as we will see, our pipeline has expanded far beyond just journald logs.</p>
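For example, journald's rate limiting is controlled by a pair of settings in journald.conf: any source that emits more than RateLimitBurst messages within the rate-limit interval has further messages dropped until the interval ends. The values below are purely illustrative, not Cloudflare's production configuration:

```ini
# /etc/systemd/journald.conf.d/ratelimit.conf (illustrative values)
[Journal]
RateLimitIntervalSec=30s
RateLimitBurst=10000
```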
    <div>
      <h3>Syslog-NG</h3>
      <a href="#syslog-ng">
        
      </a>
    </div>
    <p>While journald provides us with a way to collect logs on every machine, logging onto each machine individually is impractical for debugging large scale services. To this end, the next step of our pipeline is <a href="https://github.com/syslog-ng/syslog-ng">syslog-ng</a>. Syslog-ng is a daemon that implements the aforementioned BSD syslog protocol. In our case, it reads logs from journald, and applies another layer of rate limiting. It then applies rewriting rules to add common fields, such as the name of the machine that emitted the log, the name of the data center the machine is in, and the state of the data center that the machine is in. Finally, it wraps the log in a JSON wrapper and forwards it to our Core data centers.</p><p>journald itself has a property that complicates some of our use cases: it guarantees a global ordering of every log on a machine. While this is convenient for the single node case, it imposes the limitation that journald is <a href="https://github.com/systemd/systemd/issues/15677">single-threaded</a>. This means that for our heavier workloads, where every millisecond of delay counts, we provide a more direct path into our pipeline. In particular, we offer a Unix Domain Socket that syslog-ng listens on. This socket operates as a separate source of logs into the same pipeline that the journald logs follow, but allows greater throughput by eschewing the need for a global ordering that journald enforces. Logging in this manner is a bit more involved than writing to the stdout streams, as a pipe has to be created for each service, which must then open the socket and write to it directly. As such, this is generally reserved for services that need it, and don’t mind the management overhead it requires.</p>
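The enrichment step can be pictured with a short sketch. This Python snippet mimics what the syslog-ng rewrite rules do conceptually; the field names here are invented for illustration and are not the pipeline's real envelope schema:

```python
import json

def wrap_log(raw_line, hostname, datacenter, dc_state):
    """Mimic syslog-ng's rewrite step: attach common metadata fields and
    wrap the original log line in a JSON envelope. Field names are
    illustrative, not the real pipeline's schema."""
    return json.dumps({
        "host": hostname,
        "datacenter": datacenter,
        "datacenter_state": dc_state,
        "message": raw_line,
    })

envelope = wrap_log("GET /index.html 200", "metal-42", "ams01", "enabled")
```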
    <div>
      <h3>log-x</h3>
      <a href="#log-x">
        
      </a>
    </div>
    <p>Our logging pipeline is a critical service at Cloudflare. Any potential delays or missing data can cause downstream effects that may hinder or even prevent the resolving of customer facing incidents. Because of this strict requirement, we have to offer redundancy in our pipeline. This is where the operation we call “log-x” comes into play.</p><p>We operate two main core data centers. One in the United States, and one in Europe. From each machine, we ship logs to both of these data centers. We call these endpoints log-a, and log-b. The log-a and log-b receivers will insert the logs into a Kafka topic for later consumption. By duplicating the data to two different locations, we achieve a level of redundancy that can handle the failure of either data center.</p><p>The next problem we encounter is that we have many <a href="https://www.cloudflare.com/learning/cdn/glossary/data-center/">data centers</a> all around the world, which at any time due to changing Internet conditions may become disconnected from one, or both core data centers. If the data center is disconnected for long enough we may end up in a situation where we drop logs to either the log-a or log-b receivers. This would result in an incomplete view of logs from one data center and is unacceptable; Log-x was designed to alleviate this problem. In the event that syslog-ng fails to send logs to either log-a or log-b, it will actually send the log <i>twice</i> to the available receiver. This second copy will be marked as actually destined for the other log-x receiver. When a log-x receiver receives such a log, it will insert it into a <i>different</i> Kafka queue, known as the Dead Letter Queue (DLQ). We then use <a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330">Kafka Mirror Maker</a> to sync this DLQ across to the data center that was inaccessible. 
With this logic, log-x allows us to maintain a full copy of all logs in each core data center, regardless of transient failures in any of our data centers.</p>
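The failover logic reads roughly like the following sketch (hypothetical Python, not the actual syslog-ng configuration): on a failed send to one receiver, a second copy goes to the healthy receiver, tagged with its intended destination so the receiver can divert it into the DLQ for mirroring.

```python
def ship(log, send, receivers=("log-a", "log-b")):
    """Sketch of log-x failover: `send(receiver, log, dlq_for)` returns True
    on success. When a receiver is unreachable, the healthy one receives a
    second copy marked as destined for the unreachable receiver."""
    delivered = {r: send(r, log, dlq_for=None) for r in receivers}
    for dead, ok in delivered.items():
        if ok:
            continue
        for alive in receivers:
            if delivered[alive]:
                # Marked copy: the receiver diverts it into the Dead Letter
                # Queue, and Kafka Mirror Maker later syncs it across.
                send(alive, log, dlq_for=dead)
                break
    return delivered

# Simulate log-b being unreachable.
sent = []
def flaky_send(receiver, log, dlq_for):
    sent.append((receiver, dlq_for))
    return receiver == "log-a"

ship("example log line", flaky_send)
```

In this simulation, log-a receives the log twice: once normally, and once tagged for log-b's DLQ.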
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PLoVSIH48LYsFxCwUK9vf/85598f497674f9509fced29decaf7e58/Blog-1945_Kafka.png" />
            
            </figure>
    <div>
      <h3>Kafka</h3>
      <a href="#kafka">
        
      </a>
    </div>
    <p>When logs arrive in the core data centers, we buffer them in a Kafka queue. This provides a few benefits. Firstly, it means that any consumers of the logs can be added without any changes - they only need to register with Kafka as a consumer group on the logs topic. Secondly, it allows us to tolerate transient failures of the consumers without losing any data. Because the Kafka clusters in the core data centers are much larger than any single machine, Kafka allows us to tolerate up to eight hours of total outage for our consumers without losing any data. This has proven to be enough to recover without data loss from all but the largest of incidents.</p><p>When it comes to partitioning our Kafka data, we have an interesting dilemma. Rather arbitrarily, the syslog protocol only supports <a href="https://datatracker.ietf.org/doc/html/rfc5424#section-6.2.3.1">timestamps up to microseconds</a>. For our faster log emitters, this means that the syslog protocol cannot guarantee ordering with timestamps alone. To work around this limitation, we partition our logs using a key made up of both the host, and the service name. Because Kafka guarantees ordering within a partition, this means that any logs from a service on a machine are guaranteed to be ordered between themselves. Unfortunately, because logs from a service can have vastly different rates between different machines, this can result in unbalanced Kafka partitions. We have an ongoing project to move towards <a href="https://opentelemetry.io/docs/specs/otel/protocol/">Open Telemetry Logs</a> to combat this.</p>
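The (host, service) partitioning scheme can be sketched as follows. This hedged Python sketch uses SHA-256 and 64 partitions purely for illustration; Kafka's default partitioner actually hashes the serialized key with murmur2, and the real partition count differs.

```python
import hashlib

def partition_for(host, service, num_partitions=64):
    """Pick a Kafka partition from a (host, service) key. All logs from one
    service on one machine land on the same partition, so Kafka's
    per-partition ordering keeps them ordered among themselves.
    (Illustrative hash and partition count, not real Kafka internals.)"""
    key = f"{host}:{service}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

p1 = partition_for("metal-42", "nginx")
p2 = partition_for("metal-42", "nginx")  # same key, same partition
```

The sketch also makes the imbalance problem visible: a single hot (host, service) key maps to exactly one partition, no matter how fast that service logs.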
    <div>
      <h3>Onward to storage</h3>
      <a href="#onward-to-storage">
        
      </a>
    </div>
    <p>With the logs in Kafka, we can proceed to insert them into longer-term storage. For storage, we operate two backends: an ElasticSearch/Logstash/Kibana (ELK) stack and a Clickhouse cluster.</p><p>For ElasticSearch, we split our cluster of 90 nodes into a few types. The first are “master” nodes. These nodes act as the ElasticSearch masters, and coordinate insertions into the cluster. We then have “data” nodes that handle the actual insertion and storage. Finally, we have the “HTTP” nodes that handle HTTP queries. Traditionally in an ElasticSearch cluster, all the data nodes will also handle HTTP queries; however, because of the size of our cluster and shards, we have found that designating only a few nodes to handle <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP requests</a> greatly reduces our query times by allowing us to take advantage of aggressive caching.</p><p>On the Clickhouse side, we operate a ten-node Clickhouse cluster that stores our service logs. We are in the process of migrating this to be our primary storage, but at the moment it provides an alternative interface into the same logs that ElasticSearch provides, allowing our engineers to use either Lucene through the ELK stack, or SQL and Bash scripts through the Clickhouse interface.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iTClO1Axdg7wI0UVLK0oL/13e1a46890d207290f8ee0a230af6e03/Blog-1945_What-s-next_.png" />
            
            </figure>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As Cloudflare continues to grow, our demands on our Observability systems, and our logging pipeline in particular continue to grow with it. This means that we’re always thinking ahead to what will allow us to scale and improve the experience for our engineers. On the horizon, we have a number of projects to further that goal including:</p><ul><li><p>Increasing our multi-tenancy capabilities with better resource insights for our engineers</p></li><li><p>Migrating our syslog-ng pipeline towards Open Telemetry</p></li><li><p>Tail sampling rather than our probabilistic head sampling we have at the moment</p></li><li><p>Better balancing for our Kafka clusters</p></li></ul><p>If you’re interested in working with logging at Cloudflare, then reach out - <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Production+Engineering">we’re hiring</a>!</p> ]]></content:encoded>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[Logs]]></category>
            <guid isPermaLink="false">2oaeQB5iiazhLNCnVk7Cn6</guid>
            <dc:creator>Colin Douch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving Worker Tail scalability]]></title>
            <link>https://blog.cloudflare.com/improving-worker-tail-scalability/</link>
            <pubDate>Fri, 01 Sep 2023 13:00:46 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce improvements to Workers Tail that means it can now be enabled for Workers at any size and scale ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56gEEDPmOntJoXI16u8qU5/0ed51f115308dc4941b6553fbdaed0f0/image2-15.png" />
            
            </figure><p>Being able to get real-time information from applications in production is extremely important. Often, software passes local testing and automation, but then users report that something isn’t working correctly. Being able to quickly see what is happening, and how often, is critical to debugging.</p><p>This is why we originally developed the Workers Tail feature - to give developers the ability to view requests, exceptions, and information for their Workers and to provide a window into what’s happening in real time. When we developed it, we also took the opportunity to build it on top of our own Workers technology using products like Trace Workers and Durable Objects. Over the last couple of years, we’ve continued to iterate on this feature - allowing users to quickly access logs <a href="/introducing-workers-dashboard-logs/">from the Dashboard</a> and via the <a href="/10-things-i-love-about-wrangler/">Wrangler CLI</a>.</p><p>Today, we’re excited to announce that tail can now be enabled for Workers at any size and scale! In addition to telling you about the new and improved scalability, we wanted to share how we built it, and the changes we made to enable it to scale better.</p>
    <div>
      <h3>Why Tail was limited</h3>
      <a href="#why-tail-was-limited">
        
      </a>
    </div>
    <p>Tail leverages <a href="https://developers.cloudflare.com/workers/runtime-apis/durable-objects/#durable-objects">Durable Objects</a> to handle coordination between the Worker producing messages and consumers like <code>wrangler</code> and the Cloudflare dashboard, and Durable Objects are a great choice for handling real-time communication like this. However, when a single Durable Object instance starts to receive a very high volume of traffic - like the kind that can come with tailing live Workers - it can see some performance issues.</p><p>As a result, Workers with a high volume of traffic could not be supported by the original Tail infrastructure. Tail had to be limited to Workers receiving 100 requests/second (RPS) or less. This was a significant limitation that resulted in many users with large, high-traffic Workers having to turn to their own tooling to get proper observability in production.</p><p>Believing that every feature we provide should scale with users during their development journey, we set out to improve Tail's performance at high loads.</p>
    <div>
      <h3>Updating the way filters work</h3>
      <a href="#updating-the-way-filters-work">
        
      </a>
    </div>
    <p>The first improvement was to the existing filtering feature. When starting a Tail with <a href="https://developers.cloudflare.com/workers/wrangler/commands/#tail"><code>wrangler tail</code></a> (and now with the Cloudflare dashboard), users have the ability to filter out messages based on information in the requests or logs. Previously, this filtering was handled within the Durable Object, which meant that even if a user was filtering out the majority of their traffic, the Durable Object would still have to handle every message. Often users with high-traffic Tails were using many filters to better interpret their logs, but wouldn’t be able to start a Tail due to the 100 RPS limit.</p><p>We moved filtering out of the Durable Object and into the Tail message producer, preventing any filtered messages from reaching the Tail Durable Object, and thereby reducing the load on the Tail Durable Object. Moving the filtering out of the Durable Object was the first step in improving Tail’s performance at scale.</p>
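Conceptually, the change moves a predicate from the consumer side to the producer side, as in this hedged Python sketch (the filter shape is invented for illustration; real Tail filters are richer than simple predicates):

```python
def make_producer(filters, forward):
    """Return a producer that applies Tail filters *before* forwarding, so
    filtered messages never reach the coordinating Durable Object.
    `filters` is a plain list of predicates, purely for illustration."""
    def produce(message):
        if all(f(message) for f in filters):
            forward(message)
    return produce

received = []
produce = make_producer(
    [lambda m: m["status"] >= 500],      # e.g. "only show server errors"
    received.append,                     # stand-in for the Durable Object
)
produce({"status": 200, "url": "/ok"})   # dropped at the producer
produce({"status": 503, "url": "/boom"}) # forwarded
```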
    <div>
      <h3>Sampling logs to keep Tails within Durable Object limits</h3>
      <a href="#sampling-logs-to-keep-tails-within-durable-object-limits">
        
      </a>
    </div>
    <p>After moving log filtering outside of the Durable Object, there was still the issue of determining when Tails could be started: there was no way to know in advance how much a given Tail’s filters would reduce traffic, and simply starting a Durable Object back up would more than likely hit the 100 RPS limit immediately.</p><p>The solution for this was to add a safety mechanism for the Durable Object while the Tail was running.</p><p>We created a simple controller to track the RPS hitting a Durable Object and sample messages down to the desired volume of 100 RPS. As shown below, sampling keeps the Tail Durable Object RPS below the target of 100.</p>
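A minimal version of such a controller might look like the following sketch (hypothetical Python, not the actual Tail implementation, which tracks the rate inside the Durable Object): each one-second window, the admission probability is recomputed from the previous window's observed rate.

```python
import random

class Sampler:
    """Keep admitted messages near `target` per second by sampling.

    Each one-second window, the admission probability is adjusted based on
    the previous window's observed message rate. Hypothetical sketch only.
    """
    def __init__(self, target=100):
        self.target = target
        self.probability = 1.0   # admit everything until we observe a rate
        self.window = None
        self.seen = 0

    def admit(self, now):
        second = int(now)
        if second != self.window:
            if self.seen:
                # Scale admission down (or back up) toward the target RPS.
                self.probability = min(1.0, self.target / self.seen)
            self.window, self.seen = second, 0
        self.seen += 1
        return random.random() < self.probability
```

For example, after observing 1,000 messages in one window with a target of 100, the controller drops the admission probability to 0.1 for the next window.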
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7a0f6KNAQCBfodwsGwzlEs/340c5f194ea626f41552f090787257a0/image4-12.png" />
            
            </figure><p>When messages are sampled, the following message appears every five seconds to let the user know that they are in sampling mode:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jSffNOJOWuhVam9g2kjo2/daafee133ea30515a5b1ca30fbe559e7/image3-9.png" />
            
            </figure><p>This message goes away once the Tail is stopped or filters are applied that drop the RPS below 100.</p>
    <div>
      <h3>A final failsafe</h3>
      <a href="#a-final-failsafe">
        
      </a>
    </div>
    <p>Finally, as a last resort, a failsafe mechanism was added in case the Durable Object becomes fully overloaded. Since RPS tracking is done within the Durable Object, if the Durable Object is overloaded by an extremely large amount of traffic, the sampling mechanism itself will fail.</p><p>When an overload is detected, all messages forwarded to the Durable Object are periodically stopped to prevent any issues with Workers infrastructure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FXCXIYrmzHVJHFlRF1wXQ/d454b9bffab6eeddb19ec5718636113e/image1-24.png" />
            
            </figure><p>Here we can see a user who had a large amount of traffic that started to become sampled. As the traffic increased, the number of sampled messages grew. Since the traffic was too fast for the sampling mechanism to handle, the Durable Object got overloaded. However, soon excess messages were blocked and the overload stopped.</p>
    <div>
      <h3>Try it out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>These new improvements are in place currently and available to all users 🎉</p><p>To Tail Workers via the Dashboard, log in, navigate to your Worker, and click on the Logs tab. You can then start a log stream via the default view.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/F5Z80VaPFuPW71NWvfA3g/27e7fb654ac3eac556611c81a2ad0a1d/image5-9.png" />
            
            </figure><p>If you’re using the Wrangler CLI, you can start a new Tail by running <code>wrangler tail</code>.</p>
    <div>
      <h3>Beyond Worker tail</h3>
      <a href="#beyond-worker-tail">
        
      </a>
    </div>
    <p>While we're excited for tail to be able to reach new limits and scale, we also recognize users may want to go beyond the live logs provided by Tail.</p><p>For example, if you’d like to push log events to additional destinations for a historical view of your application’s performance, we offer <a href="https://developers.cloudflare.com/workers/observability/logpush/">Logpush</a>. If you’d like more insight into and control over log messages and events themselves, we offer <a href="https://developers.cloudflare.com/workers/observability/tail-workers/">Tail Workers</a>.</p><p>These products, and others, can be read about in our <a href="https://developers.cloudflare.com/logs/">Logs documentation</a>. All of them are available for use today.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7JrxctC3rnpXqgmtN5ffYL</guid>
            <dc:creator>Joshua Johnson</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[Integrate Cloudflare Zero Trust with Datadog Cloud SIEM]]></title>
            <link>https://blog.cloudflare.com/integrate-cloudflare-zero-trust-with-datadog-cloud-siem/</link>
            <pubDate>Thu, 03 Aug 2023 13:00:33 GMT</pubDate>
            <description><![CDATA[ Today, we are very excited to announce the general availability of Cloudflare Zero Trust Integration with Datadog ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/SCp5IxwJUMOJ6irbWPYHf/4117714bfa2e10409c307dbf48d9e7d2/image5-1.png" />
            
            </figure><p>Cloudflare's Zero Trust platform helps organizations map and adopt a strong security posture. This ranges from Zero Trust Network Access, a Secure Web Gateway to help filter traffic, to Cloud Access Security Broker and Data Loss Prevention to protect data in transit and in the cloud. Customers use Cloudflare to verify, isolate, and inspect all devices managed by IT. Our composable, in-line solutions offer a simplified approach to security and a comprehensive set of logs.</p><p>We’ve heard from many of our customers that they aggregate these logs into Datadog’s Cloud SIEM product. Datadog Cloud SIEM provides threat detection, investigation, and automated response for dynamic, cloud-scale environments. Cloud SIEM analyzes operational and security logs in real time – regardless of volume – while utilizing out-of-the-box integrations and rules to detect threats and investigate them. It also automates response and remediation through out-of-the-box workflow blueprints. Developers, security, and operations teams can also leverage detailed <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> data and efficiently collaborate to <a href="https://www.cloudflare.com/learning/security/what-is-siem/">accelerate security investigations</a> in a single, unified platform. We previously made an out-of-the-box dashboard for Cloudflare CDN available on Datadog. It helps our customers gain valuable insights into product usage and performance metrics such as response times, HTTP status codes, and cache hit rate. Customers can collect, visualize, and alert on key Cloudflare metrics.</p><p>Today, we are very excited to announce the general availability of Cloudflare Zero Trust Integration with Datadog.
This deeper integration offers the Cloudflare Content Pack within Cloud SIEM, which includes an out-of-the-box dashboard and detection rules that help our customers ingest Zero Trust logs into Datadog and gain greatly improved security insights over their <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust landscape</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RYHT6tKmiMbXX3IbZ0jff/f507f8781513f3913b5bb73fd044f818/image4.png" />
            
            </figure><blockquote><p>“<i>Our Datadog SIEM integration with Cloudflare delivers a holistic view of activity across Cloudflare Zero Trust integrations–helping security and dev teams quickly identify and respond to anomalous activity across app, device, and users within the Cloudflare Zero Trust ecosystem. The integration offers detection rules that automatically generate signals based on CASB (cloud access security broker) findings, and impossible travel scenarios, a revamped dashboard for easy spotting of anomalies, and accelerates response and remediation to quickly contain an attacker’s activity through an out-of-the-box workflow automation blueprints.</i>”- <b>Yash Kumar,</b> Senior Director of Product, Datadog</p></blockquote>
    <div>
      <h2>How to get started</h2>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    
    <div>
      <h3>Set up Logpush jobs to your Datadog destination</h3>
      <a href="#set-up-logpush-jobs-to-your-datadog-destination">
        
      </a>
    </div>
    <p>Use the Cloudflare dashboard or API to <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/datadog/">create a Logpush job</a> with all fields enabled for each dataset you’d like to ingest on Datadog. We have eight account-scoped datasets available to use today (Access Requests, Audit logs, CASB findings, Gateway logs including DNS, Network, HTTP; Zero Trust Session Logs) that can be ingested into Datadog.</p>
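<p>As a concrete illustration, a Logpush job is created by POSTing a JSON body to the <code>/accounts/{account_id}/logpush/jobs</code> API endpoint. The sketch below only builds that body; the Datadog intake URL, API key, and exact <code>destination_conf</code> format are placeholder assumptions, so consult the linked guide for the real values:</p>

```python
import json

def build_logpush_job(dataset: str, datadog_url: str, dd_api_key: str) -> dict:
    """Return a JSON body for POST /accounts/{account_id}/logpush/jobs.

    Illustrative sketch only: the destination_conf format for Datadog is
    described in Cloudflare's Logpush docs; values here are placeholders.
    """
    return {
        "name": f"datadog-{dataset}",
        "dataset": dataset,
        # Datadog destinations pass the API key as a header query parameter.
        "destination_conf": f"{datadog_url}?header_DD-API-KEY={dd_api_key}",
        "enabled": True,
    }

job = build_logpush_job(
    "gateway_http",
    "datadog://http-intake.logs.datadoghq.com/api/v2/logs",  # placeholder
    "<DATADOG_API_KEY>",
)
print(json.dumps(job, indent=2))
```

One such job would be created per dataset you want to ingest into Datadog.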
    <div>
      <h3>Install the Cloudflare Tile in Datadog</h3>
      <a href="#install-the-cloudflare-tile-in-datadog">
        
      </a>
    </div>
    <p>In your Datadog dashboard, locate and install the Cloudflare Tile within the Datadog Integration catalog. At this stage, Datadog’s out-of-the-box log processing <a href="https://docs.datadoghq.com/logs/log_configuration/pipelines/?tab=source">pipeline</a> will automatically parse and normalize your Cloudflare Zero Trust logs.</p>
    <div>
      <h3>Analyze and correlate your Zero Trust logs with Datadog Cloud SIEM's out-of-the-box content</h3>
      <a href="#analyze-and-correlate-your-zero-trust-logs-with-datadog-cloud-siems-out-of-the-box-content">
        
      </a>
    </div>
    <p>Our new and improved integration with Datadog enables security teams to quickly and easily monitor their Zero Trust components with the Cloudflare Content Pack. This includes the out-of-the-box dashboard that now features a Zero Trust section highlighting various widgets about activity across the applications, devices, and users in your Cloudflare Zero Trust ecosystem. This section gives you a holistic view, helping you spot and respond to anomalies quickly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ufPwaIiXySgUYcLsvXbiz/131481c545a01474ea1f26f50308ccf3/image1-2.png" />
            
            </figure>
    <div>
      <h3>Security detections built for CASB</h3>
      <a href="#security-detections-built-for-casb">
        
      </a>
    </div>
    <p>As enterprises use more SaaS applications, it becomes more critical to have insight into and control over data at rest. Cloudflare CASB findings do just that by providing security risk insights for all integrated SaaS applications.</p><p>With this new integration, Datadog now offers an out-of-the-box detection rule that detects any CASB findings. The alert is triggered at different severity levels for any CASB security finding that could indicate suspicious activity within an integrated SaaS app, like Microsoft 365 or Google Workspace. In the example below, the CASB finding points to an asset whose Google Workspace Domain Record is missing.</p><p>This detection helps identify and remedy misconfigurations and other security issues, saving time and reducing the possibility of security breaches.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NlUJLmZa43B1LqkTKmdMc/75e15e8a2d66ce46093e5198a6450d94/image2.png" />
            
            </figure>
    <div>
      <h3>Security detections for Impossible Travel</h3>
      <a href="#security-detections-for-impossible-travel">
        
      </a>
    </div>
    <p>One of the most common security issues can show up in surprisingly simple ways: for example, a user seemingly logs in from one location, only to log in shortly afterward from a location physically too far away to have reached in that time. Datadog’s new detection rule addresses exactly this scenario with their <a href="https://docs.datadoghq.com/security/default_rules/cloudflare-impossible-travel">Impossible Travel detection rule</a>. If Datadog Cloud SIEM determines that two consecutive log lines for a user indicate impossible travel of more than 500 km at over 1,000 km/h, a security alert is triggered. An admin can then determine if it is a security breach and take action accordingly.</p>
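<p>The geometry behind such a rule is straightforward. As an illustrative sketch (not Datadog's implementation), assuming each log line carries a timestamp and a geo-resolved latitude/longitude:</p>

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def is_impossible_travel(p1, p2, min_km=500.0, max_kmh=1000.0):
    """Flag two (timestamp_seconds, lat, lon) events using the thresholds
    from the post: more than 500 km apart at an implied speed over 1,000 km/h."""
    t1, lat1, lon1 = p1
    t2, lat2, lon2 = p2
    distance = haversine_km(lat1, lon1, lat2, lon2)
    if distance <= min_km:
        return False
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return True  # far apart with no elapsed time
    return distance / hours > max_kmh

# Logins an hour apart from London and then Sydney (~17,000 km): impossible.
print(is_impossible_travel((0, 51.5, -0.1), (3600, -33.9, 151.2)))  # True
```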
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56UXWQZRCjTg0y0PThuDf9/b033359bf8872fc79a8eb0015fbb8416/image3.png" />
            
            </figure>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Customers of Cloudflare and Datadog can now gain a more comprehensive view of their products and security posture with the enhanced dashboards and the new detection rules. We are excited to work on adding more value for our customers and develop unique detection rules.</p><p>If you are a Cloudflare customer using Datadog, explore the new integration starting <a href="https://docs.datadoghq.com/integrations/cloudflare/">today</a>.</p> ]]></content:encoded>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Dashboard]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">45DTnvaKqyVXbmubrmQxLM</guid>
            <dc:creator>Mythili Prabhu</dc:creator>
            <dc:creator>Nimisha Saxena (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Adding Zero Trust signals to Sumo Logic for better security insights]]></title>
            <link>https://blog.cloudflare.com/zero-trust-signals-to-sumo-logic/</link>
            <pubDate>Tue, 14 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare App for Sumo Logic now supports Zero Trust logs for out of the box, ready-made security dashboards ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hVaAd6pJq09fsIs5GU53e/9b7d0ab2abdc7dcb528072566797c9ab/Dashboard_-Quick-Navigation-1.png" />
            
            </figure><p>A picture is worth a thousand words and the same is true when it comes to getting visualizations, trends, and data in the form of a ready-made security dashboard.</p><p>Today we’re excited to announce the expansion of support for automated normalization and correlation of Zero Trust logs for <a href="https://developers.cloudflare.com/logs/about/">Logpush</a> in Sumo Logic’s <a href="https://www.sumologic.com/solutions/cloud-siem-enterprise/">Cloud SIEM</a>. As a Cloudflare <a href="https://www.cloudflare.com/partners/technology-partners/sumo-logic/">technology partner</a>, Sumo Logic is the pioneer in continuous intelligence, a new category of software which enables organizations of all sizes to address the data challenges and opportunities presented by digital transformation, modern applications, and cloud computing.</p><p>The updated content in Sumo Logic Cloud SIEM helps joint Cloudflare customers reduce alert fatigue tied to Zero Trust logs and accelerates the triage process for security analysts by converging security and network data into high-fidelity insights. This new functionality complements the existing <a href="https://www.sumologic.com/application/cloudflare/">Cloudflare App for Sumo Logic</a> designed to help IT and security teams gain insights, understand anomalous activity, and better trend security and network performance data over time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4v27Z4MHNSnxP9M0Rs8F75/f0ebc2440edf9807d87a0081777f9463/Cloud-SIEM-HUD.png" />
            
            </figure>
    <div>
      <h3>Deeper integration to deliver Zero Trust insights</h3>
      <a href="#deeper-integration-to-deliver-zero-trust-insights">
        
      </a>
    </div>
    <p>Using Cloudflare Zero Trust helps protect users, devices, and data, and in the process can create a large volume of logs. These logs are helpful and important because they provide the who, what, when, and where for activity happening within and across an organization. They contain <a href="https://www.cloudflare.com/learning/security/what-is-siem/">information</a> such as what website was accessed, who signed in to an application, or what data may have been shared from a SaaS service.</p><p>Up until now, our integrations with Sumo Logic only allowed automated correlation of security signals for Cloudflare’s core services. While it’s critical to ensure collection of <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> and bot detection events across your fabric, extended visibility into <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust components</a> has become more important than ever with the explosion of distributed work and the adoption of hybrid and multi-cloud infrastructure architectures.</p><p>With the expanded Zero Trust logs now available in Sumo Logic Cloud SIEM, customers can get deeper context into security insights thanks to the broad set of network and security logs produced by Cloudflare products:</p><ul><li><p><a href="https://www.cloudflare.com/products/zero-trust/gateway/">Cloudflare Gateway</a> (Network, DNS, HTTP)</p></li><li><p><a href="https://www.cloudflare.com/products/zero-trust/browser-isolation/">Cloudflare Remote Browser Isolation</a> (included with Gateway logs)</p></li><li><p><a href="https://www.cloudflare.com/products/zero-trust/dlp/">Cloudflare Data Loss Prevention</a> (included with Gateway logs)</p></li><li><p><a href="https://www.cloudflare.com/products/zero-trust/access/">Cloudflare Access</a> (Access audit logs)</p></li><li><p><a href="https://www.cloudflare.com/products/zero-trust/casb/">Cloudflare Cloud Access
Security Broker</a> (Findings logs)</p></li></ul><blockquote><p>“As a long time Cloudflare partner, we’ve worked together to help joint customers analyze events and trends from their websites and applications to provide end-to-end visibility and improve digital experiences. We’re excited to expand this partnership to provide real-time insights into the Zero Trust security posture of mutual customers in Sumo Logic’s Cloud SIEM.”- <b>John Coyle</b> - Vice President of Business Development, Sumo Logic</p></blockquote>
    <div>
      <h3>How to get started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>To take advantage of the suite of integrations for Sumo Logic and Cloudflare logs available via Logpush, first enable Logpush to Sumo Logic, which will ship logs directly to Sumo Logic’s cloud-native platform. Then, install the Cloudflare App and (for Cloud SIEM customers) enable forwarding of these logs to Cloud SIEM for automated normalization and correlation of security insights.</p><p>Note that Cloudflare’s Logpush service is only available to Enterprise customers. If you are interested in upgrading, <a href="https://www.cloudflare.com/lp/cio-week-2023-cloudflare-one-contact-us/">please contact us here</a>.</p><ol><li><p><a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/sumo-logic/"><b>Enable Logpush to Sumo Logic</b></a>. Cloudflare Logpush supports pushing logs directly to Sumo Logic via the Cloudflare dashboard or via API.</p></li><li><p><b>Install the </b><a href="https://www.sumologic.com/application/cloudflare/"><b>Cloudflare App for Sumo Logic</b></a>. Locate and install the Cloudflare app from the App Catalog, linked above. If you want to see a preview of the dashboards included with the app before installing, click Preview Dashboards. Once installed, you can view key information in the Cloudflare Dashboards for all core services.</p></li><li><p><b>(Cloud SIEM Customers) Forward logs to Cloud SIEM</b>. After the steps above, enable the updated parser for Cloudflare logs by <a href="https://help.sumologic.com/docs/integrations/saas-cloud/cloudflare/#prerequisites">adding the _parser field</a> to your S3 source created when installing the Cloudflare App.</p></li></ol>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As more organizations move towards a Zero Trust model for security, it's increasingly important to have visibility into every aspect of the network, with logs playing a crucial role in this effort.</p><p>If your organization is just getting started and not already using a tool like Sumo Logic, <a href="/announcing-logs-engine/">Cloudflare R2 for log storage</a> is worth considering. Cloudflare R2 offers a scalable, cost-effective solution for log storage.</p><p>We’re excited to continue working closely with technology partners to expand existing integrations and create new ones that help customers on their Zero Trust journey.</p>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Dashboard]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Sumo Logic]]></category>
            <category><![CDATA[Guest Post]]></category>
            <guid isPermaLink="false">1LqRHHF3LAkWnMU7gmt639</guid>
            <dc:creator>Corey Mahan</dc:creator>
            <dc:creator>Drew Horn (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Send Cloudflare Workers logs to a destination of your choice with Workers Trace Events Logpush]]></title>
            <link>https://blog.cloudflare.com/workers-logpush-ga/</link>
            <pubDate>Fri, 18 Nov 2022 14:02:00 GMT</pubDate>
            <description><![CDATA[ Workers Trace Events Logpush gives new levels of visibility into Workers invocations, console.log messages and errors. It is now available to everyone on the Workers Paid and Enterprise plans! ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3VEcX6imb8GNqH9SUD8R6k/addd0bbaf70fd0be029477e06d6ac53c/image1-67.png" />
            
            </figure><p>When writing code, you can only move as fast as you can debug.</p><p>Our goal at Cloudflare is to give our developers the tools to deploy applications faster than ever before. This means giving you tools to do everything from initializing your Workers project to having visibility into your application successfully serving production traffic.</p><p>Last year we introduced <a href="/introducing-workers-dashboard-logs/"><code>wrangler tail</code></a>, letting you access a live stream of Workers logs to help pinpoint errors to debug your applications. Workers Trace Events Logpush (or just Workers Logpush for short) extends this functionality – you can use it to send Workers logs to an <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> destination or analytics platform of your choice.</p><p>Workers Logpush is now available to everyone on the Workers Paid plan! Read on to learn how to get started and about pricing information.</p>
    <div>
      <h3>Move fast and <i>don’t</i> break things</h3>
      <a href="#move-fast-and-dont-break-things">
        
      </a>
    </div>
    <p>With the rise of platforms like Cloudflare Workers over containers and VMs, it now takes just minutes to deploy applications. But, when building an application, any tech stack that you choose comes with its own set of trade-offs.</p><p>As a developer, choosing Workers means you don't need to worry about any of the underlying architecture. You just write code, and it works (hopefully!). A common criticism of this style of platform is that <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> becomes more difficult.</p><p>We want to change that.</p><p>Over the years, we’ve made improvements to the testing and debugging tools that we offer — <a href="https://developers.cloudflare.com/workers/learning/debugging-workers/#local-testing-with-wrangler-dev"><code>wrangler dev</code></a>, <a href="https://miniflare.dev/"><code>Miniflare</code></a> and most recently our open sourced runtime <a href="/workerd-open-source-workers-runtime/"><code>workerd</code></a>. These improvements have made debugging locally and running unit tests much easier. However, there will always be edge cases or bugs that are only replicated in production environments.</p>
    <div>
      <h3>If something does break…enter Workers Logpush</h3>
      <a href="#if-something-does-break-enter-workers-logpush">
        
      </a>
    </div>
    <p>Wrangler tail lets you view logs in real time, but we’ve heard from developers that you would also like to set up monitoring for your services and have a historical record to look back on. Workers Logpush includes metadata about requests, <code>console.log()</code> messages and any uncaught exceptions. To give you an idea of what it looks like, below is a sample log line:</p>
            <pre><code>{
   "AccountID":12345678,
   "Event":{
      "RayID":"7605d2b69f961000",
      "Request":{
         "URL":"https://example.com",
         "Method":"GET"
      },
      "Response":{
         "status":200
      },
      "EventTimestampMs":1666814897697,
      "EventType":"fetch",
      "Exceptions":[
      ],
      "Logs":[
         {
            "Level":"log",
            "Message":[
               "please work!"
            ],
            "TimestampMs":1666814897697
         }
      ],
      "Outcome":"ok",
      "ScriptName":"example-script"
   }
}</code></pre>
            <p><a href="https://developers.cloudflare.com/logs/">Logpush</a> has support for the most popular observability tools. Send logs to Datadog, New Relic or even <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> for storage and <a href="https://developers.cloudflare.com/logs/r2-log-retrieval/">ad hoc querying</a>.</p>
    <div>
      <h3>Pricing</h3>
      <a href="#pricing">
        
      </a>
    </div>
    <p>Workers Logpush is available to customers on both our Workers Paid and Enterprise plans. We wanted this to be very affordable for our developers. Workers Logpush is priced at $0.05 per million requests, and we only charge you for requests that result in logs delivered to an end destination after any filtering or sampling is applied. It also includes 10M requests of usage each month.</p>
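<p>To make the arithmetic concrete, here is a minimal sketch of the billing formula described above (illustrative only; the billable count is the number of requests whose logs are actually delivered after filtering and sampling):</p>

```python
PRICE_PER_MILLION = 0.05        # USD, per the pricing above
INCLUDED_REQUESTS = 10_000_000  # included usage each month

def monthly_logpush_cost(delivered_log_requests: int) -> float:
    """Cost in USD for requests whose logs were delivered, after
    subtracting the included monthly allotment."""
    billable = max(0, delivered_log_requests - INCLUDED_REQUESTS)
    return billable / 1_000_000 * PRICE_PER_MILLION

# 50M delivered requests: 40M billable at $0.05 per million = $2.00
print(monthly_logpush_cost(50_000_000))  # 2.0
```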
    <div>
      <h3>Configuration</h3>
      <a href="#configuration">
        
      </a>
    </div>
    <p>Logpush is incredibly simple to set up.</p><p>1. Create a Logpush job. The following example sends Workers logs to R2.</p>
            <pre><code>curl -X POST 'https://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/logpush/jobs' \
-H 'X-Auth-Key: &lt;API_KEY&gt;' \
-H 'X-Auth-Email: &lt;EMAIL&gt;' \
-H 'Content-Type: application/json' \
-d '{
"name": "workers-logpush",
"logpull_options": "fields=Event,EventTimestampMs,Outcome,Exceptions,Logs,ScriptName",
"destination_conf": "r2://&lt;BUCKET_PATH&gt;/{DATE}?account-id=&lt;ACCOUNT_ID&gt;&amp;access-key-id=&lt;R2_ACCESS_KEY_ID&gt;&amp;secret-access-key=&lt;R2_SECRET_ACCESS_KEY&gt;",
"dataset": "workers_trace_events",
"enabled": true
}'| jq .</code></pre>
            <p>In Logpush, you can also configure <a href="https://developers.cloudflare.com/logs/reference/filters/">filters</a> and a <a href="https://developers.cloudflare.com/logs/get-started/api-configuration/#sampling-rate">sampling rate</a> to have more control over the volume of data that is sent to your configured destination. For example, if you only want to receive logs for requests that resulted in an exception, you could add the following under <code>logpull_options</code>:</p>
            <pre><code>"filter":"{\"where\": {\"key\":\"Outcome\",\"operator\":\"eq\",\"value\":\"exception\"}}"</code></pre>
            <p>2. Enable logging on your Workers script.</p><p>You can do this by adding a new property, <code>logpush = true</code>, to your <code>wrangler.toml</code> file. This can be added either in the top-level configuration or under an environment. Any new scripts with this property will automatically be picked up by the Logpush job.</p>
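<p>For example, a minimal <code>wrangler.toml</code> might look like the following (the Worker name here is a placeholder):</p>

```toml
# Hypothetical wrangler.toml excerpt: opt this Worker's events into Logpush.
name = "example-script"
logpush = true

# The property can also be set under a specific environment instead:
[env.production]
logpush = true
```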
    <div>
      <h3>Get started today!</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>Both customers on our Workers Paid Plan and Enterprise plan can get started with Workers Logpush now! The full guide on how to get started is <a href="https://developers.cloudflare.com/workers/platform/logpush/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">46Q2u0uEAjXfaGwka0ZUnC</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare instruments services using Workers Analytics Engine]]></title>
            <link>https://blog.cloudflare.com/using-analytics-engine-to-improve-analytics-engine/</link>
            <pubDate>Fri, 18 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Learn how Cloudflare uses our own Workers Analytics Engine product to capture analytics about our own products! ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16jjPVsiGz8fzNxVYIgsbR/8c4c93497cd82108d0d74efaf52a96ad/image1-62.png" />
            
            </figure><p>Workers Analytics Engine is a new tool, <a href="/workers-analytics-engine/">announced earlier this year</a>, that enables developers and product teams to build time series analytics about anything, with high dimensionality, high cardinality, and effortless scaling. We built Analytics Engine for teams to gain insights into their code running in Workers, provide analytics to end customers, or even build usage based billing.</p><p>In this blog post we’re going to tell you about how we use Analytics Engine to build Analytics Engine. We’ve instrumented our own Analytics Engine SQL API using Analytics Engine itself and use this data to find bugs and prioritize new product features. We hope this serves as inspiration for other teams who are looking for ways to instrument their own products and gather feedback.</p>
    <div>
      <h3>Why do we need Analytics Engine?</h3>
      <a href="#why-do-we-need-analytics-engine">
        
      </a>
    </div>
    <p>Analytics Engine enables you to generate events (or “data points”) from Workers with <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/">just a few lines of code</a>. Using the GraphQL or <a href="https://developers.cloudflare.com/analytics/analytics-engine/sql-api/">SQL API</a>, you can query these events and create useful insights about the business or technology stack. For more about how to get started using Analytics Engine, check out our <a href="https://developers.cloudflare.com/analytics/analytics-engine/">developer docs</a>.</p><p>Since we released the <a href="/analytics-engine-open-beta/">Analytics Engine open beta</a> in September, we’ve been adding new features at a rapid clip based on feedback from developers. However, we’ve had two big gaps in our visibility into the product.</p><p>First, our engineering team needs to answer <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">classic observability questions</a>, such as: how many requests are we getting, how many of those requests result in errors, what are the nature of these errors, etc. They need to be able to view both aggregated data (like average error rate, or p99 response time) and drill into individual events.</p><p>Second, because this is a newly launched product, we are looking for product insights. By instrumenting the SQL API, we can understand the queries our customers write, and the errors they see, which helps us prioritize missing features.</p><p>We realized that Analytics Engine would be an amazing tool for both answering our technical observability questions, and also gathering product insight. That’s because we can log an event for every query to our SQL API, and then query for both aggregated performance issues as well as individual errors and queries that our customers run.</p><p>In the next section, we’re going to walk you through how we use Analytics Engine to monitor that API.</p>
    <div>
      <h2>Adding instrumentation to our SQL API</h2>
      <a href="#adding-instrumentation-to-our-sql-api">
        
      </a>
    </div>
    <p>The Analytics Engine SQL API lets you query events data in the same way you would an ordinary database. For decades, SQL has been the most common language for querying data. We wanted to provide an interface that allows you to immediately start asking questions about your data without having to learn a new query language.</p><p>Our SQL API parses user SQL queries, transforms and validates them, and then executes them against backend database servers. We then write information about the query back into Analytics Engine so that we can run our own analytics. Writing data into Analytics Engine from a Cloudflare Worker is very simple and <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/">explained in our documentation</a>. We instrument our SQL API in the same way our users do, and this code excerpt shows the data we write into Analytics Engine:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/L30suydy27OFKzv6ua9ML/c49d03afbb62a1e3df7229e6c30e087c/carbon--3--1.png" />
            
            </figure><p>With that data now being stored in Analytics Engine, we can then pull out insights about every field we’re reporting.</p>
    <div>
      <h2>Querying for insights</h2>
      <a href="#querying-for-insights">
        
      </a>
    </div>
    <p>Having our analytics in an SQL database gives you the freedom to write any query you might want. Compared to using something like metrics, which are often predefined and purpose-specific, you can define any custom dataset desired and interrogate your data to ask new questions with ease.</p><p>We need to support datasets comprising trillions of data points. In order to accomplish this, we have implemented a sampling method called <a href="/explaining-cloudflares-abr-analytics/">Adaptive Bit Rate</a> (ABR). With ABR, if you have large amounts of data, your queries may return sampled events in order to respond in a reasonable time. If you have more typical amounts of data, Analytics Engine will query all your data. This allows you to run any query you like and still get responses in a short length of time. Right now, you have to <a href="https://developers.cloudflare.com/analytics/analytics-engine/sql-api/#sampling">account for sampling in how you make your queries</a>, but we are exploring making it automatic.</p><p>Any data visualization tool can be used to visualize your analytics. At Cloudflare, we heavily use Grafana (<a href="https://developers.cloudflare.com/analytics/analytics-engine/grafana/">and you can too!</a>). This is particularly useful for observability use cases.</p>
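<p>The intuition behind querying sampled data can be sketched in a few lines: if each returned row carries a sample interval, i.e. the number of original events it stands in for, then aggregates are recovered by weighting each row accordingly. This is an illustrative sketch of the idea, not the SQL API itself:</p>

```python
# Illustrative sketch of estimating totals from ABR-sampled rows.
# Each row is (sample_interval, value): the interval is how many
# original events the sampled row represents.

def estimate_total(rows):
    """Estimated event count: each row counts for its sample interval."""
    return sum(interval for interval, _ in rows)

def estimate_sum(rows):
    """Estimated sum of a value column over the original events."""
    return sum(interval * value for interval, value in rows)

# Three rows sampled at 1-in-10, each carrying one value:
rows = [(10, 42.0), (10, 7.5), (10, 13.0)]
print(estimate_total(rows))  # 30 events estimated
print(estimate_sum(rows))    # 625.0
```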
    <div>
      <h3>Observing query response times</h3>
      <a href="#observing-query-response-times">
        
      </a>
    </div>
    <p>One query we pay attention to gives us information about the performance of our backend database clusters:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/q8KADDDRyASR7nWPQHoKc/c633325aca2b64e464fc820abfd5e653/image2-45.png" />
            
            </figure><p>As you can see, the 99th percentile (corresponding to the most complex 1% of queries) sometimes spikes up to about 300ms, but on average our backend responds to queries within 100ms.</p><p>This visualization is itself generated from an SQL query:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6UchtUYBtnDuXwKO7afWUc/1a584c18a7ce7fb74bcc2599756ed6f7/carbon--2-.png" />
            
            </figure>
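<p>In the same spirit as the query in the screenshot, a percentile-over-time query might look like the sketch below. The function and column names (<code>quantileWeighted</code>, <code>_sample_interval</code>, <code>double1</code>) are assumptions based on our SQL reference, not the exact query we run:</p>

```javascript
// Sketch of a p99-latency-per-5-minute-bucket query, built as a string.
// quantileWeighted takes the sample weight so sampled reads stay accurate.
function latencyP99Query(dataset) {
  return [
    "SELECT intDiv(toUInt32(timestamp), 300) * 300 AS t,",
    "       quantileWeighted(0.99)(double1, _sample_interval) AS p99",
    "FROM " + dataset,
    "GROUP BY t",
    "ORDER BY t"
  ].join("\n");
}
```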
    <div>
      <h3>Customer insights from high-cardinality data</h3>
      <a href="#customer-insights-from-high-cardinality-data">
        
      </a>
    </div>
    <p>Another use of Analytics Engine is to draw insights out of customer behavior. Our SQL API is particularly well-suited for this, as you can take full advantage of the power of SQL. Thanks to our ABR technology, even expensive queries can be carried out against huge datasets.</p><p>We use this ability to help prioritize improvements to Analytics Engine. Our SQL API supports a fairly standard dialect of SQL but isn’t feature-complete yet. If a user tries to do something unsupported in an SQL query, they get back a structured error message. Those error messages are reported into Analytics Engine. We’re able to aggregate the kinds of errors that our customers encounter, which helps inform which features to prioritize next.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/AmDjwutzQH089GhoJHvzw/b734eaa557a88f2d513968f20f10f28a/image3-36.png" />
            
            </figure><p>The SQL API returns errors in the format of <code>type of error: more details</code>, and so we can take the first portion before the colon to give us the type of error. We group by that, and get a count of how many times that error happened and how many users it affected:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Z1KYNLPlcb3rYTPJ9Fi8f/78ac0462fa7b5b1ae2db27d1dfd67d2b/Screenshot-2022-11-18-at-08.33.57.png" />
            
            </figure><p>To perform the above query using an ordinary metrics system, we would need to represent each error type with a different metric. Reporting that many metrics from each microservice creates scalability challenges. That problem doesn’t happen with Analytics Engine, because it’s designed to handle high-cardinality data.</p><p>Another big advantage of a high-cardinality store like Analytics Engine is that you can dig into specifics. If there’s a large spike in SQL errors, we may want to find which customers are having a problem in order to help them or identify what function they want to use. That’s easy to do with another SQL query:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ZghZO2Jyk153qPnvS13Mk/05891f4f5db7f1b3247e615c7d2373e1/carbon-3.png" />
            
            </figure><p>Inside Cloudflare, we have historically relied on querying our backend database servers for this type of information. Analytics Engine’s SQL API now enables us to open up our technology to our customers, so they can easily gather insights about their services at any scale!</p>
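<p>The error-type split described above (taking everything before the first colon in <code>type of error: more details</code>) is simple enough to sketch outside SQL as well; this hypothetical helper mirrors what the query does with string functions:</p>

```javascript
// Split "type of error: more details" into just the error type.
function errorType(message) {
  const colon = message.indexOf(":");
  if (colon === -1) return message;
  return message.slice(0, colon);
}
```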
    <div>
      <h2>Conclusion and what’s next</h2>
      <a href="#conclusion-and-whats-next">
        
      </a>
    </div>
    <p>The insights we gathered about usage of the SQL API are a super helpful input to our product prioritization decisions. We already added <a href="https://developers.cloudflare.com/analytics/analytics-engine/sql-reference/">support for <code>substring</code> and <code>position</code> functions</a>, which were used in the visualizations above.</p><p>Looking at the top SQL errors, we see numerous errors related to selecting columns. These errors are mostly coming from some usability issues related to the Grafana plugin. Adding support for the DESCRIBE statement should alleviate this: without it, the Grafana plugin doesn’t understand the table structure. This, as well as other improvements to our Grafana plugin, is on our roadmap.</p><p>We can also see that users are trying to query time ranges for older data that no longer exists. This suggests that our customers would appreciate having extended data retention. We’ve recently extended our retention from 31 to 92 days, and we will keep an eye on this to see if we should offer further extension.</p><p>We saw lots of errors related to common mistakes or misunderstandings of proper SQL syntax. This indicates that we could provide better examples or error explanations in our documentation to assist users with troubleshooting their queries.</p><p>Stay tuned to our <a href="https://developers.cloudflare.com/analytics/analytics-engine/">developer docs</a> to be informed as we continue to iterate and add more features!</p><p>You can start using Workers Analytics Engine now! Analytics Engine is now in open beta with free 90-day retention. <a href="https://dash.cloudflare.com/?to=/:account/workers/analytics-engine">Start using it today</a> or <a href="https://discord.gg/cloudflaredev">join our Discord community</a> to talk with the team.</p>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5Vbtic7QOMABAMIJbPSm7v</guid>
            <dc:creator>Jen Sells</dc:creator>
            <dc:creator>Miki Mokrysz</dc:creator>
        </item>
        <item>
            <title><![CDATA[Store and process your Cloudflare Logs... with Cloudflare]]></title>
            <link>https://blog.cloudflare.com/announcing-logs-engine/</link>
            <pubDate>Tue, 15 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re announcing Cloudflare Logs Engine — a new system that will enable you to do anything you need with Cloudflare Logs, all within Cloudflare ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/11il4y7Tq0Uyuzfc3SlYMl/b8f346cf6f306b43c69f1aa07cf71f38/image1-32.png" />
            
            </figure><p>Millions of customers trust Cloudflare to accelerate their websites, protect their networks, or serve as a platform on which to build their own applications. But, once you’re running in production, how do you know what’s going on with your application? You need <a href="https://developers.cloudflare.com/logs/">logs from Cloudflare</a> – a record of what happened on our network when your customers interacted with your product.</p><p>Cloudflare Logs are an indispensable tool for debugging applications, identifying security vulnerabilities, or just understanding how users are interacting with your product. However, our customers generate petabytes of logs, and store them for months or years at a time. Log data is tantalizing: all those answers, just waiting to be revealed with the right query! But until now, it’s been too hard for customers to actually store, search, and understand their logs without expensive and cumbersome third-party tools.</p><p>Today we’re announcing Cloudflare Logs Engine: a new product to enable any kind of investigation with Cloudflare Logs — all within Cloudflare.</p><p>Starting today, Cloudflare customers who push their logs to R2 can retrieve them by time range and unique identifier. Over the coming months we want to enable customers to:</p><ul><li><p>Store logs for any Cloudflare dataset, for as long as you want, with a few clicks</p></li><li><p>Access logs no matter what plan you use, without relying on third-party tools</p></li><li><p>Write queries that include multiple datasets</p></li><li><p>Quickly identify the logs you need and take action based on what you find</p></li></ul>
    <div>
      <h3>Why Cloudflare Logs?</h3>
      <a href="#why-cloudflare-logs">
        
      </a>
    </div>
    <p>When it comes to visibility into your traffic, most customers start with <i>analytics</i>. The Cloudflare dashboard is full of analytics about all of our products, which give a high-level overview of what’s happening: for example, the number of requests served, the ratio of cache hits, or the amount of CPU time used.</p><p>But sometimes, more detail is needed. Developers especially need to be able to read individual log lines to debug applications. For example, suppose you notice a problem where your application throws an error in an unexpected way – you need to know the cause of that error and see every request with that pattern.</p><p>Cloudflare offers tools like <a href="https://developers.cloudflare.com/logs/instant-logs/">Instant Logs</a> and <a href="https://developers.cloudflare.com/workers/wrangler/commands/#tail">wrangler tail</a> which excel at real-time debugging. These are incredibly helpful if you’re making changes on the fly, or if the problem occurs frequently enough that it will appear during your debugging session.</p><p>In other cases, you need to find that needle in a haystack — the one rare event that causes everything to go wrong. Or you might have identified a security issue and want to make sure you’ve identified <i>every</i> time that issue could have been exploited in your application’s history.</p><p>When this happens, you need logs. In particular, you need <i>forensics:</i> the ability to search the entire history of your logs.</p>
    <div>
      <h3>A brief overview of log analysis</h3>
      <a href="#a-brief-overview-of-log-analysis">
        
      </a>
    </div>
    <p>Before we take a look at Logs Engine itself, I want to briefly talk about alternatives – how have our customers been dealing with their logs so far?</p><p>Cloudflare has long offered <a href="https://developers.cloudflare.com/logs/logpull/">Logpull</a> and <a href="https://developers.cloudflare.com/logs/about/">Logpush</a>. Logpull enables enterprise customers to store their HTTP logs on Cloudflare for up to seven days, and retrieve them by either time or RayID. Logpush can send your Cloudflare logs just about anywhere on the Internet, quickly and reliably. While Logpush provides more flexibility, it’s been up to customers to actually store and analyze those logs.</p><p>Cloudflare has a number of <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/">partnerships</a> with SIEMs and data warehouses/data lakes. Many of these tools even have pre-built Cloudflare dashboards for easy visibility. And third party tools have a big advantage in that you can store and search across many log sources, not just Cloudflare.</p><p>That said, we’ve heard from customers that they have some challenges with these solutions.</p><p>First, third party log tooling can be expensive! Most tools require that you pay not just for storage, but for indexing all of that data when it’s ingested. While that enables powerful search functionality later on, Cloudflare (by its nature) is often one of the largest emitters of logs that a developer will have. If you were to store and index every log line we generate, it can cost more money to analyze the logs than to deliver the actual service.</p><p>Second, these tools can be hard to use. Logs are often used to track down an issue that customers discover via analytics in the Cloudflare dashboard. After finding what you need in logs, it can be hard to get back to the right part of the Cloudflare dashboard to make the appropriate configuration changes.</p><p>Finally, Logpush was previously limited to Enterprise plans. 
Soon, we will start offering these services to customers at any scale, regardless of plan type or how they choose to pay.</p>
    <div>
      <h3>Why Logs Engine?</h3>
      <a href="#why-logs-engine">
        
      </a>
    </div>
    <p>With Logs Engine, we wanted to solve these problems. We wanted to build something affordable, easy to use, and accessible to any Cloudflare customer. And we wanted it to work for any Cloudflare logs dataset, for any span of time.</p><p>Our first insight was that to make logs affordable, we need to separate storage and compute. The cost of storage is actually quite low! Thanks to R2, there’s no reason many of our customers can’t store all of their logs for long periods of time. At the same time, we want to separate out the <i>analysis</i> of logs so that customers only pay for the compute on logs they analyze – not every line ingested. While we’re still developing our query pricing, our aim is to be predictable, transparent, and upfront. You should never be surprised by the cost of a query (or run up a huge bill by accident).</p><p>It’s great to separate storage and compute. But, if you need to scan all of your logs anyway to answer the first question you have, you haven’t gained any benefit from this separation. In order to realize cost savings, it’s critical to narrow down your search before executing a query. That’s where our next big idea came in: a tight integration with analytics.</p><p>Most of the time, when analyzing logs, you don’t know what you’re looking for. For example, if you’re trying to find the cause of a specific origin status code, you may need to spend some time understanding which origins are impacted, which clients are triggering them, and the time range in which these errors happened. Thanks to our <a href="/explaining-cloudflares-abr-analytics/">ABR analytics</a>, we can provide a good summary of the data very quickly – but not the exact details of what happened. By integrating with analytics, we can help customers narrow down their queries, then switch to Logs Engine once they know exactly what they’re looking for.</p><p>Finally, we wanted to make logs accessible to anyone. 
That means all plan types – not just Enterprise.</p><p>Additionally, we want to make it easy to both set up log storage and analysis, and also to take action on logs once you find problems. With Logs Engine, it will be possible to search logs right from the dashboard, and to immediately create <a href="https://developers.cloudflare.com/rules/">rules</a> based on the patterns you find there.</p>
    <div>
      <h3>What’s available today and our roadmap</h3>
      <a href="#whats-available-today-and-our-roadmap">
        
      </a>
    </div>
    <p>Today, Enterprise customers can store logs in R2 and <a href="https://developers.cloudflare.com/logs/r2-log-retrieval/">retrieve them</a> via time range. Currently in beta, we also allow customers to retrieve logs by RayID (see our <a href="/r2-rayid-retrieval/">companion blog post</a>) — to join the beta, please email <a href="#">logs-engine-beta@cloudflare.com</a>.</p><p>Coming soon, we will enable customers on <i>all</i> plan types — not just Enterprise — to ingest logs into Logs Engine. Details on pricing will follow soon.</p><p>We also plan to build more powerful querying capability, beyond time range and RayID lookup. For example, we plan to support arbitrary filtering on any column, plus more expressive queries that can look across datasets or aggregate data.</p><p>But why stop at logs? This foundation lays the groundwork to support other types of data sources and queries one day. We are just getting started. Over the long term, we’re also exploring the ability to ingest data sources <i>outside</i> of Cloudflare and query them. Paired with <a href="https://developers.cloudflare.com/analytics/analytics-engine/">Analytics Engine</a> this is a formidable way to explore any data set in a cost-effective way!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">7l7PSKkuMDp5hbuUzIeZU5</guid>
            <dc:creator>Jon Levine</dc:creator>
            <dc:creator>Jen Sells</dc:creator>
        </item>
        <item>
            <title><![CDATA[Indexing millions of HTTP requests using Durable Objects]]></title>
            <link>https://blog.cloudflare.com/r2-rayid-retrieval/</link>
            <pubDate>Tue, 15 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ A deep-dive into how Cloudflare uses Durable Objects and the Streams API to index millions of HTTP requests stored in R2 ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5un0eKgtUR9mf6rWmZDXyM/7a918c41a2c463599d4fca0c43da97eb/image4-13.png" />
            
            </figure><p>Our customers rely on their Cloudflare logs to troubleshoot problems and debug issues. One of the biggest challenges with logs is the cost of managing them, so earlier this year, we launched the ability to <a href="/store-and-retrieve-logs-on-r2/">store and retrieve</a> Cloudflare logs using R2.</p><p>In this post, I’ll explain how we built the <a href="https://developers.cloudflare.com/logs/r2-log-retrieval/">R2 Log Retrieval API</a> using Cloudflare Workers with a focus on Durable Objects and the Streams API. Using these allows a customer to index and query millions of their Cloudflare logs stored in batches on R2.</p><p>Before we dive into the internals, you might be wondering why we don't just use a traditional database to index these logs. After all, databases are a well-proven technology. Well, the reason is that individual developers or companies, both large and small, often don't have the resources necessary to maintain such a database and the surrounding infrastructure to create this kind of setup.</p><p>Our approach instead relies on Durable Objects to maintain indexes of the data stored in R2, removing the complexity of managing and maintaining your own database. It was also super easy to add Durable Objects to our existing Workers code with just a few lines of config and some code. And R2 is very economical.</p>
    <div>
      <h2>Indexing</h2>
      <a href="#indexing">
        
      </a>
    </div>
    <p>Indexing data is often used to reduce the lookup time for a query by first pre-processing the data and computing an index – usually a file (or set of files) with a known structure that can be used to perform lookups on the underlying data. This approach makes lookups quick, as indexes typically contain the answer for a given query or, at the very least, tell you how to find it. For this project we are going to index records by the unique identifier called a <a href="https://developers.cloudflare.com/fundamentals/get-started/reference/cloudflare-ray-id/">RayID</a>, which our customers use to identify an HTTP request in their logs, but this solution can be modified to index many other types of data.</p><p>When indexing RayIDs for logs stored in R2, we choose an index structure that is fairly straightforward and is commonly known as a forward index. This type of index consists of a key-value mapping between a document's name and a list of words contained in that document. The terms "document" and "words" are meant to be generic, and you get to define what a document and a word are.</p><p>In our case, a document is a batch of logs stored in R2 and the words are RayIDs contained within that document. For example, our index currently looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/GSL12fcNI0rI91FttQe9Y/317612cd90d9dfb4ed4ea5cb8c4d1f8b/image1-33.png" />
            
            </figure>
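<p>A forward index of this shape is easy to picture in code. A minimal in-memory sketch (the Durable Object version described next persists the same mapping with <code>storage.put</code>/<code>storage.get</code>):</p>

```javascript
// Forward index: batch (document) name -> list of RayIDs (words) it contains.
function addToIndex(index, batchName, rayIds) {
  const existing = index.get(batchName) || [];
  index.set(batchName, existing.concat(rayIds));
}

function rayIdsInBatch(index, batchName) {
  return index.get(batchName) || [];
}
```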
    <div>
      <h3>Building an index using Durable Objects and the Streams API</h3>
      <a href="#building-an-index-using-durable-objects-and-the-streams-api">
        
      </a>
    </div>
    <p>In order to maintain the state of our index, we chose to use Durable Objects. Each Durable Object has its own transactional key-value storage. Therefore, to store our index, we assign each key to be the document (batch) name and each value to be a JSON array of RayIDs within that document. When extracting the RayIDs for a given batch, updating the index becomes as simple as <code>storage.put(batchName, rayIds)</code>. Likewise, getting all the RayIDs for a document is just a call to <code>storage.get(batchName)</code>.</p><p>When performing indexing, since the batches are stored using compression and often with a 30-100x compression ratio, reading an entire batch into memory can lead to out-of-memory (OOM) errors in our Worker. To get around this, we use the Streams API to avoid the OOM errors by processing the data in smaller chunks. There are two types of streams available: byte-oriented and value-oriented. Byte-oriented streams operate at the byte level for things such as compressing and decompressing data, while value-oriented streams work on first-class values in JavaScript. Numbers, strings, <code>undefined</code>, <code>null</code>, objects, you name it. If it's a valid JavaScript value, you can process it using a value-oriented stream. The Streams API also allows us to define our own JavaScript transformations for both byte- and value-oriented streams.</p><p>So, when our API receives a request to index a batch, our Worker streams the contents from R2 into a pipeline of <code>TransformStream</code>s to decompress the data, decode the bytes into strings, split the data into records on newlines, and finally collect all the RayIDs. Once we've collected all the RayIDs, the data is then persisted in the index by making calls to the Durable Object, which in turn calls the aforementioned <code>storage.put</code> method to persist the data to the index. To illustrate what I mean, I include some code detailing the steps described above.</p>
            <pre><code>// Index a single batch: stream it from R2, extract every RayID it contains,
// and persist the batch-name-to-RayIDs mapping in Durable Object storage.
async function index(r2, bucket, key, storage) {
  const obj = await getObject(r2, bucket, key);

  const rawStream = obj.Body as ReadableStream;
  const index: Record&lt;string, string[]&gt; = {};

  // Value-oriented transform: collect the RayIDs found in each record.
  const rayIdIndexingStream = new TransformStream({
    transform(chunk: string) {
      for (const match of chunk.matchAll(RAYID_FIELD_REGEX)) {
        const { rayid } = match.groups!;
        if (key in index) {
          index[key].push(rayid);
        } else {
          index[key] = [rayid];
        }
      }
    }
  });

  // Pipeline: decompress, decode bytes to text, split on newlines, extract RayIDs.
  await collectStream(rawStream.pipeThrough(new DecompressionStream('gzip')).pipeThrough(textDecoderStream()).pipeThrough(readlineStream()).pipeThrough(rayIdIndexingStream));
  await storage.put(index);
}</code></pre>
            
    <div>
      <h3>Searching for a RayID</h3>
      <a href="#searching-for-a-rayid">
        
      </a>
    </div>
    <p>Once a batch has been indexed, and the index is persisted to our Durable Object, we can then query it using a combination of the RayID and a batch name to check the presence of a RayID in that given batch. Assuming that a zone is producing logs at the rate of one batch per second, 86,400 batches would be produced in a day! Over the course of a week, there would be far too many keys in our index for us to be able to iterate through them all in a timely manner. This is where the encoding of a RayID and the naming of each batch comes into play.</p><p>A RayID is currently (and I emphasize currently, because the format has changed before and can change again) a 16-character hex-encoded value whose first 36 bits encode a timestamp, and it looks something like: <code>764660d8ec5c2ab1</code>. Note that the format of the RayID is likely to evolve in the near future, but for now we can use it to optimize retrieval.</p><p>Each batch produced by Logpush also happens to encode the time the batch was started and completed. Last but not least, upon analysis of our logging pipeline we found that 95% of RayIDs can be found in a batch produced within five minutes of the request completing. (Note that the encoded time sets a <i>lower bound</i> of the batch time we need to search).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CQQIphdmvG7aDifAvU3YT/8a4461caaaec9ad2bbea1bfea5cfd4d2/image5-8.png" />
            
            </figure><p>Encoding of a RayID</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kqEm2Y08awXjSUjx1oGg4/12c4768e6779434f31ce51b55dc99dec/image2-26.png" />
            
            </figure><p>Encoding of a batch name</p><p>For example: say we have a request that was made on November 3 at 16:00:00 UTC. We only need to check the batches under the prefix <code>20221103</code> and those batches that contain the time range of 16:00:00 to 16:05:00 UTC.</p>
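<p>Since the first 36 bits of a RayID correspond to its first nine hex characters (9 × 4 = 36), extracting the raw timestamp value is a one-liner. This is only a sketch: the epoch and units of that value are internal and, as noted above, the format may change.</p>

```javascript
// Extract the raw 36-bit timestamp prefix of a RayID (its first 9 hex chars).
// The interpretation of this value is internal and subject to change.
function rayIdTimestampBits(rayid) {
  return parseInt(rayid.slice(0, 9), 16);
}
```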
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Y64vyD65p3JPXHZyEOoak/ea61d45ae19b2f36c75a2fba2d81bf19/image3-20.png" />
            
            </figure><p>By reducing the number of batches to just a small handful of possibilities for a given RayID, we can simply ask our index if any of those batches contains the RayID by iterating through all the batch names (keys).</p>
            <pre><code>// List the candidate batches for the time window, then ask the index whether
// any of them contains the RayID. Returns the matching batch key, if any.
async function lookup(r2, bucket, prefix, start, end, rayid, storage) {
  const keys = await listObjects(r2, bucket, prefix, start, end);
  for (const key of keys) {
    const haystack: string[] | undefined = await storage.get(key);
    if (haystack &amp;&amp; haystack.includes(rayid)) {
      return key;
    }
  }
  return undefined;
}</code></pre>
            <p>If the RayID is found in a batch, we then stream the corresponding batch back from R2 using another <code>TransformStream</code> pipeline to filter out any non-matching records from the final response. If no result was found, we return an error message saying we were unable to find the RayID.</p>
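<p>That final filtering step can be sketched as a plain function over newline-delimited records (the real pipeline does this inside a <code>TransformStream</code> so the batch is never fully buffered in memory):</p>

```javascript
// Keep only the newline-delimited records that mention the RayID.
function filterRecords(batchText, rayid) {
  return batchText
    .split("\n")
    .filter(function (line) { return line.includes(rayid); })
    .join("\n");
}
```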
    <div>
      <h2>Summary</h2>
      <a href="#summary">
        
      </a>
    </div>
    <p>To recap, we showed how we can use Durable Objects and their underlying storage to create and manage forward indexes for efficient RayID lookups across potentially millions of records. All without needing to manage your own database or logging infrastructure, or to do a full scan of the data.</p><p>While this is just one possible use case for Durable Objects, we are just getting started. If you haven't read it before, check out <a href="/how-we-built-instant-logs/">How we built Instant Logs</a> to see another application of Durable Objects to stream millions of logs in real-time to your Cloudflare Dashboard!</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We currently offer RayID lookups for the HTTP Requests and Firewall Events datasets, with support for Workers Trace Events coming soon. This is just the beginning for our Log Retrieval API, and we are already looking to add the ability to index and query on more types of fields such as status codes and host names. We also plan to integrate this into the dashboard so that developers can quickly retrieve the logs they need without needing to craft the necessary API calls by hand.</p><p>Last but not least, we want to make our retrieval functionality even more powerful and are looking at adding more complex types of filters and queries that you can run against your logs.</p><p>As always, stay tuned to the blog for more updates, and see our <a href="https://developers.cloudflare.com/logs/r2-log-retrieval/">developer documentation</a> for instructions on how to get started with log retrieval. If you’re interested in joining the beta, please email <a href="#">logs-engine-beta@cloudflare.com</a>.</p>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4ldcTm79WWoqO7tdPj8FNj</guid>
            <dc:creator>Cole MacKenzie</dc:creator>
        </item>
        <item>
            <title><![CDATA[Logpush: now lower cost and with more visibility]]></title>
            <link>https://blog.cloudflare.com/logpush-filters-alerts/</link>
            <pubDate>Thu, 22 Sep 2022 13:30:00 GMT</pubDate>
            <description><![CDATA[ Logpush jobs can now be filtered to contain only logs of interest. Also, you can receive alerts when jobs are failing, as well as get statistics on the health of your jobs ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Logs are a critical part of every successful application. Cloudflare products and services around the world generate massive amounts of logs upon which customers of all sizes depend. Structured logging from our products are used by customers for purposes including analytics, debugging performance issues, <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitoring application health</a>, maintaining security standards for compliance reasons, and much more.</p><p>Logpush is Cloudflare’s product for pushing these critical logs to customer systems for consumption and analysis. Whenever our products generate logs as a result of traffic or data passing through our systems from anywhere in the world, we buffer these logs and push them directly to customer-defined destinations like <a href="https://www.cloudflare.com/products/r2/">Cloudflare R2</a>, Splunk, AWS S3, and many more.</p><p>Today we are announcing three new key features related to Cloudflare’s <a href="https://developers.cloudflare.com/logs/about/">Logpush</a> product. First, the ability to have only logs matching certain criteria be sent. Second, the ability to get alerted when logs are failing to be pushed due to customer destinations having issues or network issues occurring between Cloudflare and the customer destination. In addition, customers will also be able to query for analytics around the health of Logpush jobs like how many bytes and records were pushed, number of successful pushes, and number of failing pushes.</p>
    <div>
      <h3>Filtering logs before they are pushed</h3>
      <a href="#filtering-logs-before-they-are-pushed">
        
      </a>
    </div>
    <p>Because logs are both critical and generated with high volume, many customers have to maintain complex infrastructure just to ingest and store logs, as well as deal with ever-increasing related costs. On a typical day, one real example customer receives about 21 billion records, or 2.1 TB of gzip-compressed logs (about 24.9 TB uncompressed). Over the course of a month, that could easily be hundreds of billions of events and hundreds of terabytes of data.</p><p>It is often unnecessary to store and analyze all of this data, and customers could get by with specific subsets of the data matching certain criteria. For example, a customer might want just the set of HTTP data that had status code &gt;= 400, or the set of firewall data where the action taken was to block the user. We can now achieve this in our Logpush jobs by setting specific filters on the fields of the log messages themselves. You can use either our <a href="https://developers.cloudflare.com/logs/reference/filters/">API</a> or the Cloudflare dashboard to set up filters.</p><p>To do this in the dashboard, either create a new Logpush job or modify an existing job. You will see the option to set certain filters. For example, an ecommerce customer might want to receive logs only for the checkout page where the bot score was non-zero:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/AA0ciHtY8MWoYGyHOKMnQ/bc24585b7d80216f039b8fea1d003086/image2-30.png" />
            
            </figure>
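<p>Via the API, the same filter is expressed as a JSON <code>where</code> clause set on the Logpush job. Below is a minimal sketch of building that expression; the field names and operators follow the filters reference linked above, but the path and threshold values are illustrative, so check the reference for the current schema:</p>

```python
import json

# Illustrative filter for the checkout example: only push HTTP request
# logs where the path contains /checkout and the bot score is non-zero.
# Field names ("ClientRequestPath", "BotScore") and operators ("contains",
# "gt") follow the Logpush filters reference; verify before relying on them.
filter_expr = {
    "where": {
        "and": [
            {"key": "ClientRequestPath", "operator": "contains", "value": "/checkout"},
            {"key": "BotScore", "operator": "gt", "value": 0},
        ]
    }
}

# The job's "filter" property takes the expression as a JSON-encoded string.
job_patch = {"filter": json.dumps(filter_expr)}
print(job_patch["filter"])
```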
    <div>
      <h3>Logpush job alerting</h3>
      <a href="#logpush-job-alerting">
        
      </a>
    </div>
    <p>When logs are a critical part of your infrastructure, you want peace of mind that your logging pipeline is healthy. With that in mind, we are announcing the ability to get notified when your Logpush jobs have been retrying to push and failing for 24 hours.</p><p>To set up alerts in the Cloudflare dashboard:</p><p>1. First, navigate to “Notifications” in the left panel of the account view</p><p>2. Next, click the “Add” button</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JLWRGmLeEDAsWSgD6QNGY/8226b630dcee4b2963a8a1447ebfccb5/image3-22.png" />
            
            </figure><p>3. Select the alert “Failing Logpush Job Disabled”</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xw94OPKQCGuN9GwOUnanf/bf6179316ec0ed87075debd816777b93/image1-35.png" />
            
            </figure><p>4. Configure the alert and click Save.</p><p>That’s it — you will receive an email alert if your Logpush job is disabled.</p>
    <div>
      <h3>Logpush Job Health API</h3>
      <a href="#logpush-job-health-api">
        
      </a>
    </div>
    <p>We have also added the ability to query stats related to the health of your Logpush jobs via our GraphQL API. Customers can now query for things like the number of bytes pushed, the number of compressed bytes pushed, the number of records pushed, the status of each push, and much more. Using these stats, customers gain greater visibility into a core part of their infrastructure. The GraphQL API is self-documenting, so full details about the new <code>logpushHealthAdaptiveGroups</code> node can be found using any GraphQL client; head to the <a href="https://developers.cloudflare.com/analytics/graphql-api/">GraphQL docs</a> for more information.</p><p>Below are a couple of example queries showing how you can use GraphQL to find stats related to your Logpush jobs.</p><p>Query for the number of pushes to S3 that resulted in a status code != 200:</p>
            <pre><code>query
{
  viewer
  {
    zones(filter: { zoneTag: $zoneTag})
    {
      logpushHealthAdaptiveGroups(filter: {
        datetime_gt:"2022-08-15T00:00:00Z",
        destinationType:"s3",
        status_neq:200
      }, 
      limit:10)
      {
        count,
        dimensions {
          jobId,
          status,
          destinationType
        }
      }
    }
  }
}</code></pre>
            <p>Get the number of bytes, compressed bytes, and records that were pushed:</p>
            <pre><code>query
{
  viewer
  {
    zones(filter: { zoneTag: $zoneTag})
    {
      logpushHealthAdaptiveGroups(filter: {
        datetime_gt:"2022-08-15T00:00:00Z",
        destinationType:"s3",
        status:200
      }, 
      limit:10)
      {
        sum {
          bytes,
          bytesCompressed,
          records
        }
      }
    }
  }
}</code></pre>
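<p>Once the health data is flowing, responses can be post-processed like any other GraphQL result. A small sketch of reading the second query’s response, assuming a response shaped like the query (the figures here are invented for illustration):</p>

```python
# The response nesting (data -> viewer -> zones -> logpushHealthAdaptiveGroups)
# mirrors the query above; the numbers are made up for illustration.
sample_response = {
    "data": {
        "viewer": {
            "zones": [
                {
                    "logpushHealthAdaptiveGroups": [
                        {"sum": {"bytes": 120_000_000, "bytesCompressed": 11_500_000, "records": 450_000}}
                    ]
                }
            ]
        }
    }
}

groups = sample_response["data"]["viewer"]["zones"][0]["logpushHealthAdaptiveGroups"]
totals = groups[0]["sum"]
ratio = totals["bytes"] / totals["bytesCompressed"]
print(f"pushed {totals['records']} records, compression ratio ~{ratio:.1f}x")
```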
            
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>Logpush is a robust and flexible platform for customers who need to integrate their own logging and monitoring systems with Cloudflare. Different Logpush jobs can be deployed to support multiple destinations or, with filtering, multiple subsets of logs.</p><p>Customers who haven't created Logpush jobs are encouraged to do so. Try pushing your logs to R2 for <a href="/store-and-retrieve-logs-on-r2/">safekeeping</a>! For customers who don't currently have access to this powerful tool, consider upgrading your plan.</p> ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">5Wf012tJlAxide9kVaxpDX</guid>
            <dc:creator>Duc Nguyen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Store and retrieve your logs on R2]]></title>
            <link>https://blog.cloudflare.com/store-and-retrieve-logs-on-r2/</link>
            <pubDate>Wed, 21 Sep 2022 14:15:00 GMT</pubDate>
            <description><![CDATA[ Log Storage on R2: a cost-effective solution to store event logs for any of our products ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Following today’s announcement of the General Availability of Cloudflare R2 <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>, we’re excited to announce that customers can also store and retrieve their logs on R2.</p><p>Cloudflare’s Logging and Analytics products provide vital insights into customers’ applications. Though we have a breadth of capabilities, logs in particular play a pivotal role in understanding what occurs at a granular level; we produce detailed logs containing metadata generated by Cloudflare products via events flowing through our network, and customers depend on them to illustrate or investigate anything (and everything) from the general performance or health of applications to security incidents.</p><p>Until today, we have only provided customers with the ability to export logs to third-party destinations for both storage and analysis. With Log Storage on R2, however, we can offer customers a cost-effective solution to store event logs for any of our products.</p>
    <div>
      <h3>The cost conundrum</h3>
      <a href="#the-cost-conundrum">
        
      </a>
    </div>
    <p>We’ve <a href="/logs-r2/">unpacked the commercial impact in a previous blog post</a>, but to recap, the <a href="https://r2-calculator.cloudflare.com/">cost of storage can vary broadly</a> depending on the volume of requests Internet properties receive. On top of that - and specifically pertaining to logs - there are usually more expensive fees to access that data whenever the need arises. This can be incredibly problematic, especially when customers have to balance their budget with the need to access their logs - whether it's to mitigate a potential catastrophe or just out of curiosity.</p><p>With R2, not only do we charge customers no <a href="https://developers.cloudflare.com/r2/platform/pricing/">egress costs</a>, but we also provide the opportunity to make further operational savings by centralizing storage and retrieval. Most of all, though, we just want to make it easy and convenient for customers to access their logs via our Retrieval API - all you need to do is provide a time range!</p>
    <div>
      <h3>Logs on R2: get started!</h3>
      <a href="#logs-on-r2-get-started">
        
      </a>
    </div>
    <p>Why would you want to store your logs on <a href="https://www.cloudflare.com/products/r2/">Cloudflare R2</a>? First, R2 is S3 API compatible, so your existing tooling will continue to work as is. Second, not only is R2 cost-effective for storage, we also do not charge any egress fees if you want to get your logs out of Cloudflare to be ingested into your own systems. You can store logs for any Cloudflare product, and you can also store what you need for as long as you need; retention is completely within your control.</p>
    <div>
      <h3>Storing Logs on R2</h3>
      <a href="#storing-logs-on-r2">
        
      </a>
    </div>
    <p>To create Logpush jobs pushing to R2, you can use either the dashboard or Cloudflare API. Using the dashboard, you can create a job and select R2 as the destination during configuration:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LxlofboVHwn12lpAJ3iok/d8a69475331cf09feae68d1523b5dd76/image2-26.png" />
            
            </figure><p>To use the Cloudflare API to create the job, do something like:</p>
            <pre><code>curl -s -X POST 'https://api.cloudflare.com/client/v4/zones/&lt;ZONE_ID&gt;/logpush/jobs' \
-H "X-Auth-Email: &lt;EMAIL&gt;" \
-H "X-Auth-Key: &lt;API_KEY&gt;" \
-d '{
  "name": "&lt;DOMAIN_NAME&gt;",
  "destination_conf": "r2://&lt;BUCKET_PATH&gt;/{DATE}?account-id=&lt;ACCOUNT_ID&gt;&amp;access-key-id=&lt;R2_ACCESS_KEY_ID&gt;&amp;secret-access-key=&lt;R2_SECRET_ACCESS_KEY&gt;",
  "dataset": "http_requests",
  "logpull_options": "fields=ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID&amp;timestamps=rfc3339",
  "kind": "edge"
}' | jq .</code></pre>
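<p>The <code>destination_conf</code> in the call above packs several credentials into one URL, which is easy to mangle by hand. A small sketch of assembling it programmatically; all values are placeholders, and <code>{DATE}</code> is left literal because Cloudflare substitutes it at push time:</p>

```python
from urllib.parse import urlencode

# Placeholder values; substitute your real account ID, bucket path, and R2 keys.
account_id = "0123456789abcdef0123456789abcdef"
bucket_path = "my-logs-bucket/http_requests"
params = urlencode({
    "account-id": account_id,
    "access-key-id": "R2_ACCESS_KEY_ID",
    "secret-access-key": "R2_SECRET_ACCESS_KEY",
})
# {DATE} stays literal: Cloudflare expands it to the date of each push.
destination_conf = f"r2://{bucket_path}/{{DATE}}?{params}"
print(destination_conf)
```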
            <p>Please see <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/r2/">Logpush over R2</a> docs for more information.</p>
    <div>
      <h3>Log Retrieval on R2</h3>
      <a href="#log-retrieval-on-r2">
        
      </a>
    </div>
    <p>Once your logs are pushed to R2, you can use the Cloudflare API to retrieve them for specific time ranges, like the following:</p>
            <pre><code>curl -s -g -X GET 'https://api.cloudflare.com/client/v4/accounts/&lt;ACCOUNT_ID&gt;/logs/retrieve?start=2022-09-25T16:00:00Z&amp;end=2022-09-25T16:05:00Z&amp;bucket=&lt;YOUR_BUCKET&gt;&amp;prefix=&lt;YOUR_FILE_PREFIX&gt;/{DATE}' \
-H "X-Auth-Email: &lt;EMAIL&gt;" \
-H "X-Auth-Key: &lt;API_KEY&gt;" \
-H "R2-Access-Key-Id: &lt;R2_ACCESS_KEY_ID&gt;" \
-H "R2-Secret-Access-Key: &lt;R2_SECRET_ACCESS_KEY&gt;" | jq .</code></pre>
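<p>The <code>start</code>/<code>end</code> parameters must be URL-safe timestamps, so it can be convenient to build the query string programmatically. A sketch for the same five-minute window as above (bucket and prefix are placeholders):</p>

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Retrieve the five minutes ending 2022-09-25T16:05:00Z, as in the
# curl example above. Bucket and prefix values are placeholders.
end = datetime(2022, 9, 25, 16, 5, tzinfo=timezone.utc)
start = end - timedelta(minutes=5)
qs = urlencode({
    "start": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "end": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
    "bucket": "YOUR_BUCKET",
    "prefix": "YOUR_FILE_PREFIX/{DATE}",
})
print(qs)
```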
            <p>See <a href="https://developers.cloudflare.com/logs/r2-log-retrieval/">Log Retrieval API</a> for more details.</p><p>Now that you have critical logging infrastructure on Cloudflare, you probably want to be able to monitor the health of these Logpush jobs as well as get relevant alerts when something needs your attention.</p>
    <div>
      <h3>Looking forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>While we have a vision to build out log analysis and forensics capabilities on top of R2 - and a roadmap to get us there - we’d still love to hear your thoughts on any improvements we can make, particularly to our retrieval options.</p><p>Get setup on <a href="https://www.cloudflare.com/products/r2/">R2</a> to start <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/r2/">pushing logs</a> today! If your current plan doesn’t include Logpush, storing logs on R2 is another great reason to upgrade!</p> ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Storage]]></category>
            <guid isPermaLink="false">1jAMjJ9M01pYKP2KUM2EVn</guid>
            <dc:creator>Shelley Jones</dc:creator>
            <dc:creator>Duc Nguyen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Integrating Network Analytics Logs with your SIEM dashboard]]></title>
            <link>https://blog.cloudflare.com/network-analytics-logs/</link>
            <pubDate>Tue, 17 May 2022 15:46:30 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce the availability of Network Analytics Logs for maximum visibility into L3/4 traffic and DDoS attacks ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’re excited to announce the availability of Network Analytics Logs. <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a>, <a href="https://www.cloudflare.com/magic-firewall/">Magic Firewall</a>, <a href="https://www.cloudflare.com/magic-wan/">Magic WAN</a>, and <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> customers on the Enterprise plan can feed packet samples directly into storage services, <a href="https://www.cloudflare.com/network-services/solutions/network-monitoring-tools/">network monitoring tools</a> such as Kentik, or their <a href="https://www.cloudflare.com/learning/security/what-is-siem/">Security Information Event Management (SIEM)</a> systems such as Splunk to gain near real-time visibility into network traffic and <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a>.</p>
    <div>
      <h2>What’s included in the logs</h2>
      <a href="#whats-included-in-the-logs">
        
      </a>
    </div>
    <p>Once you create a Network Analytics Logs job, Cloudflare will continuously push logs of packet samples directly to the HTTP endpoint of your choice, including WebSocket endpoints. The logs arrive in JSON format, which makes them easy to parse, transform, and aggregate. The logs include packet samples of traffic dropped and passed by the following systems:</p><ol><li><p>Network-layer DDoS Protection Ruleset</p></li><li><p>Advanced TCP Protection</p></li><li><p>Magic Firewall</p></li></ol><p>Note that not all mitigation systems are applicable to all Cloudflare services. Below is a table describing which mitigation system is applicable to which Cloudflare service:</p>
<table>
<thead>
  <tr>
    <th rowspan="2"><span>Mitigation System</span></th>
    <th colspan="3"><span>Cloudflare Service</span></th>
  </tr>
  <tr>
    <th><span>Magic Transit</span></th>
    <th><span>Magic WAN</span></th>
    <th><span>Spectrum</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Network-layer DDoS Protection Ruleset</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
    <td><span>✅</span></td>
  </tr>
  <tr>
    <td><span>Advanced TCP Protection</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
    <td><span>❌</span></td>
  </tr>
  <tr>
    <td><span>Magic Firewall </span></td>
    <td><span>✅</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
  </tr>
</tbody>
</table><p>Packets are processed by the mitigation systems in the order outlined above. Therefore, a packet that passed all three systems may produce three packet samples, one from each system. This can be very insightful when troubleshooting and wanting to understand where in the stack a packet was dropped. To avoid overcounting the total passed traffic, Magic Transit users should only take into consideration the passed packets from the last mitigation system, Magic Firewall.</p><p>An example of a packet sample log:</p>
            <pre><code>{"AttackCampaignID":"","AttackID":"","ColoName":"bkk06","Datetime":1652295571783000000,"DestinationASN":13335,"Direction":"ingress","IPDestinationAddress":"(redacted)","IPDestinationSubnet":"/24","IPProtocol":17,"IPSourceAddress":"(redacted)","IPSourceSubnet":"/24","MitigationReason":"","MitigationScope":"","MitigationSystem":"magic-firewall","Outcome":"pass","ProtocolState":"","RuleID":"(redacted)","RulesetID":"(redacted)","RulesetOverrideID":"","SampleInterval":100,"SourceASN":38794,"Verdict":"drop"}</code></pre>
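<p>Because each log line is standalone JSON, downstream parsing is straightforward. A minimal sketch, using an abbreviated version of the sample above, that scales a sample back to an estimated packet count on the assumption that <code>SampleInterval</code> is a 1-in-N sampling rate:</p>

```python
import json

# Abbreviated packet-sample line from the example above.
line = ('{"ColoName":"bkk06","MitigationSystem":"magic-firewall",'
        '"Outcome":"pass","SampleInterval":100,"Verdict":"drop"}')
sample = json.loads(line)

# With 1-in-N sampling, each sampled packet stands for roughly
# SampleInterval real packets.
estimated_packets = sample["SampleInterval"]
label = f'{sample["MitigationSystem"]}/{sample["Outcome"]}'
print(f"{label}: ~{estimated_packets} packets sampled at {sample['ColoName']}")
```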
            <p>All the available log fields are documented here: <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/">https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/</a></p>
    <div>
      <h2>Setting up the logs</h2>
      <a href="#setting-up-the-logs">
        
      </a>
    </div>
    <p>In this walkthrough, we will demonstrate how to feed the Network Analytics Logs into Splunk via <a href="https://www.postman.com/">Postman</a>. At this time, it is only possible to set up Network Analytics Logs via API. Setting up the logs requires three main steps:</p><ol><li><p>Create a Cloudflare API token.</p></li><li><p>Create a Splunk Cloud HTTP Event Collector (HEC) token.</p></li><li><p>Create and enable a Cloudflare Logpush job.</p></li></ol><p>Let’s get started!</p>
    <div>
      <h3>1) Create a Cloudflare API token</h3>
      <a href="#1-create-a-cloudflare-api-token">
        
      </a>
    </div>
    <ol><li><p>Log in to your Cloudflare account and navigate to <b>My Profile.</b></p></li><li><p>On the left-hand side, in the collapsing navigation menu, click <b>API Tokens.</b></p></li><li><p>Click <b>Create Token</b> and then, under <b>Custom token</b>, click <b>Get started.</b></p></li><li><p>Give your custom token a name, and select an account-scoped permission to edit Logs. You can scope it to a specific account, a subset of accounts, or all of your accounts.</p></li><li><p>At the bottom, click <b>Continue to summary</b>, and then <b>Create Token</b>.</p></li><li><p><b>Copy</b> and save your token. You can also test your token with the provided snippet in Terminal.</p></li></ol><p>When you're using an API token, you don't need to provide your email address as part of the API credentials.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4HD1OtGie9s6CbcVPA7KUx/2f103d64615a7d44e9d3adebc1e12a5b/image5-17.png" />
            
            </figure><p>Read more about creating an API token on the Cloudflare Developers website: <a href="https://developers.cloudflare.com/api/tokens/create/">https://developers.cloudflare.com/api/tokens/create/</a></p>
    <div>
      <h3>2) Create a Splunk token for an HTTP Event Collector</h3>
      <a href="#2-create-a-splunk-token-for-an-http-event-collector">
        
      </a>
    </div>
    <p>In this walkthrough, we’re using a Splunk Cloud free trial, but <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/">you can use almost any service that can accept logs over HTTPS</a>. In some cases, if you’re using an on-premise SIEM solution, you may need to allowlist <a href="https://www.cloudflare.com/ips/">Cloudflare IP address</a> in your firewall to be able to receive the logs.</p><ol><li><p>Create a Splunk Cloud account. I created a trial account for the purpose of this blog.</p></li><li><p>In the Splunk Cloud dashboard, go to <b>Settings</b> &gt; <b>Data Input.</b></p></li><li><p>Next to <b>HTTP Event Collector</b>, click <b>Add new.</b></p></li><li><p>Follow the steps to create a token.</p></li><li><p>Copy your token and your allocated Splunk hostname and save both for later.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3n4C580G6FxELEaOqPO7Wz/58c6f8fa136f71413b008ace676c3ca9/image2-41.png" />
            
            </figure><p>Read more about using Splunk with Cloudflare Logpush on the Cloudflare Developers website: <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/splunk/">https://developers.cloudflare.com/logs/get-started/enable-destinations/splunk/</a></p><p>Read more about creating an HTTP Event Collector token on Splunk’s website: <a href="https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector">https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector</a></p>
    <div>
      <h3>3) Create a Cloudflare Logpush job</h3>
      <a href="#3-create-a-cloudflare-logpush-job">
        
      </a>
    </div>
    <p>Creating and enabling a job is very straightforward. It requires only one API call to Cloudflare to create and enable a job.</p><p>To send the API calls I used <a href="https://www.postman.com/">Postman</a>, which is a user-friendly API client that was recommended to me by a colleague. It allows you to save and customize API calls. You can also use Terminal/CMD or any other API client/script of your choice.</p><p>One thing to notice is that Network Analytics Logs are <b>account</b>-scoped. The API endpoint is therefore a tad different from what you would normally use for zone-scoped datasets such as HTTP request logs and DNS logs.</p><p>This is the endpoint for creating an account-scoped Logpush job:</p><p><code>https://api.cloudflare.com/client/v4/accounts/{account-id}/logpush/jobs</code></p><p>Your account identifier is a unique identifier of your account. It is a string of 32 numbers and characters. If you’re not sure what your account identifier is, log in to Cloudflare, select the appropriate account, and copy the string at the end of the URL.</p><p><code>https://dash.cloudflare.com/{account-id}</code></p><p>Then, set up a new request in Postman (or any other API client/CLI tool).</p><p>To successfully create a Logpush job, you’ll need the HTTP method, URL, Authorization token, and request body (data). The request body must include a destination configuration (<code>destination_conf</code>), the specified dataset (<code>network_analytics_logs</code>, in our case), and the token (your Splunk token).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SlAhohD7nH7Ge6ODVPiWT/132cb3aa127ba2cc4596b2fbc2f2c6e8/image1-48.png" />
            
            </figure><p><b>Method</b>:</p><p><code>POST</code></p><p><b>URL</b>:</p><p><code>https://api.cloudflare.com/client/v4/accounts/{account-id}/logpush/jobs</code></p><p><b>Authorization</b>: Define a Bearer authorization in the <b>Authorization</b> tab, or add it to the header, and add your Cloudflare API token.</p><p><b>Body</b>: Select <b>Raw</b> &gt; <b>JSON</b></p>
            <pre><code>{
  "destination_conf": "{your-unique-splunk-configuration}",
  "dataset": "network_analytics_logs",
  "token": "{your-splunk-hec-token}",
  "enabled": true
}</code></pre>
            <p>If you’re using Splunk Cloud, then your unique configuration has the following format:</p><p><code>{your-unique-splunk-configuration} = splunk://{your-splunk-hostname}.splunkcloud.com:8088/services/collector/raw?channel={channel-id}&amp;header_Authorization=Splunk%20{your-splunk-hec-token}&amp;insecure-skip-verify=false</code></p><p>Definition of the variables:</p><p><code>{your-splunk-hostname}</code> = Your allocated Splunk Cloud hostname.</p><p><code>{channel-id}</code> = A unique ID that you choose to assign.</p><p><code>{your-splunk-hec-token}</code> = The token that you generated for your Splunk HEC.</p><p>An important note is that customers should have a valid <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL/TLS certificate</a> on their Splunk instance to support an encrypted connection.</p><p>After you’ve done that, you can create a GET request to the same URL (no request body needed) to verify that the job was created and is enabled.</p><p>The response should be similar to the following:</p>
            <pre><code>{
    "errors": [],
    "messages": [],
    "result": {
        "id": {job-id},
        "dataset": "network_analytics_logs",
        "frequency": "high",
        "kind": "",
        "enabled": true,
        "name": null,
        "logpull_options": null,
        "destination_conf": "{your-unique-splunk-configuration}",
        "last_complete": null,
        "last_error": null,
        "error_message": null
    },
    "success": true
}</code></pre>
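<p>One place where hand-assembly often goes wrong is the percent-encoding inside <code>destination_conf</code> (for example, the <code>%20</code> between <code>Splunk</code> and the token). A small sketch of building it programmatically, with placeholder values and the format described above:</p>

```python
from urllib.parse import quote, urlencode

# Placeholder values; swap in your Splunk Cloud hostname, channel ID,
# and HEC token.
host = "your-splunk-hostname"
channel_id = "my-channel-1"
hec_token = "YOUR-SPLUNK-HEC-TOKEN"

# quote_via=quote encodes the space in "Splunk {token}" as %20 rather
# than the default "+".
params = urlencode({
    "channel": channel_id,
    "header_Authorization": f"Splunk {hec_token}",
    "insecure-skip-verify": "false",
}, quote_via=quote)
destination_conf = f"splunk://{host}.splunkcloud.com:8088/services/collector/raw?{params}"
print(destination_conf)
```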
            <p>Shortly after, you should start receiving logs to your Splunk HEC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/45iJNwBrNbf0ptNsRdc4N/7a2464708c75292b4f704a89a20220f2/image4-27.png" />
            
            </figure><p>Read more about enabling Logpush on the Cloudflare Developers website: <a href="https://developers.cloudflare.com/logs/reference/logpush-api-configuration/examples/example-logpush-curl/">https://developers.cloudflare.com/logs/reference/logpush-api-configuration/examples/example-logpush-curl/</a></p>
    <div>
      <h2>Reduce costs with R2 storage</h2>
      <a href="#reduce-costs-with-r2-storage">
        
      </a>
    </div>
    <p>Depending on the amount of logs that you read and write, the cost of third party cloud storage can skyrocket — forcing you to decide between managing a tight budget and being able to properly investigate networking and security issues. However, we believe that you shouldn’t have to make those trade-offs. With <a href="/logs-r2/">R2’s low costs</a>, we’re making this decision easier for our customers. Instead of feeding logs to a third party, you can reap the cost benefits of <a href="/logs-r2/">storing them in R2</a>.</p><p>To learn more about the <a href="https://www.cloudflare.com/developer-platform/r2/">R2 features and pricing</a>, check out the <a href="/r2-open-beta/">full blog post</a>. To enable R2, contact your account team.</p>
    <div>
      <h2>Cloudflare logs for maximum visibility</h2>
      <a href="#cloudflare-logs-for-maximum-visibility">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/plans/enterprise/">Cloudflare Enterprise</a> customers have access to detailed logs of the metadata generated by our products. These logs are helpful for troubleshooting, identifying network and configuration adjustments, and generating reports, especially when combined with logs from other sources, such as your servers, firewalls, routers, and other appliances.</p><p>Network Analytics Logs joins Cloudflare’s family of products on Logpush: <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/dns_logs/">DNS logs</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/firewall_events/">Firewall events</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/">HTTP requests</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/nel_reports/">NEL reports</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/spectrum_events/">Spectrum events</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/">Audit logs</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/">Gateway DNS</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/">Gateway HTTP</a>, and <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/">Gateway Network</a>.</p><p>Not using Cloudflare yet? <a href="https://dash.cloudflare.com/sign-up">Start now</a> with our Free and <a href="https://www.cloudflare.com/plans/pro/">Pro plans</a> to protect your websites against DDoS attacks, or <a href="https://www.cloudflare.com/magic-transit/">contact us</a> for comprehensive <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> and <a href="https://www.cloudflare.com/learning/cloud/what-is-a-cloud-firewall/">firewall-as-a-service</a> for your entire network.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[SIEM]]></category>
            <guid isPermaLink="false">7J0cgdiD9dX3Xb9q1OaN3f</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
            <dc:creator>Kyle Bowman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Logs on R2: slash your logging costs]]></title>
            <link>https://blog.cloudflare.com/logs-r2/</link>
            <pubDate>Wed, 11 May 2022 12:59:24 GMT</pubDate>
            <description><![CDATA[ You shouldn’t have to make trade-offs between keeping logs that you need and managing tight budgets. R2’s low costs makes this decision easier and now you can use Logpush to store logs on R2. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Hot on the heels of the <a href="/r2-open-beta">R2 open beta announcement</a>, we’re excited that Cloudflare enterprise customers can now use Logpush to store logs on R2!</p><p>Raw logs from our products are used by our customers for debugging performance issues, to investigate security incidents, to keep up security standards for compliance and much more. You shouldn’t have to make tradeoffs between keeping logs that you need and managing tight budgets. With <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2’s low costs</a>, we’re making this decision easier for our customers!</p>
    <div>
      <h3>Getting into the numbers</h3>
      <a href="#getting-into-the-numbers">
        
      </a>
    </div>
    <p>Cloudflare helps customers at different levels of scale — from a few requests per day, up to a million requests per second. Because of this, the cost of log storage also varies widely. For customers with higher-traffic websites, log storage costs can grow large, quickly.</p><p>As an example, imagine a website that gets 100,000 requests per second. This site would generate about 9.2 TB of HTTP request logs per day, or 850 GB/day after gzip compression. Over a month, you’ll be storing about 26 TB (compressed) of HTTP logs.</p><p>For a typical use case, imagine that you write and read the data exactly once – for example, you might write the data to <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> before ingesting it into an alerting system. <a href="https://r2-calculator.cloudflare.com/">Compare the costs of R2 and S3</a> (note that this excludes costs per operation to read/write data).</p><table><tr><td><p><b>Provider</b></p></td><td><p><b>Storage price</b></p></td><td><p><b>Data transfer price</b></p></td><td><p><b>Total cost assuming data is read once</b></p></td></tr><tr><td><p>R2</p></td><td><p>$0.015/GB</p></td><td><p>$0</p></td><td><p>$390/month</p></td></tr><tr><td><p>S3 (Standard, US East)</p></td><td><p><a href="http://web.archive.org/web/20220531135033/https://aws.amazon.com/s3/pricing/">$0.023/GB</a></p></td><td><p><a href="http://web.archive.org/web/20220531135033/https://aws.amazon.com/s3/pricing/">$0.09/GB</a> for first 10 TB; then <a href="http://web.archive.org/web/20220531135033/https://aws.amazon.com/s3/pricing/">$0.085/GB</a></p></td><td><p>$2,858/month</p></td></tr></table><p>In this example, R2 leads to 86% savings! It’s worth noting that querying logs is where another hefty price tag comes in because Amazon Athena charges based on the amount of data scanned. 
If your team is looking back through historical data, each query can be hundreds of dollars.</p><p>Many of our customers have tens to hundreds of domains behind Cloudflare and the majority of our Enterprise customers also use multiple Cloudflare products. Imagine how costs will scale if you need to store HTTP, WAF and Spectrum logs for all of your Internet properties behind Cloudflare.</p><p>For SaaS customers that are building the next big thing on Cloudflare, logs are important to get visibility into customer usage and performance. Your customer’s developers may also want access to raw logs to understand errors during development and to troubleshoot production issues. Costs for storing logs multiply and add up quickly!</p>
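<p>The totals in the table can be reproduced from the listed unit prices. A quick arithmetic check for 26 TB of compressed logs stored and read once (decimal units, per-operation fees excluded as noted):</p>

```python
# Reproduce the table's monthly totals for ~26 TB of compressed logs,
# stored and read once, using the unit prices listed above.
gb_stored = 26 * 1000  # 26 TB in decimal GB, as provider pricing uses

r2_total = gb_stored * 0.015          # storage only; R2 charges no egress
s3_storage = gb_stored * 0.023
# S3 egress: first 10 TB at $0.09/GB, the remaining 16 TB at $0.085/GB
s3_egress = 10_000 * 0.09 + (gb_stored - 10_000) * 0.085
s3_total = s3_storage + s3_egress

savings = 1 - r2_total / s3_total
print(f"R2 ${r2_total:.0f}/mo vs S3 ${s3_total:.0f}/mo -> ~{savings:.0%} savings")
```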
    <div>
      <h3>The flip side: log retrieval</h3>
      <a href="#the-flip-side-log-retrieval">
        
      </a>
    </div>
    <p>When designing products, one of Cloudflare’s core principles is ease of use. We take on the complexity, so you don’t have to. Storing logs is only half the battle; you also need to be able to access relevant logs when you need them – in the heat of an incident or when doing an in-depth analysis.</p><p>Our product, <a href="https://developers.cloudflare.com/logs/logpull/">Logpull</a>, offers seven days of <a href="https://www.cloudflare.com/learning/performance/log-retention-best-practices/">log retention</a> and an easy-to-use API for access. Our customers love that Logpull doesn’t need any setup with third parties since it's completely managed by Cloudflare. However, Logpull is limited in the retention of logs, the type of logs that we store (only HTTP request logs), and the amount of data that can be queried at one time.</p><p>We’re building tools for log retrieval that make it super easy to get your data out of R2 from any of our datasets. Similar to Logpull, we’ll start by supporting lookups by time period and RayID. From there, we’ll tackle more complex functions like returning logs within time X and Y that have 500 errors or where WAF action = <code>block</code>.</p><p>We’re looking for customers to join a closed beta for our Log Retrieval API. If you’re interested in testing it out, giving feedback, and ultimately helping us shape the product, sign up <a href="https://docs.google.com/forms/d/e/1FAIpQLSeIzZk_giT5KFLL7PyUQofZKMLMp9BIo0ObCbxqKg1vlD0dlw/viewform?usp=sf_link">here</a>.</p>
    <div>
      <h3>Logs on R2: How to get started</h3>
      <a href="#logs-on-r2-how-to-get-started">
        
      </a>
    </div>
    <p>Enterprise customers first need to get R2 added to their contract. Reach out to your account team if this is something you’re interested in! Once enabled, create an R2 bucket for your logs and follow the Logpush setup flow to create your job.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nzN2tZ2qIJv23lDiIrlbU/2a0a51af78eb5ab678e36378b016b09d/pasted-image-0.png" />
            
            </figure><p>It’s that simple! If you have questions, our <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/r2/">Logpush to R2</a> developer docs go into more detail.</p>
    <div>
      <h3>More to come</h3>
      <a href="#more-to-come">
        
      </a>
    </div>
    <p>We’re continuing to build out more advanced Logpush features with a focus on customization. Here’s a preview of what’s next on the roadmap:</p><ul><li><p>New datasets: Network Analytics Logs, Workers Trace Events</p></li><li><p>Log filtering</p></li><li><p>Custom log formatting</p></li></ul><p>We also have exciting plans to build out log analysis and forensics capabilities on top of R2. We want to make log storage tightly coupled to the Cloudflare dash, so you can see high level analytics and drill down into individual log lines all in one view. Stay tuned to the blog for more!</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Beta]]></category>
            <guid isPermaLink="false">22cFwEOvRYecsonDb4YVCQ</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
        </item>
        <item>
            <title><![CDATA[Get full observability into your Cloudflare logs with New Relic]]></title>
            <link>https://blog.cloudflare.com/announcing-the-new-relic-direct-log-integration/</link>
            <pubDate>Mon, 14 Mar 2022 12:59:17 GMT</pubDate>
            <description><![CDATA[ Correlating Cloudflare logs across your stack in New Relic One is powerful for monitoring and debugging in order to keep services safe and reliable. We’re excited to have partnered with New Relic to create a direct integration that provides this visibility ]]></description>
            <content:encoded><![CDATA[ <p>Building a great customer experience is at the heart of any business. Building resilient products is half the battle — teams also need observability into their applications and services that are running across their stack.</p><p>Cloudflare provides analytics and logs for our products in order to give our customers visibility so they can extract insights. Many of our customers use Cloudflare along with other applications and network services and want to be able to correlate data across all of their systems.</p><p>Understanding normal traffic patterns and the causes of latency and errors helps improve performance and, ultimately, the customer experience. For example, for websites behind Cloudflare, analyzing application logs and origin server logs along with Cloudflare’s HTTP request logs gives our customers end-to-end visibility into the journey of a request.</p><p>We’re excited to have partnered with New Relic to create a direct integration that provides this visibility. The direct integration with our logging product, Logpush, means customers no longer need to pay for middleware to get their Cloudflare data into New Relic. The result is faster log delivery and lower costs for our mutual customers!</p><p>We’ve invited the New Relic team to dig into how New Relic One can be used to provide insights into Cloudflare.</p>
    <div>
      <h3>New Relic Log Management</h3>
      <a href="#new-relic-log-management">
        
      </a>
    </div>
    <p>New Relic provides an open, extensible, cloud-based observability platform that gives visibility into your entire stack. Logs, metrics, events, and traces are automatically correlated to help our customers improve user experience, accelerate time to market, and reduce MTTR.</p><p>Deploying <a href="https://newrelic.com/products/log-management">log management</a> in context and at scale has never been faster, easier, or more attainable. With New Relic One, you can collect, search, and correlate logs and other telemetry data from applications, infrastructure, network devices, and more for faster troubleshooting and investigation.</p><p>New Relic correlates events from your applications, infrastructure, serverless environments, along with mobile errors, traces and spans to your logs — so you find exactly what you need with less toil. All your logs are only a click away, so there’s no need to dig through logs in a separate tool to manually correlate them with errors and traces.</p><p>See how engineers have used logs in New Relic to better serve their customers in the short video below.</p>
    <div>
      <h3>A quickstart for Cloudflare Logpush and New Relic</h3>
      <a href="#a-quickstart-for-cloudflare-logpush-and-new-relic">
        
      </a>
    </div>
    <p>To help you get the most out of the new Logpush integration with New Relic, we’ve created the <a href="https://newrelic.com/instant-observability/cloudflare-network-logs/fc2bb0ac-6622-43c6-8c1f-6a4c26ab5434/?utm_source=external_partners&amp;utm_medium=referral&amp;utm_campaign=global-ever-green-io-partner">Cloudflare Logpush quickstart for New Relic</a>. The Cloudflare quickstart will enable you to monitor and analyze web traffic metrics on a single pre-built dashboard, integrating with New Relic’s database to provide an at-a-glance overview of the most important logs and metrics from your websites and applications.</p><p>Getting started is simple:</p><ul><li><p>First, ensure that you have enabled pushing logs directly into New Relic by following the documentation <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/new-relic/">“Enable Logpush to New Relic”</a>.</p></li><li><p>You’ll also need a New Relic account. If you don’t have one yet, get a free-forever account <a href="https://newrelic.com/signup?utm_source=external_partners&amp;utm_medium=content&amp;utm_campaign=global-fy22-q4-io-partner-cloudflare">here</a>.</p></li><li><p>Next, visit the <a href="https://newrelic.com/instant-observability/cloudflare-network-logs/fc2bb0ac-6622-43c6-8c1f-6a4c26ab5434/?utm_source=external_partners&amp;utm_medium=referral&amp;utm_campaign=global-ever-green-io-partner">Cloudflare quickstart in New Relic</a>, click “Install quickstart”, and follow the guided click-through.</p></li></ul><p>For full instructions to set up the integration and quickstart, read the <a href="http://newrelic.com/blog/how-to-relic/cloudflare-log-integration">New Relic blog post</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/686cjyzad7TdJGQzRMobPY/2ddb0260e727059f8c327f7fa7d77f57/image1-16.png" />
            
            </figure><p>As a result, you’ll get a rich ready-made dashboard with key metrics about your Cloudflare logs!</p><p>Correlating Cloudflare logs across your stack in New Relic One is powerful for monitoring and debugging in order to keep services safe and reliable. Cloudflare customers get access to logs as part of the Enterprise account; if you aren’t using Cloudflare Enterprise, <a href="https://www.cloudflare.com/plans/enterprise/contact/">contact us</a>. If you’re not already a New Relic user, <a href="https://newrelic.com/signup">sign up for New Relic</a> to get a free account which includes this new experience and all of our product capabilities.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Guest Post]]></category>
            <guid isPermaLink="false">2PBumqKZPAc7RpJjXlMoZa</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Mike Neville-O'Neill (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Leverage IBM QRadar SIEM to get insights from Cloudflare logs]]></title>
            <link>https://blog.cloudflare.com/announcing-the-ibm-qradar-direct-log-integration/</link>
            <pubDate>Mon, 14 Mar 2022 12:59:13 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce that Cloudflare customers are now able to push their logs directly to QRadar. This direct integration leads to cost savings and faster log delivery for Cloudflare and QRadar SIEM customers ]]></description>
            <content:encoded><![CDATA[ <p>It’s just gone midnight, and you’ve been notified that a malicious IP is hitting your servers. You need to triage the situation: find the who, what, where, when, and why as fast and in as much detail as possible.</p><p>Based on what you find out, your next steps could fall anywhere from classifying the alert as a false positive to escalating the situation and alerting on-call staff from around your organization with a middle-of-the-night wake up.</p><p>Anyone who’s been through a similar situation knows that the security tools you have on hand can make it infinitely easier. It’s invaluable to have one platform that provides complete visibility of all the endpoints, systems and operations that are running at your company.</p><p>Cloudflare protects customers’ applications through application services: DNS, CDN and WAF to name a few. We also have products that protect corporate applications, like our <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> offerings Access and Gateway. Each of these products generates logs that provide customers with visibility into what’s happening in their environments. Many of our customers use Cloudflare’s services along with other network or application services, such as endpoint management, containerized systems and their own servers.</p><p>We’re excited to announce that Cloudflare customers are now able to push their logs directly to IBM Security QRadar SIEM. This direct integration leads to cost savings and faster log delivery for Cloudflare and QRadar SIEM customers because there is no intermediary cloud storage required.</p><p>Cloudflare has invited our partner from the IBM QRadar SIEM team to speak to the capabilities this unlocks for our mutual customers.</p>
    <div>
      <h3>IBM QRadar SIEM</h3>
      <a href="#ibm-qradar-siem">
        
      </a>
    </div>
    <p>QRadar SIEM provides security teams with centralized visibility and insights across users, endpoints, clouds, applications, and networks – helping you detect, investigate, and respond to threats enterprise wide. QRadar SIEM helps security teams work quickly and efficiently by turning thousands to millions of events into a manageable number of prioritized alerts and accelerating investigations with automated, AI-driven enrichment and root cause analysis. With QRadar SIEM, increase the productivity of your team, address critical use cases, and mature your security operation.</p><p>Cloudflare’s reverse proxy and enterprise security products are a key part of customers’ environments. <a href="https://www.cloudflare.com/learning/security/what-is-siem/">Security analysts</a> can gain visibility into logs from these products along with data from tools that span their network to build out detections and response workflows.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/U23IR6bdNFpsSfnjbBRWg/7d63da63229f0c4889eba72b4d21bc9d/image1-15.png" />
            
            </figure><p>IBM and Cloudflare have partnered for years to provide a single pane of glass view for our customers. This new enhanced integration means that QRadar SIEM customers can ingest Cloudflare logs directly from Cloudflare’s Logpush product. QRadar SIEM also continues to support customers who use the existing integration via S3 storage.</p><p>For more information about how to use this new integration, refer to the <a href="https://www.ibm.com/docs/en/dsm?topic=configuration-cloudflare-logs">Cloudflare Logs DSM guide</a>. Also, check out the blog post on the <a href="https://community.ibm.com/community/user/security/blogs/gaurav-sharma/2022/03/07/beyondthedsmguide-qradar-cloudflare-integration">QRadar Community blog</a> for more details!</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Guest Post]]></category>
            <guid isPermaLink="false">6GxLJgfX1YTdqaAlUPMHoi</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Gaurav Gyan Sharma (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Slicing and Dicing Instant Logs: Real-time Insights on the Command Line]]></title>
            <link>https://blog.cloudflare.com/instant-logs-on-the-command-line/</link>
            <pubDate>Mon, 07 Feb 2022 17:04:28 GMT</pubDate>
            <description><![CDATA[ Today, I am going to introduce you to Instant Logs in your terminal ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5x3aLlINzMShJK4anPOXaS/17359d254b8f621e69e874dbcd19f1df/image3-10-2.png" />
            
            </figure><p>During Speed Week 2021 we announced a new offering for Enterprise customers, <a href="/how-we-built-instant-logs/">Instant Logs</a>. Since then, the team has not slowed down and has been working on new ways to enable our customers to consume their logs and gain powerful insights into their HTTP traffic in real time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vhooHoc2Xv3yigrIOQBkW/362e50f9e8c80818a2888e20ffbdd8c4/image1-6.png" />
            
            </figure><p>We recognize that UIs are useful, but as developers we sometimes need a more powerful alternative. Today, I am going to introduce you to Instant Logs in your terminal! In order to get started, we need to install a few open-source tools to help us:</p><ul><li><p><a href="https://github.com/vi/websocat">Websocat</a> - to connect to WebSockets.</p></li><li><p><a href="https://github.com/rcoh/angle-grinder">Angle Grinder</a> - a utility to slice-and-dice a stream of logs on the command line.</p></li></ul>
    <div>
      <h2>Creating an Instant Logs Session</h2>
      <a href="#creating-an-instant-logs-session">
        
      </a>
    </div>
    <p>For Enterprise zones with access to Instant Logs, you can create a new session by sending a POST request to our jobs endpoint. You will need:</p><ul><li><p>Your Zone Tag / ID</p></li><li><p>An Authentication Key or an API Token with at least the Zone Logs Read permission</p></li></ul>
            <pre><code>curl -X POST 'https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logpush/edge/jobs' \
-H 'X-Auth-Key: &lt;KEY&gt;' \
-H 'X-Auth-Email: &lt;EMAIL&gt;' \
-H 'Content-Type: application/json' \
--data-raw '{
    "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestPath,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
    "sample": 1,
    "filter": "",
    "kind": "instant-logs"
}'</code></pre>
            
    <div>
      <h3>Response</h3>
      <a href="#response">
        
      </a>
    </div>
    <p>The response will include a new field called <code>destination_conf</code>. The value of this field is your unique WebSocket address that will receive messages directly from our network!</p>
            <pre><code>{
    "errors": [],
    "messages": [],
    "result": {
        "id": 401,
        "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestPath,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
        "sample": 1,
        "filter": "",
        "destination_conf": "wss://logs.cloudflare.com/instant-logs/ws/sessions/949f9eb846f06d8f8b7c91b186a349d2",
        "kind": "instant-logs"
    },
    "success": true
}</code></pre>
            
    <div>
      <h2>Connect to WebSocket</h2>
      <a href="#connect-to-websocket">
        
      </a>
    </div>
    <p>Using a CLI utility like <a href="https://github.com/vi/websocat">Websocat</a>, you can connect to the WebSocket and start immediately receiving logs of line-delimited JSON.</p>
            <pre><code>websocat wss://logs.cloudflare.com/instant-logs/ws/sessions/949f9eb846f06d8f8b7c91b186a349d2
{"ClientRequestHost":"example.com","ClientRequestMethod":"GET","ClientRequestPath":"/","EdgeEndTimestamp":"2022-01-25T17:23:05Z","EdgeResponseBytes":363,"EdgeResponseStatus":200,"EdgeStartTimestamp":"2022-01-25T17:23:05Z","RayID":"6d332ff74fa45fbe","sampleInterval":1}
{"ClientRequestHost":"example.com","ClientRequestMethod":"GET","ClientRequestPath":"/","EdgeEndTimestamp":"2022-01-25T17:23:06Z","EdgeResponseBytes":363,"EdgeResponseStatus":200,"EdgeStartTimestamp":"2022-01-25T17:23:06Z","RayID":"6d332fffe9c4fc81","sampleInterval":1}</code></pre>
            
    <div>
      <h2>The Scenario</h2>
      <a href="#the-scenario">
        
      </a>
    </div>
    <p>Now that you are able to create a new Instant Logs session, let’s give it a purpose! Say you recently deployed a new firewall rule blocking users from downloading a specific asset that is only available to users in Canada. For the purpose of this example, the asset is available at the path <code>/canadians-only</code>.</p>
    <div>
      <h3>Specifying Fields</h3>
      <a href="#specifying-fields">
        
      </a>
    </div>
    <p>In order to see what firewall actions (if any) were taken, we need to make sure that we include the <code>ClientCountry</code>, <code>FirewallMatchesActions</code> and <code>FirewallMatchesRuleIDs</code> fields when creating our session.</p><p>Additionally, any field available in our HTTP request dataset is usable by Instant Logs. You can view the entire list of HTTP request fields <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests">on our developer docs</a>.</p>
    <div>
      <h3>Choosing a Sample Rate</h3>
      <a href="#choosing-a-sample-rate">
        
      </a>
    </div>
    <p>Instant Logs users can also choose a sample rate. The <code>sample</code> parameter is the inverse probability of selecting a log. This means that <code>"sample": 1</code> is 100% of records, <code>"sample": 10</code> is 10%, and so on.</p><p>Going back to our example of validating that our newly deployed firewall rule is working as expected, in this case we are choosing not to sample the data by setting <code>"sample": 1</code>.</p><p>Please note that Instant Logs has a maximum supported data rate. For high volume domains, we sample server side, as indicated by the <code>sampleInterval</code> field returned in the logs. For example, <code>"sampleInterval": 10</code> indicates that this log message is just one out of 10 logs received.</p>
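<p>Because of server-side sampling, counting raw log lines undercounts traffic. To estimate true request volume, weight each line by its <code>sampleInterval</code>; a minimal sketch, assuming line-delimited JSON as delivered over the WebSocket:</p>

```python
import json

def estimated_requests(ndjson_lines):
    """Each log line stands in for `sampleInterval` real requests,
    so the estimated total is the sum of that field."""
    return sum(json.loads(line).get("sampleInterval", 1) for line in ndjson_lines)

# Two lines: one unsampled, one standing in for 10 requests.
sample = [
    '{"RayID": "6d332ff74fa45fbe", "sampleInterval": 1}',
    '{"RayID": "6d332fffe9c4fc81", "sampleInterval": 10}',
]
print(estimated_requests(sample))  # -> 11
```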
    <div>
      <h3>Defining the Filters</h3>
      <a href="#defining-the-filters">
        
      </a>
    </div>
    <p>Since we are only interested in requests with the path of <code>/canadians-only</code>, we can use filters to remove any logs that do not match that specific path. The filters consist of three parts: <code>key</code>, <code>operator</code> and <code>value</code>. The <code>key</code> can be any field specified in the <code>"fields": ""</code> list when creating the session. The complete list of supported operators can be found on our <a href="https://developers.cloudflare.com/logs/instant-logs">Instant Logs documentation</a>.</p><p>To only get the logs of requests destined to <code>/canadians-only</code>, we can specify the following filter:</p>
            <pre><code>{
  "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"eq\",\"value\":\"/canadians-only\"}]}}"
}</code></pre>
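<p>Because the filter is itself JSON serialized into the job's JSON body, the inner quotes must be escaped. Rather than escaping by hand, you can build the string programmatically; a minimal sketch:</p>

```python
import json

# Build the filter as a plain structure, then serialize it:
# json.dumps produces the inner filter string, and serializing the
# payload again yields the escaped form shown above.
where = {"where": {"and": [
    {"key": "ClientRequestPath", "operator": "eq", "value": "/canadians-only"},
]}}
payload = {"filter": json.dumps(where, separators=(",", ":"))}

print(payload["filter"])
# -> {"where":{"and":[{"key":"ClientRequestPath","operator":"eq","value":"/canadians-only"}]}}
```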
            
    <div>
      <h2>Creating an Instant Logs Session: Canadians Only</h2>
      <a href="#creating-an-instant-logs-session-canadians-only">
        
      </a>
    </div>
    <p>Using the information above, we can now craft the request for our custom Instant Logs session.</p>
            <pre><code>curl -X POST 'https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logpush/edge/jobs' \
-H 'X-Auth-Key: &lt;KEY&gt;' \
-H 'X-Auth-Email: &lt;EMAIL&gt;' \
-H 'Content-Type: application/json' \
--data-raw '{
  "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestPath,ClientCountry,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID,FirewallMatchesActions,FirewallMatchesRuleIDs",
  "sample": 1,
  "kind": "instant-logs",
  "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"eq\",\"value\":\"/canadians-only\"}]}}"
}'</code></pre>
            
    <div>
      <h3>Angle Grinder</h3>
      <a href="#angle-grinder">
        
      </a>
    </div>
    <p>Now that we have a connection to our WebSocket and are receiving logs that match the request path <code>/canadians-only</code>, we can start slicing and dicing the logs to see that the rule is working as expected. A handy tool to use for this is <a href="https://github.com/rcoh/angle-grinder">Angle Grinder</a>. Angle Grinder lets you apply filtering, transformations and aggregations on stdin with first-class JSON support. For example, to estimate the number of visitors from each country, we can sum <code>sampleInterval</code> by the <code>ClientCountry</code> field.</p>
            <pre><code>websocat wss://logs.cloudflare.com/instant-logs/ws/sessions/949f9eb846f06d8f8b7c91b186a349d2 | agrind '* | json | sum(sampleInterval) by ClientCountry'
ClientCountry    	_sum
---------------------------------
pt               	4
fr               	3
us               	3</code></pre>
            <p>Using Angle Grinder, we can create a query to count the firewall actions by each country.</p>
            <pre><code>websocat wss://logs.cloudflare.com/instant-logs/ws/sessions/949f9eb846f06d8f8b7c91b186a349d2 |  agrind '* | json | sum(sampleInterval) by ClientCountry,FirewallMatchesActions'
ClientCountry        FirewallMatchesActions        _sum
---------------------------------------------------------------
ca                   []                            5
us                   [block]                       1</code></pre>
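<p>If you prefer a script to a shell pipeline, the same aggregation is a few lines of Python over the line-delimited JSON stream. A minimal sketch using the fields from this session (feed it the lines received over the WebSocket):</p>

```python
import json
from collections import Counter

def actions_by_country(ndjson_lines):
    """Mirror the agrind query: sum sampleInterval by country and firewall action."""
    totals = Counter()
    for line in ndjson_lines:
        entry = json.loads(line)
        key = (entry["ClientCountry"], tuple(entry.get("FirewallMatchesActions", [])))
        totals[key] += entry.get("sampleInterval", 1)
    return totals

# Sample lines shaped like this session's output.
sample = [
    '{"ClientCountry": "ca", "FirewallMatchesActions": [], "sampleInterval": 1}',
    '{"ClientCountry": "ca", "FirewallMatchesActions": [], "sampleInterval": 1}',
    '{"ClientCountry": "us", "FirewallMatchesActions": ["block"], "sampleInterval": 1}',
]
for (country, actions), total in sorted(actions_by_country(sample).items()):
    print(country, list(actions), total)
```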
            <p>Looks like our newly deployed firewall rule is working :)</p><p>Happy Logging!</p> ]]></content:encoded>
            <category><![CDATA[CLI]]></category>
            <category><![CDATA[Logs]]></category>
            <guid isPermaLink="false">5upQVQrIIaXHco5X9VKRGb</guid>
            <dc:creator>Cole MacKenzie</dc:creator>
        </item>
    </channel>
</rss>