
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 06 Apr 2026 15:08:01 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How Workers powers our internal maintenance scheduling pipeline]]></title>
            <link>https://blog.cloudflare.com/building-our-maintenance-scheduler-on-workers/</link>
            <pubDate>Mon, 22 Dec 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Physical data center maintenance is risky on a global network. We built a maintenance scheduler on Workers to safely plan disruptive operations, while solving scaling challenges by viewing the state of our infrastructure through a graph interface on top of multiple data sources and metrics pipelines. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has data centers in over <a href="https://www.cloudflare.com/network/"><u>330 cities globally</u></a>, so you might think we could easily disrupt a few at any time without users noticing when we plan data center operations. However, the reality is that <a href="https://developers.cloudflare.com/support/disruptive-maintenance/"><u>disruptive maintenance</u></a> requires careful planning, and as Cloudflare grew, managing these complexities through manual coordination between our infrastructure and network operations specialists became nearly impossible.</p><p>It is no longer feasible for a human to track every overlapping maintenance request or account for every customer-specific routing rule in real time. We reached a point where manual oversight alone couldn't guarantee that a routine hardware update in one part of the world wouldn't inadvertently conflict with a critical path in another.</p><p>We realized we needed a centralized, automated "brain" to act as a safeguard — a system that could see the entire state of our network at once. By building this scheduler on <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a>, we created a way to programmatically enforce safety constraints, ensuring that no matter how fast we move, we never sacrifice the reliability of the services on which our customers depend.</p><p>In this blog post, we’ll explain how we built it, and share the results we’re seeing now.</p>
    <div>
      <h2>Building a system to de-risk critical maintenance operations</h2>
      <a href="#building-a-system-to-de-risk-critical-maintenance-operations">
        
      </a>
    </div>
    <p>Picture an edge router that acts as one of a small, redundant group of gateways that collectively connect the public Internet to the many Cloudflare data centers operating in a metro area. In a populated city, we need to ensure that the multiple data centers sitting behind this small cluster of routers do not get cut off because the routers were all taken offline simultaneously. </p><p>Another maintenance challenge comes from our Zero Trust product, Dedicated CDN Egress IPs, which allows customers to choose specific data centers from which their user traffic will exit Cloudflare and be sent to their geographically close origin servers for low latency. (For the purpose of brevity in this post, we'll refer to the Dedicated CDN Egress IPs product as "Aegis," which was its former name.) If all the data centers a customer chose are offline at once, they would see higher latency and possibly 5xx errors, which we must avoid. </p><p>Our maintenance scheduler solves problems like these. We can make sure that we always have at least one edge router active in a certain area. And when scheduling maintenance, we can see if the combination of multiple scheduled events would cause all the data centers for a customer’s Aegis pools to be offline at the same time.</p><p>Before we created the scheduler, these simultaneous disruptive events could cause downtime for customers. Now, our scheduler notifies internal operators of potential conflicts, allowing us to propose a new time to avoid overlapping with other related data center maintenance events.</p><p>We define these operational scenarios, such as edge router availability and customer rules, as maintenance constraints which allow us to plan more predictable and safe maintenance.</p>
    <div>
      <h2>Maintenance constraints</h2>
      <a href="#maintenance-constraints">
        
      </a>
    </div>
    <p>Every constraint starts with a set of proposed maintenance items, such as a network router or list of servers. We then find all the maintenance events in the calendar that overlap with the proposed maintenance time window.</p>
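            <p>As a rough sketch of that first step (the field names here are assumptions, not our internal schema), the overlap check between a proposed window and existing calendar events boils down to:</p>
            <pre><code>interface MaintenanceWindow {
  start: Date
  end: Date
}

// Two windows overlap when each one starts before the other ends.
function overlaps(a: MaintenanceWindow, b: MaintenanceWindow): boolean {
  return a.start &lt; b.end &amp;&amp; b.start &lt; a.end
}

function findOverlapping(proposed: MaintenanceWindow, calendar: MaintenanceWindow[]): MaintenanceWindow[] {
  return calendar.filter((event) =&gt; overlaps(proposed, event))
}</code></pre>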
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vHCauxOGRXzhrO6DNDr2S/cf38b93ac9b812e5e064f800e537e549/image4.png" />
          </figure><p>Next, we aggregate product APIs, such as a list of Aegis customer IP pools. Aegis returns a set of IP ranges where a customer requested egress out of specific data center IDs, shown below.</p>
            <pre><code>[
    {
      "cidr": "104.28.0.32/32",
      "pool_name": "customer-9876",
      "port_slots": [
        {
          "dc_id": 21,
          "other_colos_enabled": true,
        },
        {
          "dc_id": 45,
          "other_colos_enabled": true,
        }
      ],
      "modified_at": "2023-10-22T13:32:47.213767Z"
    },
]</code></pre>
            <p>In this scenario, data center 21 and data center 45 relate to each other because we need at least one data center online for the Aegis customer 9876 to receive egress traffic from Cloudflare. If we tried to take data centers 21 and 45 down simultaneously, our coordinator would alert us that there would be unintended consequences for that customer workload.</p><p>We initially had a naive solution to load all data into a single Worker. This included all server relationships, product configurations, and metrics for product and infrastructure health to compute constraints. Even in our proof of concept phase, we ran into problems with “out of memory” errors.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1v4Q6bXsZLBXLbrbRrcW3o/00d291ef3db459e99ae9b620965b6bc7/image2.png" />
          </figure><p>We needed to be more cognizant of Workers’ <a href="https://developers.cloudflare.com/workers/platform/limits/"><u>platform limits</u></a>. This required loading only as much data as was absolutely necessary to process the constraint’s business logic. If a maintenance request for a router in Frankfurt, Germany, comes in, we almost certainly do not care what is happening in Australia since there is no overlap across regions. Thus, we should only load data for neighboring data centers in Germany. We needed a more efficient way to process relationships in our dataset.</p>
    <div>
      <h2>Graph processing on Workers</h2>
      <a href="#graph-processing-on-workers">
        
      </a>
    </div>
    <p>As we looked at our constraints, a pattern emerged where each constraint boiled down to two concepts: objects and associations. In graph theory, these components are known as vertices and edges, respectively. An object could be a network router and an association could be the list of Aegis pools in the data center that requires the router to be online. We took inspiration from Facebook’s <a href="https://research.facebook.com/publications/tao-facebooks-distributed-data-store-for-the-social-graph/"><u>TAO</u></a> research paper to establish a graph interface on top of our product and infrastructure data. The API looks like the following:</p>
            <pre><code>type ObjectID = string

interface MainTAOInterface&lt;TObject, TAssoc, TAssocType&gt; {
  object_get(id: ObjectID): Promise&lt;TObject | undefined&gt;

  assoc_get(id1: ObjectID, atype: TAssocType): AsyncIterable&lt;TAssoc&gt;

  assoc_count(id1: ObjectID, atype: TAssocType): Promise&lt;number&gt;
}</code></pre>
            <p>The core insight is that associations are typed. For example, a constraint would call the graph interface to retrieve Aegis product data.</p>
            <pre><code>async function constraint(c: AppContext, aegis: TAOAegisClient, datacenters: string[]): Promise&lt;Record&lt;string, PoolAnalysis&gt;&gt; {
  // For each data center in the proposed maintenance, fetch the Aegis pools it belongs to.
  const datacenterEntries = await Promise.all(
    datacenters.map(async (dcID) =&gt; {
      const iter = aegis.assoc_get(c, dcID, AegisAssocType.DATACENTER_INSIDE_AEGIS_POOL)
      const pools: string[] = []
      for await (const assoc of iter) {
        pools.push(assoc.id2)
      }
      return [dcID, pools] as const
    }),
  )

  const datacenterToPools = new Map&lt;string, string[]&gt;(datacenterEntries)
  const uniquePools = new Set&lt;string&gt;()
  for (const pools of datacenterToPools.values()) {
    for (const pool of pools) uniquePools.add(pool)
  }

  // For each affected pool, count how many data centers it spans in total.
  const poolTotalsEntries = await Promise.all(
    [...uniquePools].map(async (pool) =&gt; {
      const total = await aegis.assoc_count(c, pool, AegisAssocType.AEGIS_POOL_CONTAINS_DATACENTER)
      return [pool, total] as const
    }),
  )

  const poolTotals = new Map&lt;string, number&gt;(poolTotalsEntries)
  const poolAnalysis: Record&lt;string, PoolAnalysis&gt; = {}
  for (const [dcID, pools] of datacenterToPools.entries()) {
    for (const pool of pools) {
      // Accumulate affected data centers per pool rather than overwriting,
      // since several data centers in the maintenance may share a pool.
      const existing = poolAnalysis[pool]
      if (existing) {
        existing.affectedDatacenters.add(dcID)
      } else {
        poolAnalysis[pool] = {
          affectedDatacenters: new Set([dcID]),
          totalDatacenters: poolTotals.get(pool) ?? 0,
        }
      }
    }
  }

  return poolAnalysis
}</code></pre>
            <p>We use two association types in the code above:</p><ol><li><p>DATACENTER_INSIDE_AEGIS_POOL, which retrieves the Aegis customer pools that a data center resides in.</p></li><li><p>AEGIS_POOL_CONTAINS_DATACENTER, which retrieves the data centers an Aegis pool needs to serve traffic.</p></li></ol><p>The associations are inverted indices of one another. The access pattern is exactly the same as before, but now the graph implementation has much more control of how much data it queries. Before, we needed to load all Aegis pools into memory and filter inside constraint business logic. Now, we can directly fetch only the data that matters to the application.</p>
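            <p>To show how a constraint result might be consumed, here is a minimal sketch of the final check; the <code>PoolAnalysis</code> shape is inferred from the code above, and the function name is illustrative:</p>
            <pre><code>interface PoolAnalysis {
  affectedDatacenters: Set&lt;string&gt;
  totalDatacenters: number
}

// A pool is violated when every data center it spans is part of the proposed maintenance.
function findViolatedPools(analysis: Record&lt;string, PoolAnalysis&gt;): string[] {
  return Object.entries(analysis)
    .filter(([, pool]) =&gt; pool.affectedDatacenters.size &gt;= pool.totalDatacenters)
    .map(([poolName]) =&gt; poolName)
}</code></pre>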
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4b68YLIHiOPt5EeyTUTeBt/5f624f0d0912e7dfd0e308a3427d194c/unnamed.png" />
          </figure><p>The interface is powerful because our graph implementation can improve performance behind the scenes without complicating the business logic. This lets us use the scalability of Workers and Cloudflare’s CDN to fetch data from our internal systems very quickly.</p>
    <div>
      <h2>Fetch pipeline</h2>
      <a href="#fetch-pipeline">
        
      </a>
    </div>
    <p>We switched to the new graph implementation, sending more targeted API requests. Response sizes dropped 100x overnight as we moved from a few massive requests to many tiny ones.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71aDOicyippmUbj4ypXKw/73dacdf16ca0ac422efdfec9e86e9dbf/image5.png" />
          </figure><p>While this solves the issue of loading too much into memory, we now have a subrequest problem: instead of a few large HTTP requests, we make an order of magnitude more small requests. Overnight, we started consistently breaching subrequest limits.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36KjfOU8xIuUkwF7QOlNkK/e2275a50ff1bef497cdb201c2d3a6249/image3.png" />
          </figure><p>In order to solve this problem, we built a smart middleware layer between our graph implementation and the <code>fetch</code> API.</p>
            <pre><code>export const fetchPipeline = new FetchPipeline()
  .use(requestDeduplicator())
  .use(lruCacher({
    maxItems: 100,
  }))
  .use(cdnCacher())
  .use(backoffRetryer({
    retries: 3,
    baseMs: 100,
    jitter: true,
  }))
  .handler(terminalFetch);</code></pre>
            <p>If you’re familiar with Go, you may have seen the <a href="https://pkg.go.dev/golang.org/x/sync/singleflight"><u>singleflight</u></a> package before. We took inspiration from this idea: the first middleware component in the fetch pipeline deduplicates in-flight HTTP requests, so they all wait on the same Promise for data instead of producing duplicate requests in the same Worker. Next, we use a lightweight Least Recently Used (LRU) cache to internally cache requests that we have already seen.</p><p>Once both of those are complete, we use Cloudflare’s <code>caches.default.match</code> function to cache all GET requests in the region where the Worker is running. Since we have multiple data sources with different performance characteristics, we choose time to live (TTL) values carefully. For example, real-time data is only cached for 1 minute. Relatively static infrastructure data could be cached for 1–24 hours depending on the type of data. Power management data might be changed manually and infrequently, so we can cache it for longer at the edge.</p><p>In addition to those layers, we add standard retries with exponential backoff and jitter. This helps reduce wasted <code>fetch</code> calls when a downstream resource is temporarily unavailable. By backing off slightly, we increase the chance that the next request succeeds. Conversely, if the Worker sends requests constantly without backoff, it will easily breach the subrequest limit when the origin starts returning 5xx errors.</p><p>Putting it all together, we saw a ~99% cache hit rate. <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cache-hit-ratio/"><u>Cache hit rate</u></a> is the percentage of HTTP requests served from Cloudflare’s fast cache memory (a "hit") versus slower requests to data sources running in our control plane (a "miss"), calculated as (hits / (hits + misses)). A high rate means better HTTP request performance and lower costs because querying data from cache in our Worker is an order of magnitude faster than fetching from an origin server in a different region. After tuning the settings for our in-memory and CDN caches, hit rates increased dramatically. Since much of our workload is real-time, we will never have a 100% hit rate, as we must request fresh data at least once per minute.</p>
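            <p>To make the deduplication idea concrete, here is a minimal sketch of what a singleflight-style middleware could look like on Workers. The middleware signature and names are assumptions for illustration, not our exact internal API:</p>
            <pre><code>type Next = (request: Request) =&gt; Promise&lt;Response&gt;;

export function requestDeduplicator() {
  // URL -&gt; in-flight promise shared by all concurrent callers of the same request.
  const inflight = new Map&lt;string, Promise&lt;Response&gt;&gt;();

  return async (request: Request, next: Next): Promise&lt;Response&gt; =&gt; {
    // Only idempotent GET requests are safe to coalesce.
    if (request.method !== "GET") return next(request);

    const key = request.url;
    const existing = inflight.get(key);
    if (existing) {
      // Another caller already issued this request: wait on the same promise
      // and hand back a copy of the response body.
      return (await existing).clone();
    }

    const promise = next(request).finally(() =&gt; inflight.delete(key));
    inflight.set(key, promise);
    return (await promise).clone();
  };
}</code></pre>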
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1jifI33QpBkQPd7tE5Tapi/186a74b922faac3abe091b79f03d640b/image1.png" />
          </figure><p>We have talked about improving the fetching layer, but not about how we made origin HTTP requests faster. Our maintenance coordinator needs to react in real time to network degradation and machine failures in data centers. We use our distributed <a href="https://blog.cloudflare.com/how-cloudflare-runs-prometheus-at-scale/"><u>Prometheus</u></a> query engine, Thanos, to deliver performant metrics from the edge into the coordinator.</p>
    <div>
      <h2>Thanos in real-time</h2>
      <a href="#thanos-in-real-time">
        
      </a>
    </div>
    <p>To explain how our choice to use the graph processing interface affected our real-time queries, let’s walk through an example. To analyze the health of edge routers, we could send the following query:</p>
            <pre><code>sum by (instance) (network_snmp_interface_admin_status{instance=~"edge.*"})</code></pre>
            <p>Originally, we asked our Thanos service, which stores Prometheus metrics, for a list of every edge router’s current health status and then manually filtered, inside the Worker, for the routers relevant to the maintenance. This is suboptimal for many reasons. For example, Thanos returned multi-MB responses which it needed to decode and encode. The Worker also needed to cache and decode these large HTTP responses only to filter out the majority of the data while processing a specific maintenance request. And since the JavaScript runtime that Workers uses is single-threaded and parsing JSON data is CPU-bound, sending two large HTTP requests means that one is blocked waiting for the other to finish parsing.</p><p>Instead, we simply use the graph to find targeted relationships such as the interface links between edge and spine routers, denoted as <code>EDGE_ROUTER_NETWORK_CONNECTS_TO_SPINE</code>.</p>
            <pre><code>sum by (lldp_name) (network_snmp_interface_admin_status{instance=~"edge01.fra03", lldp_name=~"spine.*"})</code></pre>
            <p>The result is about 1 KB on average instead of multiple MBs, approximately 1000x smaller. This also massively reduces the amount of CPU required inside the Worker because we offload most of the deserialization to Thanos. As we explained before, this means we need to make a higher number of these smaller fetch requests, but load balancers in front of Thanos can spread the requests evenly to increase throughput for this use case.</p><p>Our graph implementation and fetch pipeline successfully tamed the 'thundering herd' of thousands of tiny real-time requests. However, historical analysis presents a different I/O challenge. Instead of fetching small, specific relationships, we need to scan months of data to find conflicting maintenance windows. In the past, Thanos would issue a massive number of random reads to our object store, <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a>. To solve this massive bandwidth penalty without losing performance, we adopted a new approach the Observability team developed internally this year.</p>
    <div>
      <h2>Historical data analysis</h2>
      <a href="#historical-data-analysis">
        
      </a>
    </div>
    <p>There are enough maintenance use cases that we must rely on historical data to tell us if our solution is accurate and will scale with the growth of Cloudflare’s network. We do not want to cause incidents, and we also want to avoid blocking proposed physical maintenance unnecessarily. In order to balance these two priorities, we can use time series data about maintenance events that happened two months or even a year ago to tell us how often a maintenance event is violating one of our constraints, e.g. edge router availability or Aegis. We blogged earlier this year about using Thanos to <a href="https://blog.cloudflare.com/safe-change-at-any-scale/"><u>automatically release and revert software</u></a> to the edge.</p><p>Thanos primarily fans out to Prometheus, but when Prometheus' retention is not enough to answer the query it has to download data from object storage — R2 in our case. Prometheus TSDB blocks were originally designed for local SSDs, relying on random access patterns that become a bottleneck when moved to object storage. When our scheduler needs to analyze months of historical maintenance data to identify conflicting constraints, random reads from object storage incur a massive I/O penalty. To solve this, we implemented a conversion layer that transforms these blocks into <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a> files. Parquet is a columnar format native to big data analytics that organizes data by column rather than row, which — together with rich statistics — allows us to only fetch what we need.</p><p>Furthermore, since we are rewriting TSDB blocks into Parquet files, we can also store the data in a way that allows us to read the data in just a few big sequential chunks.</p>
            <pre><code>sum by (instance) (hmd:release_scopes:enabled{dc_id="45"})</code></pre>
            <p>In the example above we would choose the tuple “(__name__, dc_id)” as a primary sorting key so that metrics with the name “hmd:release_scopes:enabled” and the same value for “dc_id” get sorted close together.</p><p>Our Parquet gateway now issues precise R2 range requests to fetch only the specific columns relevant to the query. This reduces the payload from megabytes to kilobytes. Furthermore, because these file segments are immutable, we can aggressively cache them on the Cloudflare CDN.</p><p>This turns R2 into a low-latency query engine, allowing us to backtest complex maintenance scenarios against long-term trends instantly, avoiding the timeouts and high tail latency we saw with the original TSDB format. The graph below shows a recent load test, where Parquet reached up to 15x the P90 performance compared to the old system for the same query pattern.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lVj6W4W4MMUy6cEsDpk5G/21614b7ac003a86cb5162a2ba75f4c42/image8.png" />
          </figure><p>To get a deeper understanding of how the Parquet implementation works, you can watch this talk at PromCon EU 2025, <a href="https://www.youtube.com/watch?v=wDN2w2xN6bA&amp;list=PLoz-W_CUquUlHOg314_YttjHL0iGTdE3O&amp;index=16"><u>Beyond TSDB: Unlocking Prometheus with Parquet for Modern Scale</u></a>.</p>
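            <p>To make the range-read idea concrete, here is a minimal sketch of fetching a single column chunk from a Parquet file stored in R2 through a Workers R2 binding. The binding name, object key, and byte offsets are illustrative; a reader would normally derive the offsets from the Parquet footer metadata:</p>
            <pre><code>interface Env {
  METRICS_BUCKET: R2Bucket // illustrative binding name
}

// Fetch only the bytes for one column chunk instead of downloading the whole object.
async function readColumnChunk(env: Env, key: string, offset: number, length: number): Promise&lt;ArrayBuffer | null&gt; {
  const object = await env.METRICS_BUCKET.get(key, {
    range: { offset, length },
  })
  return object ? await object.arrayBuffer() : null
}</code></pre>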
    <div>
      <h2>Building for scale</h2>
      <a href="#building-for-scale">
        
      </a>
    </div>
    <p>By leveraging Cloudflare Workers, we moved from a system that ran out of memory to one that intelligently caches data and uses efficient observability tooling to analyze product and infrastructure data in real time. We built a maintenance scheduler that balances network growth with product performance.</p><p>But “balance” is a moving target.</p><p>Every day, we add more hardware around the world, and the logic required to maintain it without disrupting customer traffic gets exponentially harder with more products and types of maintenance operations. We’ve worked through the first set of challenges, but now we’re staring down more subtle, complex ones that only appear at this massive scale.</p><p>We need engineers who aren't afraid of hard problems. Join our <a href="https://www.cloudflare.com/careers/jobs/?department=Infrastructure"><u>Infrastructure team</u></a> and come build with us.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Prometheus]]></category>
            <category><![CDATA[Infrastructure]]></category>
            <guid isPermaLink="false">5pdspiP2m71MeIoVL8wv1i</guid>
            <dc:creator>Kevin Deems</dc:creator>
            <dc:creator>Michael Hoffmann</dc:creator>
        </item>
        <item>
            <title><![CDATA[Scaling with safety: Cloudflare's approach to global service health metrics and software releases]]></title>
            <link>https://blog.cloudflare.com/safe-change-at-any-scale/</link>
            <pubDate>Mon, 05 May 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Learn how Cloudflare tackles the challenge of scaling global service health metrics to safely release new software across our global network. ]]></description>
            <content:encoded><![CDATA[ <p>Has your browsing experience ever been disrupted by this error page? Sometimes Cloudflare returns <a href="https://developers.cloudflare.com/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-500-internal-server-error"><u>"Error 500"</u></a> when our servers cannot respond to your web request. This inability to respond could have several potential causes, including problems caused by a bug in one of the services that make up Cloudflare's software stack.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rIrBE5GP2IJwbn2k6mVTV/fd796ed1ef591fb8bd95395aa8f604d1/1.png" />
          </figure><p>We know that our testing platform will inevitably miss <a href="https://blog.cloudflare.com/pipefail-how-a-missing-shell-option-slowed-cloudflare-down/"><u>some software bugs</u></a>, so we built guardrails to gradually and safely release new code before a feature reaches all users. Health Mediated Deployments (HMD) is Cloudflare’s data-driven solution to automating software updates across our <a href="https://www.cloudflare.com/network/"><u>global network</u></a>. HMD works by querying <a href="https://thanos.io/"><u>Thanos</u></a>, a system for storing and scaling <a href="https://blog.cloudflare.com/how-cloudflare-runs-prometheus-at-scale/"><u>Prometheus</u></a> metrics. Prometheus collects detailed data about the performance of our services, and Thanos makes that data accessible across our distributed network. HMD uses these metrics to determine whether new code should continue to roll out, pause for further evaluation, or be automatically reverted to prevent widespread issues.</p><p>Cloudflare engineers configure signals from their service, such as alerting rules or <a href="https://sre.google/workbook/implementing-slos/"><u>Service Level Objectives (SLOs)</u></a>. For example, the following Service Level Indicator (SLI) checks the rate of HTTP 500 errors over 10 minutes returned from a service in our software stack.</p>
            <pre><code>sum(rate(http_request_count{code="500"}[10m])) / sum(rate(http_request_count[10m]))</code></pre>
            <p>An SLO is a combination of an SLI and an objective threshold. For example, the service returns 500 errors &lt;0.1% of the time.</p><p>If the success rate is unexpectedly decreasing where the new code is running, HMD reverts the change in order to stabilize the system, reacting before humans even know which Cloudflare service was broken. Below, HMD recognizes a degrading signal in an early release stage and reverts the code back to the prior version to limit the blast radius.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2O6gCfhZsoU1QCf3lu0QMl/bb4377cccbf982b607ce3564e4bf9fbd/2.png" />
          </figure><p>
Cloudflare’s network serves millions of requests per second across diverse geographies. How do we know that HMD will react quickly the next time we accidentally release code that contains a bug? HMD performs a testing strategy called <a href="https://en.wikipedia.org/wiki/Backtesting"><u>backtesting</u></a>, outside the release process, which uses historical incident data to test how long it would take to react to degrading signals in a future release.</p><p>We use Thanos to join thousands of small Prometheus deployments into a single unified query layer while keeping our monitoring reliable and cost-efficient. To backfill historical incident metric data that has fallen out of Prometheus’ retention period, we use our <a href="https://www.cloudflare.com/developer-platform/products/r2/">object storage solution</a>, R2.</p><p>Today, we store 4.5 billion distinct time series for a year of retention, which results in roughly 8 petabytes of data in 17 million objects distributed all over the globe.</p>
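            <p>Conceptually, a backtest replays historical SLI samples from a past incident and measures how long it would take for the revert condition to trigger. Here is a minimal sketch of that idea; the shapes, names, and the sustained-breach rule are assumptions for illustration, not HMD’s exact logic:</p>
            <pre><code>interface Sample {
  timestamp: number // unix seconds
  value: number     // SLI value at that time, e.g. an error ratio
}

// Returns seconds from the start of the replay until a revert would fire,
// or null if the signal never breaches the objective for long enough.
function timeToRevert(samples: Sample[], objective: number, sustainedSeconds: number): number | null {
  let breachStart: number | null = null
  for (const s of samples) {
    if (s.value &gt; objective) {
      breachStart = breachStart ?? s.timestamp
      if (s.timestamp - breachStart &gt;= sustainedSeconds) {
        return s.timestamp - samples[0].timestamp
      }
    } else {
      breachStart = null
    }
  }
  return null
}</code></pre>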
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TlfqQIPqS7TVxFB38PztG/65dc562db7af5304562b3fed9ab6486d/3.png" />
          </figure>
    <div>
      <h2>Making it work at scale</h2>
      <a href="#making-it-work-at-scale">
        
      </a>
    </div>
    <p>To give a sense of scale, we can estimate the impact of a batch of backtests:</p><ul><li><p>Each backtest run is made up of multiple SLOs to evaluate a service's health.</p></li><li><p>Each SLO is evaluated using multiple queries containing batches of data centers.</p></li><li><p>Each data center issues anywhere from tens to thousands of requests to R2.</p></li></ul><p>Thus, in aggregate, a batch can translate to hundreds of thousands of <a href="https://prometheus.io/docs/prometheus/latest/querying/basics/"><u>PromQL queries</u></a> and millions of requests to R2. Initially, batch runs would take about 30 hours to complete but through blood, sweat, and tears, we were able to cut this down to 2 hours.</p><p>Let’s review how we made this processing more efficient.</p>
    <div>
      <h3>Recording rules</h3>
      <a href="#recording-rules">
        
      </a>
    </div>
    <p>HMD slices our fleet of machines across multiple dimensions. For the purposes of this post, let’s refer to them as “tier” and “color”. Given a pair of tier and color, we would use the following PromQL expression to find the machines that make up this combination:</p>
            <pre><code>group by (instance, datacenter, tier, color) (
  up{job="node_exporter"}
  * on (datacenter) group_left(tier) datacenter_metadata{tier="tier3"}
  * on (instance) group_left(color) server_metadata{color="green"}
  unless on (instance) (machine_in_maintenance == 1)
  unless on (datacenter) (datacenter_disabled == 1)
)</code></pre>
            <p>Most of these series have a cardinality of approximately the number of machines in our fleet. That’s a substantial amount of data we need to fetch from <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> and transmit home for query evaluation, as well as a significant number of series we need to decode and join together.</p><p>Since this is a fairly common query that is issued in every HMD run, it makes sense to precompute it. In the Prometheus ecosystem, this is commonly done with <a href="https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/"><u>recording rules</u></a>:</p>
            <pre><code>hmd:release_scopes:info{tier="tier3", color="green"}</code></pre>
            <p>Aside from looking much cleaner, this also reduces the load at query time significantly. Since all the joins involved can only have matches within a data center, it is well-defined to evaluate those rules directly in the Prometheus instances inside the data center itself.</p><p>Compared to the original query, the cardinality we need to deal with now scales with the size of the release scope instead of the size of the entire fleet.</p><p>This is significantly cheaper and also less likely to be affected by network issues along the way, which in turn reduces how often we need to retry the query, on average.</p>
    <div>
      <h3>Distributed query processing</h3>
      <a href="#distributed-query-processing">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2X1dlDO1DYXfFo29DRIBeX/c2d8a7f88d24dc4b562d068a4774a6dd/4.png" />
          </figure><p>HMD and the Thanos Querier, depicted above, are stateless components that can run anywhere, with highly available deployments in North America and Europe. Let us quickly recap what happens when we evaluate the SLI expression from HMD in our introduction:</p>
            <pre><code>sum(rate(http_request_count{code="500"}[10m]))
/ 
sum(rate(http_request_count[10m]))</code></pre>
            <p>Upon receiving this query from HMD, the Thanos Querier will start requesting raw time series data for the “http_request_count” metric from its connected <a href="https://thanos.io/v0.4/components/sidecar/"><u>Thanos Sidecar</u></a> and <a href="https://thanos.io/tip/components/store.md/"><u>Thanos Store</u></a> instances all over the world, wait for all the data to be transferred to it, decompress it, and finally compute its result:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4jcK7cvQfMtqeQMeuFnTMz/3bdca2132a4e700050512dc15d823ef3/5.png" />
          </figure><p>While this works, it is not optimal for several reasons. We have to wait for raw data from thousands of data sources all over the world to arrive in one location before we can even start to decompress it, and then we are limited by all the data being processed by one instance. If we double the number of data centers, we also need to double the amount of memory we allocate for query evaluation.</p><p>Many SLIs come in the form of simple aggregations, typically to boil down some aspect of the service's health to a number, such as the percentage of errors. As with the aforementioned recording rule, those aggregations are often distributive — we can evaluate them inside the data center and coalesce the sub-aggregations again to arrive at the same result.</p><p>To illustrate, if we had a recording rule per data center, we could rewrite our example like this:</p>
            <pre><code>sum(datacenter:http_request_count:rate10m{code="500"})
/ 
sum(datacenter:http_request_count:rate10m)</code></pre>
            <p>This would solve our problems, because instead of requesting raw time series data for high-cardinality metrics, we would request pre-aggregated query results. Generally, these pre-aggregated results are an order of magnitude less data that needs to be sent over the network and processed into a final result.</p><p>However, recording rules come with a steep write-time cost in our architecture: they are evaluated frequently across thousands of Prometheus instances in production, just to speed up a less frequent ad-hoc batch process. Scaling recording rules alongside our growing set of service health SLIs would quickly become unsustainable. So we had to go back to the drawing board.</p><p>It would be great if we could evaluate data center-scoped queries remotely and coalesce their results back again — for arbitrary queries and at runtime. To illustrate, we would like to evaluate our example like this:</p>
            <pre><code>(sum(rate(http_request_count{code="500", datacenter="dc1"}[10m])) + ...)
/
(sum(rate(http_request_count{datacenter="dc1"}[10m])) + ...)</code></pre>
            <p>This is exactly what Thanos’ <a href="https://thanos.io/tip/proposals-done/202301-distributed-query-execution.md/"><u>distributed query engine</u></a> is capable of doing. Instead of requesting raw time series data, we request data center scoped aggregates and only need to send those back home where they get coalesced back again into the full query result:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41XYc4JFFrNsmr3p0nYD2h/e3719dbe8fb8055cbb8f72c88729dfd9/6.png" />
          </figure><p>Note that we ensure all the expensive data paths are as short as possible by utilizing R2 <a href="https://developers.cloudflare.com/r2/reference/data-location/#location-hints"><u>location hints</u></a> to specify the primary access region.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6H31Ad1XCjJWpuAQGPYSlt/ddaa971e4fa59bffdf283e10d0be0b8c/7.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kxTrCfN0wZiNu90MNaOm8/8c3df56f9b8724b0ec01e6e9270bb989/8.png" />
          </figure><p>To measure the effectiveness of this approach, we used <a href="https://cloudprober.org/"><u>Cloudprober</u></a> and wrote probes that evaluate the relatively cheap, but still global, query <code>count(node_uname_info)</code>.</p>
            <pre><code>sum(thanos_cloudprober_latency:rate6h{component="thanos-central"})
/
sum(thanos_cloudprober_latency:rate6h{component="thanos-distributed"})</code></pre>
            <p>In the graph below, the y-axis represents the speedup of the distributed execution deployment relative to the centralized deployment. On average, distributed execution responds 3–5 times faster to probes.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1FzUb7uthpVG0yeSEFQxnd/6061eb1bc585565a47017ed9ddddae0a/9.png" />
          </figure><p>Anecdotally, even slightly more complex queries quickly time out or even crash our centralized deployment, but they still can be comfortably computed by the distributed one. For a slightly more expensive query like <code>count(up)</code> for about 17 million scrape jobs, we had difficulty getting the centralized querier to respond and had to scope it to a single region, which took about 42 seconds:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lhd497YLjRAhmSz55jGlG/2b258c6f634dc8a435f78703e38ec56c/10.png" />
          </figure><p>Meanwhile, our distributed queriers were able to return the full result in about 8 seconds:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tr2sKnLeKrzXLnMsfRViZ/675a2aade0d922548fc07a0bd8ad5fc5/11.png" />
          </figure>
    <div>
      <h3>Congestion control</h3>
      <a href="#congestion-control">
        
      </a>
    </div>
    <p>HMD batch processing leads to spiky load patterns that are hard to provision for. In a perfect world, it would issue a steady and predictable stream of queries. At the same time, HMD batch queries have lower priority to us than the queries that on-call engineers issue to triage production problems. We tackle both of those problems by introducing an adaptive priority-based concurrency control mechanism. After reading Netflix’s work on <a href="https://netflixtechblog.medium.com/performance-under-load-3e6fa9a60581"><u>adaptive concurrency limits</u></a>, we implemented a similar proxy to dynamically limit batch request flow when Thanos SLOs start to degrade. For example, one such SLO is its cloudprober failure rate over the last minute:</p>
            <pre><code>sum(thanos_cloudprober_fail:rate1m)
/
(sum(thanos_cloudprober_success:rate1m) + sum(thanos_cloudprober_fail:rate1m))</code></pre>
            <p>We apply jitter, a random delay, to smooth query spikes inside the proxy. Since batch processing prioritizes overall query throughput over individual query latency, jitter helps HMD send a burst of queries, while allowing Thanos to process queries gradually over several minutes. This reduces instantaneous load on Thanos, improving overall throughput, even if individual query latency increases. Meanwhile, HMD encounters fewer errors, minimizing retries and boosting batch efficiency.</p><p>Our solution simulates how TCP’s congestion control algorithm, <a href="https://en.wikipedia.org/wiki/Additive_increase/multiplicative_decrease"><u>additive increase/multiplicative decrease</u></a>, works. When the proxy server receives a successful request from Thanos, it allows one more concurrent request through next time. If backpressure signals breach defined thresholds, the proxy limits the congestion window proportional to the failure rate.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4M4s0lmq8h3bmZumLPUfXu/c3a967d51b367d155c26f4d95c673cd1/12.png" />
          </figure><p>As the failure rate increases past the “warn” threshold, approaching the “emergency” threshold, the proxy gets exponentially closer to allowing zero additional requests through the system. However, to prevent bad signals from halting all traffic, we cap the loss with a configured minimum request rate.</p>
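            <p>A minimal sketch of the additive increase/multiplicative decrease idea follows; the threshold values, the shape of the decrease, and all names are illustrative assumptions, not our proxy’s exact implementation:</p>
            <pre><code>class AimdLimiter {
  private limit = 10              // current congestion window: max concurrent batch requests
  private readonly minLimit = 1   // never drop below a minimum request rate
  private readonly maxLimit = 500

  // Additive increase: each successful request allows one more concurrent request.
  onSuccess(): void {
    this.limit = Math.min(this.maxLimit, this.limit + 1)
  }

  // Multiplicative decrease: shrink the window as the failure rate moves from
  // the "warn" threshold toward the "emergency" threshold.
  onBackpressure(failureRate: number, warn = 0.01, emergency = 0.05): void {
    if (failureRate &lt;= warn) return
    const severity = Math.min(1, (failureRate - warn) / (emergency - warn))
    this.limit = Math.max(this.minLimit, Math.floor(this.limit * (1 - severity)))
  }

  get concurrency(): number {
    return this.limit
  }
}</code></pre>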
    <div>
      <h3>Columnar experiments</h3>
      <a href="#columnar-experiments">
        
      </a>
    </div>
    <p>Because Thanos deals with Prometheus TSDB blocks that were never designed to be read over a slow medium like object storage, it does a lot of random I/O. Inspired by <a href="https://www.youtube.com/watch?v=V8Y4VuUwg8I"><u>this excellent talk</u></a>, we started storing our time series data in <a href="https://parquet.apache.org/"><u>Parquet</u></a> files, with some promising preliminary results. It is still too early to draw any robust conclusions from this project, but we wanted to share our implementation with the Prometheus community, so we are publishing our experimental object storage gateway as <a href="https://github.com/cloudflare/parquet-tsdb-poc"><u>parquet-tsdb-poc</u></a> on GitHub.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We built Health Mediated Deployments (HMD) to enable safe and reliable software releases while pushing the limits of our <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> infrastructure. Along the way, we significantly improved Thanos’ ability to handle high-load queries, reducing batch runtimes by 15x.</p><p>But this is just the beginning. We’re excited to continue working with the observability, resiliency, and R2 teams to push our infrastructure to its limits — safely and at scale. As we explore new ways to enhance observability, one exciting frontier is optimizing time series storage for object storage.</p><p>We’re sharing this work with the community as an open-source proof of concept. If you’re interested in exploring Parquet-based time series storage and its potential for large-scale observability, check out the GitHub project linked above.</p> ]]></content:encoded>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Prometheus]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">5D2xgj0sJ6yj8oOh6qrNUb</guid>
            <dc:creator>Harshal Brahmbhatt</dc:creator>
            <dc:creator>Kevin Deems</dc:creator>
            <dc:creator>Nina Giunta</dc:creator>
            <dc:creator>Michael Hoffmann</dc:creator>
        </item>
        <item>
            <title><![CDATA[Minimizing on-call burnout through alerts observability]]></title>
            <link>https://blog.cloudflare.com/alerts-observability/</link>
            <pubDate>Fri, 29 Mar 2024 13:00:36 GMT</pubDate>
            <description><![CDATA[ Learn how Cloudflare used open-source tools to enhance alert observability, leading to increased resilience and improved on-call team well-being ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eGxy8JaWamWJ4cahiswIx/859834037827feccb185badb8b35a37e/Screenshot-2024-01-18-at-10.49.47-PM.png" />
            
            </figure><p>Many people have probably come across the ‘<a href="https://en.wikipedia.org/wiki/Gunshow_(webcomic)#cite_note-Verge-8:~:text=A%202013%20Gunshow,internet%20meme.">this is fine</a>’ meme or the <a href="https://gunshowcomic.com/648">original comic</a>. This is what a typical day for a lot of on-call personnel looks like. On-calls get a lot of alerts, and dealing with too many alerts can result in alert fatigue – a feeling of exhaustion caused by responding to alerts that lack priority or clear actions. Ensuring the alerts are actionable and accurate, not false positives, is crucial because repeated false alarms can desensitize on-call personnel. To this end, within Cloudflare, numerous teams conduct periodic alert analysis, with each team developing its own dashboards for reporting. As members of the Observability team, we've encountered situations where teams reported inaccuracies in alerts or instances where alerts failed to trigger, and we have helped teams deal with noisy or flapping alerts.</p><p>Observability aims to enhance insight into the technology stack by gathering and analyzing a broader spectrum of data. In this blog post, we delve into alert observability, discussing its importance and Cloudflare's approach to achieving it. We'll also explore how we overcome shortcomings in alert reporting within our architecture to simplify troubleshooting using open-source tools and best practices. Join us to understand how we use alerts effectively, and how simple tools and practices can enhance alert observability, resilience, and on-call health.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qWK558VCZwC2jF5d2zaYJ/6046122acf14201640f4a8ae757d94b1/Screenshot-2024-01-18-at-10.52.33-PM.png" />
            
            </figure><p>Being on-call can disrupt sleep patterns, impact social life, and hinder leisure activities, potentially leading to burnout. While burnout can be caused by several factors, one contributing factor is excessively noisy alerts, or alerts that are neither important nor actionable. Analyzing alerts can help mitigate the risk of such burnout by reducing unnecessary interruptions and improving the overall efficiency of the on-call process. It involves periodic review and feedback to the system for improving alert quality. Unfortunately, only some companies or teams do alert analysis, even though it is essential information that every on-call or manager should have access to.</p><p>Alert analysis is useful for on-call personnel, enabling them to easily see which alerts fired during their shift, which helps them draft handover notes and not miss anything important. In addition, managers can generate reports from these stats to see improvements over time, as well as to help assess on-call vulnerability to burnout. Alert analysis also helps when writing incident reports, whether to confirm which alerts fired or to determine when an incident started.</p><p>Let’s first understand the alerting stack and how we used open-source tools to gain greater visibility into it, which allowed us to analyze and optimize its effectiveness.</p>
    <div>
      <h3>Prometheus architecture at Cloudflare</h3>
      <a href="#prometheus-architecture-at-cloudflare">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/B78dhbx9vkOS1axZyxHtJ/32b86612925a465cd4b3220830dbcd27/Untitled-2.png" />
            
            </figure><p>At Cloudflare, we <a href="/how-cloudflare-runs-prometheus-at-scale/">rely heavily on Prometheus</a> for monitoring. We have data centers in more than 310 cities, and each has several <a href="https://prometheus.io/docs/introduction/faq/#:~:text=Apache%202.0%20license.-,What%20is%20the%20plural%20of%20Prometheus%3F,Prometheus%27%20is%20%27Prometheis%27">Prometheis</a>. In total, we have over 1100 Prometheus servers. All alerts are sent to a central <a href="https://prometheus.io/docs/alerting/latest/alertmanager/">Alertmanager</a>, where we have various integrations to route them. Additionally, <a href="https://prometheus.io/docs/alerting/latest/configuration/#:~:text=The%20Alertmanager%20will%20send%20HTTP%20POST%20requests%20in%20the%20following%20JSON%20format%20to%20the%20configured%20endpoint%3A">using an alertmanager webhook</a>, we store all alerts in a datastore for analysis.</p>
    <div>
      <h3>Lifecycle of an alert</h3>
      <a href="#lifecycle-of-an-alert">
        
      </a>
    </div>
    <p>Prometheus collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when the alerting conditions are met. Once an alert goes into firing state, it will be sent to the alertmanager.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UkkuPB3o9pI0XIrrYueGn/21cde72388f6d8926ffb4f1fcf488a85/Screenshot-2024-01-18-at-10.53.57-PM.png" />
            
            </figure><p>Depending on the configuration, once Alertmanager receives an alert, it can inhibit, group, silence, or route the alerts to the correct receiver integration, such as chat, PagerDuty, or ticketing system. When configured properly, Alertmanager can mitigate a lot of alert noise. Unfortunately, that is not the case all the time, as not all alerts are optimally configured.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Ne1iAiuXESU3YsK9AmO4r/00dcd310477cad614b8d1c88b4a9d6e3/Screenshot-2024-03-26-at-1.13.46-PM.png" />
            
            </figure><p><i>In Alertmanager, alerts initially enter the firing state, where they may be inhibited or silenced. They return to the firing state when the silence expires or the inhibiting alert resolves, and eventually transition to the resolved state.</i></p><p>Alertmanager sends notifications for <code>firing</code> and <code>resolved</code> alert events via webhook integration. We were using <a href="https://github.com/cloudflare/alertmanager2es">alertmanager2es</a>, which receives webhook alert notifications from Alertmanager and inserts them into an Elasticsearch index for searching and analysis. Alertmanager2es has been a reliable tool for us over the years, offering ways to monitor alerting volume, noisy alerts and do some kind of alert reporting. However, it had its limitations. The absence of <code>silenced</code> and <code>inhibited</code> alert states made troubleshooting issues challenging. We often found ourselves guessing why an alert didn't trigger - was it silenced by another alert or perhaps inhibited by one? Without concrete data, we lacked the means to confirm what was truly happening.</p><p>Since the Alertmanager doesn’t provide notifications for <code>silenced</code> or <code>inhibited</code> alert events via webhook integration, the alert reporting we were doing was somewhat lacking or incomplete. However, the <a href="https://raw.githubusercontent.com/prometheus/alertmanager/master/api/v2/openapi.yaml">Alertmanager API</a> provides querying capabilities and by querying the <code>/api/alerts</code> alertmanager endpoint, we can get the <code>silenced</code> and <code>inhibited</code> alert states. Having all four states in a datastore will enhance our ability to improve alert reporting and troubleshoot Alertmanager issues.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QeyKMHibWHpSpAOJjHjSM/55e4385b5a8367feb29958062ebe2846/Screenshot-2024-03-26-at-1.14.02-PM.png" />
            
            </figure><p><i>Interfaces for providing information about alert states</i></p>
    <div>
      <h2>Solution</h2>
      <a href="#solution">
        
      </a>
    </div>
    <p>We opted to aggregate all states of the alerts (firing, silenced, inhibited, and resolved) into a datastore. Given that we're gathering data from two distinct sources (the webhook and API) each in varying formats and potentially representing different events, we correlate alerts from both sources using the fingerprint field. The <a href="https://github.com/prometheus/common/blob/main/model/alert.go#L48-L52">fingerprint</a> is a unique hash of the alert’s label set which enables us to match alerts across responses from the Alertmanager webhook and API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NKcBwPWjrCou8eQnGPs5u/3bb1d9631cd3d18fe8e356cc6685cd7c/Screenshot-2024-01-18-at-10.59.39-PM.png" />
            
            </figure><p><i>Alertmanager webhook and API response of same alert event</i></p><p>The Alertmanager API offers additional fields compared to the webhook (highlighted in pastel red on the right), such as <code>silencedBy</code> and <code>inhibitedBy</code> IDs, which aid in identifying silenced and inhibited alerts. We store both webhook and API responses in the datastore as separate rows. While querying, we match the alerts using the fingerprint field.</p><p>We decided to use a <a href="https://vector.dev/">vector.dev</a> instance to transform the data as necessary, and store it in a data store. Vector.dev (<a href="https://www.datadoghq.com/blog/datadog-acquires-timber-technologies-vector/">acquired by Datadog</a>) is an open-source, high-performance, <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability data pipeline</a> that supports a vast range of sources to read data from and supports a lot of sinks for writing data to, as well as a variety of data transformation operations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13sHyssbezB3dZyEwKpWw6/bcb33d2dbadd33670dcbb8abf507af50/Screenshot-2024-01-18-at-10.50.14-PM.png" />
            
            </figure><p><i>Here, we use one http_server vector instance to receive Alertmanager webhook notifications, two http_client sources to query the alerts and silences API endpoints, and two sinks for writing all of the state logs into the alerts and silences tables in ClickHouse</i></p><p>Although we use ClickHouse to store this data, any other database can be used here. ClickHouse was chosen as a data store because it provides various data manipulation options. It allows aggregating data during insertion using Materialized Views, reduces duplicates with the ReplacingMergeTree table engine, and supports JOIN statements.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1b6fKIha9P9fUt8Rc2WQOl/38cec0a498874d33e8695104851e69ec/Untitled-1-1.png" />
            
            </figure><p>If we were to create individual columns for all the alert labels, the number of columns would keep growing with every new alert and unique label. Instead, we decided to create individual columns for a few common labels like alert priority, instance, dashboard, alert-ref, alertname, etc., which helps us analyze the data in general, and to keep all other labels in a column of type <code>Map(String, String)</code>. This was done because we wanted to keep all the labels in the datastore with minimal resource usage and allow users to query specific labels or filter alerts based on particular labels. For example, we can select all Prometheus alerts using <code>labelsmap['service'] = 'Prometheus'</code>.</p>
    <div>
      <h2>Dashboards</h2>
      <a href="#dashboards">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14NoQxvbOYttnGrlFmIR3Z/6d0299ffa240ba562da42ccb8a6e2ddd/Screenshot-2024-01-18-at-11.02.31-PM.png" />
            
            </figure><p>We built multiple dashboards on top of this data:</p><ul><li><p><b>Alerts overview</b>: To get insights into all the alerts the Alertmanager receives.</p></li><li><p><b>Alertname overview</b>: To drill down on a specific alert.</p></li><li><p><b>Alerts overview by receiver:</b> This is similar to alerts overview but specific to a team or receiver.</p></li><li><p><b>Alerts state timeline</b>: This dashboard shows a snapshot of alert volume at a glance.</p></li><li><p><b>Jiralerts overview</b>: To get insights into the alerts the ticket system receives.</p></li><li><p><b>Silences overview</b>: To get insights into the Alertmanager silences.</p></li></ul>
    <div>
      <h3>Alerts overview</h3>
      <a href="#alerts-overview">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pclX6PtvC0zJRhX4ibl9U/8125519c56fd4c7bd0f6286f7b374cda/Screenshot-2024-03-26-at-1.14.24-PM.png" />
            
            </figure><p>The image is a screenshot of the collapsed alerts overview dashboard by receiver. This dashboard comprises general stats, components, services, and alertname breakdown. The dashboard also highlights the number of P1 / P2 alerts in the last one day / seven days / thirty days, top alerts for the current quarter, and quarter-to-quarter comparison.</p>
    <div>
      <h3>Component breakdown</h3>
      <a href="#component-breakdown">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2RQfVZgBxphfO13f6AtBSA/b91ccaeb93740218f5699b4d003b4ea9/Screenshot-2024-01-19-at-3.47.38-PM.png" />
            
            </figure><p>We route alerts to teams, and a team can have multiple services or components. This panel shows firing alert counts per component over time for a receiver. For example, alerts sent to the Observability team cover components like logging, metrics, traces, and errors, so the panel shows at a glance which component is noisy and when.</p>
    <div>
      <h3>Timeline of alerts</h3>
      <a href="#timeline-of-alerts">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/cqVYUsyX4kospY8JWdjGd/1aa9a0ad96bbb28f503983b34d4a5740/Screenshot-2024-01-19-at-3.22.12-PM.png" />
            
            </figure><p>We created this swimlane view using Grafana’s state timeline panel for the receivers. The panel shows how busy the on-call was, and when. Red here means the alert started firing, orange means the alert is still active, and green means it has resolved. It displays the start time, active duration, and resolution of an alert. The highlighted alert is changing state from firing to resolved too frequently: it looks like a flapping alert. Flapping can happen when alerts are not configured properly and need tweaking, such as adjusting the alert threshold or increasing the <code>for</code> duration in the alerting rule. The <code>for</code> field in an alerting rule adds time tolerance before an alert starts firing. In other words, the alert won’t fire unless the condition is met for ‘X’ minutes.</p>
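            <p>As an illustration of how flapping can be detected from the stored state changes (the event shape and threshold are assumptions, not our exact dashboard query), counting firing/resolved transitions within a look-back window is enough to flag a noisy alert:</p>
            <pre><code>type AlertState = "firing" | "resolved"

interface AlertEvent {
  timestamp: Date
  state: AlertState
}

// An alert is considered flapping if it changes state more than maxTransitions
// times within the events passed in (e.g. one on-call shift).
function isFlapping(events: AlertEvent[], maxTransitions = 4): boolean {
  const sorted = [...events].sort((a, b) =&gt; a.timestamp.getTime() - b.timestamp.getTime())
  let transitions = 0
  for (let i = 1; i &lt; sorted.length; i++) {
    if (sorted[i].state !== sorted[i - 1].state) transitions++
  }
  return transitions &gt; maxTransitions
}</code></pre>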
    <div>
      <h2>Findings</h2>
      <a href="#findings">
        
      </a>
    </div>
    <p>Our analysis surfaced a few interesting findings. Some alerts were firing without a notify label set, which means they were not being sent or routed to any team and were only creating unnecessary load on the Alertmanager. We also found a few components generating a lot of alerts; when we dug in, they turned out to belong to a decommissioned cluster whose alerts had never been removed. These dashboards gave us excellent visibility and cleanup opportunities.</p>
    <div>
      <h3>Alertmanager inhibitions</h3>
      <a href="#alertmanager-inhibitions">
        
      </a>
    </div>
    <p>Alertmanager inhibition allows suppressing a set of alerts or notifications based on the presence of another set of alerts. We found that Alertmanager inhibitions sometimes did not work, and since there was no way to know about this, we only learned about it when a user reported being alerted for inhibited alerts. To understand failed inhibitions, imagine a Venn diagram of firing and inhibited alerts. Ideally, there should be no overlap, because inhibited alerts shouldn’t be firing. If there is an overlap, inhibited alerts are firing, and the alerts in that overlap are failed inhibitions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7icbS8sRQDJJrK21dGzD2u/8adba907e2fc6a7514468c829788c08b/Screenshot-2024-03-26-at-1.14.39-PM.png" />
            
            </figure><p><i>Failed inhibition Venn diagram</i></p><p>After storing alert notifications in ClickHouse, we were able to write a query that finds the fingerprints of the <code>alertnames</code> where inhibitions were failing:</p>
            <pre><code>SELECT $rollup(timestamp) as t, count() as count
FROM
(
    SELECT
        fingerprint, timestamp
    FROM alerts
    WHERE
        $timeFilter
        AND status.state = 'firing'
    GROUP BY
        fingerprint, timestamp
) AS firing
ANY INNER JOIN
(
    SELECT
        fingerprint, timestamp
    FROM alerts
    WHERE
        $timeFilter
        AND status.state = 'suppressed' AND notEmpty(status.inhibitedBy)
    GROUP BY
        fingerprint, timestamp
) AS suppressed USING (fingerprint)
GROUP BY t</code></pre>
            <p>The first panel in the image below shows the total number of firing alerts, while the second shows the number of failed inhibitions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mIaWG4xVdNgc6319DDTzn/a277241d0ff845578f627be148330de0/Screenshot-2024-03-15-at-10.40.59-PM.png" />
            
            </figure><p>We can also create a breakdown for each failed inhibited alert:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LSWZNB9WQ2d446vnL7V3B/f99463370e71486a8f12335ba14b5ea2/Screenshot-2024-01-24-at-5.03.08-PM.png" />
            
            </figure><p>By looking up the fingerprints in the database, we could map the alert inhibitions and found that the failing inhibitions formed an inhibition loop. For example, alert <code>Service_XYZ_down</code> is inhibited by alert <code>server_OOR</code>, alert <code>server_OOR</code> is inhibited by alert <code>server_down</code>, and <code>server_down</code> is inhibited by alert <code>server_OOR</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LELAVh6xxaSuVB2S43Emx/c337de542422b8b72c778bcc7c25ac04/Screenshot-2024-03-26-at-1.14.54-PM.png" />
            
            </figure><p>Failed inhibitions can be avoided if alert inhibitions are configured carefully.</p>
    <div>
      <h3>Silences</h3>
      <a href="#silences">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4l3wAB9Q2jLts6e6oN23Se/0fa489b3b08b1f132197731f390ea6e8/Screenshot-2024-03-26-at-1.15.09-PM.png" />
            
            </figure><p>Alertmanager provides a mechanism to silence an alert while it is being worked on or during maintenance. A silence mutes alerts for a given time and is configured with matchers, which can be an exact match, a regex, an alert name, or any other label; the silence matcher doesn’t necessarily translate to the alertname. With alert analysis, we could map alerts to silence IDs with a JOIN query on the alerts and silences tables. We also discovered a lot of stale silences that had been created for a long duration and were no longer relevant.</p>
    <div>
      <h2>DIY Alert analysis</h2>
      <a href="#diy-alert-analysis">
        
      </a>
    </div>
    <p><a href="https://github.com/cloudflare/cloudflare-blog/tree/master/2024-03-alerts-observability">The directory</a> contains a basic demo for implementing alerts observability. Running `docker-compose up` spawns several containers, including Prometheus, Alertmanager, Vector, ClickHouse, and Grafana. The vector.dev container queries the Alertmanager alerts API and writes the data into ClickHouse after transforming it. The Grafana dashboard showcases a demo of Alerts and Silences overview.</p><p>Make sure you have docker installed and run <code>docker compose up</code> to get started.</p><p>Visit <a href="http://localhost:3000/dashboards">http://localhost:3000/dashboards</a> to explore the prebuilt demo dashboards.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As part of the observability team, we manage the Alertmanager, which is a multi-tenant system. It's crucial for us to have visibility to detect and address system misuse and ensure proper alerting. Alert analysis tools have significantly enhanced the experience for on-call personnel and our team, offering swift access to the alerting system. Alerts observability has made it easier to troubleshoot events such as why an alert did not fire, why an inhibited alert fired, or which alert silenced or inhibited another alert, providing valuable insights for improving alert management.</p><p>Moreover, alerts overview dashboards facilitate rapid review and adjustment, streamlining operations. Teams use these dashboards in their weekly alert reviews to provide tangible evidence of how an on-call shift went and to identify which alerts fire most frequently; those alerts become candidates for cleanup or aggregation, curbing system misuse and bolstering overall alert management. We can also pinpoint services that may require particular attention. Alerts observability has empowered some teams to make informed decisions about on-call configurations, such as transitioning to longer but less frequent shifts or integrating on-call and unplanned work shifts.</p><p>In conclusion, alert observability plays a crucial role in averting burnout by minimizing interruptions and making on-call duties more efficient. Offering alerts observability as a service benefits all teams by obviating the need for individual dashboard development and fostering a proactive monitoring culture. If you found this blog post interesting and want to work on observability, please check out our job openings – we’re hiring for <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Production+Engineering&amp;title=alerting">Alerting</a> and <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Production+Engineering&amp;title=logging">Logging</a>!</p> ]]></content:encoded>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Prometheus]]></category>
            <category><![CDATA[Alertmanager]]></category>
            <guid isPermaLink="false">7vVEEoczGFUNpXkkopTmEU</guid>
            <dc:creator>Monika Singh</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare runs Prometheus at scale]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-runs-prometheus-at-scale/</link>
            <pubDate>Fri, 03 Mar 2023 14:00:00 GMT</pubDate>
            <description><![CDATA[ Here at Cloudflare we run over 900 instances of Prometheus with a total of around 4.9 billion time series. Operating such a large Prometheus deployment doesn’t come without challenges. In this blog post we’ll cover some of the issues we hit and how we solved them. ]]></description>
            <content:encoded><![CDATA[ <p>We use <a href="https://prometheus.io/">Prometheus</a> to gain insight into all the different pieces of hardware and software that make up our global network. Prometheus allows us to measure health &amp; performance over time and, if there’s anything wrong with any service, let our team know before it becomes a problem.</p><p>At the time of writing this post we run 916 Prometheus instances with a total of around 4.9 billion time series. Here’s a screenshot that shows exact numbers:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4eQSVoAaOO2Bi1xpCxbkMP/c8086dc6482c75d73cbd4e45cd52e4b8/pasted-image-0--7-.png" />
            
            </figure><p>That’s an average of around 5 million time series per instance, but in reality we have a mixture of very tiny and very large instances, with the biggest instances storing around 30 million time series each.</p><p>Operating such a large Prometheus deployment doesn’t come without challenges. In this blog post we’ll cover some of the issues one might encounter when trying to collect many millions of time series per Prometheus instance.</p>
    <div>
      <h2>Metrics cardinality</h2>
      <a href="#metrics-cardinality">
        
      </a>
    </div>
    <p>One of the first problems you’re likely to hear about when you start running your own Prometheus instances is <a href="https://en.wikipedia.org/wiki/Cardinality">cardinality</a>, with the most dramatic cases of this problem being referred to as “cardinality explosion”.</p><p>So let’s start by looking at what cardinality means from Prometheus' perspective, when it can be a problem and some of the ways to deal with it.</p><p>Let’s say we have an application which we want to <a href="https://prometheus.io/docs/instrumenting/clientlibs/">instrument</a>, which means add some observable properties in the form of <a href="https://prometheus.io/docs/concepts/metric_types/">metrics</a> that Prometheus can read from our application. A metric can be anything that you can express as a number, for example:</p><ul><li><p>The speed at which a vehicle is traveling.</p></li><li><p>Current temperature.</p></li><li><p>The number of times some specific event occurred.</p></li></ul><p>To create metrics inside our application we can use one of many Prometheus client libraries. Let’s pick <a href="https://github.com/prometheus/client_python">client_python</a> for simplicity, but the same concepts will apply regardless of the language you use.</p>
            <pre><code>from prometheus_client import Counter

# Declare our first metric.
# First argument is the name of the metric.
# Second argument is the description of it.
c = Counter('mugs_of_beverage_total', 'The total number of mugs drank.')

# Call inc() to increment our metric every time a mug was drank.
c.inc()
c.inc()</code></pre>
            <p>With this simple code the Prometheus client library will create a single metric. For Prometheus to collect this metric we need our application to run an HTTP server and expose our metrics there. The simplest way of doing this is by using functionality provided with client_python itself - see documentation <a href="https://github.com/prometheus/client_python#http">here</a>.</p><p>When Prometheus sends an HTTP request to our application it will receive this response:</p>
            <pre><code># HELP mugs_of_beverage_total The total number of mugs drank.
# TYPE mugs_of_beverage_total counter
mugs_of_beverage_total 2</code></pre>
            <p>This format and underlying data model are both covered extensively in Prometheus' own documentation.</p><p>Please see the <a href="https://prometheus.io/docs/concepts/data_model/">data model</a> and <a href="https://prometheus.io/docs/instrumenting/exposition_formats/">exposition format</a> pages for more details.</p><p>We can add more metrics if we like and they will all appear in the HTTP response to the metrics endpoint.</p><p>Prometheus metrics can have extra dimensions in the form of labels. We can use these to add more information to our metrics so that we can better understand what’s going on.</p><p>With our example metric we know how many mugs were consumed, but what if we also want to know what kind of beverage it was? Or maybe we want to know if it was a cold drink or a hot one? Adding labels is very easy and all we need to do is specify their names. Once we do that we need to pass label values (in the same order as label names were specified) when incrementing our counter to pass this extra information.</p><p>Let’s adjust the example code to do this.</p>
            <pre><code>from prometheus_client import Counter

c = Counter('mugs_of_beverage_total', 'The total number of mugs drank.', ['content', 'temperature'])

c.labels('coffee', 'hot').inc()
c.labels('coffee', 'hot').inc()
c.labels('coffee', 'cold').inc()
c.labels('tea', 'hot').inc()</code></pre>
            <p>Our HTTP response will now show more entries:</p>
            <pre><code># HELP mugs_of_beverage_total The total number of mugs drank.
# TYPE mugs_of_beverage_total counter
mugs_of_beverage_total{content="coffee", temperature="hot"} 2
mugs_of_beverage_total{content="coffee", temperature="cold"} 1
mugs_of_beverage_total{content="tea", temperature="hot"} 1</code></pre>
            <p>As we can see we have an entry for each unique combination of labels.</p><p>And this brings us to the definition of cardinality in the context of metrics. Cardinality is the <b>number of unique combinations of all labels</b>. The more labels you have and the more values each label can take, the more unique combinations you can create and the higher the cardinality.</p>
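            <p>A quick way to reason about this is to enumerate the label combinations. The following Python snippet is just a back-of-the-envelope sketch using the label values from our example:</p>
            <pre><code>import itertools

# The possible values of each label in our example metric.
contents = ["coffee", "tea"]
temperatures = ["hot", "cold"]

# Cardinality is the number of unique label combinations, so this metric
# can produce at most len(contents) * len(temperatures) time series.
combinations = list(itertools.product(contents, temperatures))
print(len(combinations))  # 4</code></pre>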
    <div>
      <h3>Metrics vs samples vs time series</h3>
      <a href="#metrics-vs-samples-vs-time-series">
        
      </a>
    </div>
    <p>Now we should pause to make an important distinction between <i>metrics</i> and <i>time series</i>.</p><p>A metric is an observable property with some defined dimensions (labels). In our example case it’s a Counter class object.</p><p>A time series is an instance of that metric, with a unique combination of all the dimensions (labels), plus a series of timestamp &amp; value pairs - hence the name “time series”. Names and labels tell us what is being observed, while timestamp &amp; value pairs tell us how that observable property changed over time, allowing us to plot graphs using this data.</p><p>What this means is that a single metric will create <b>one or more</b> time series. The number of time series depends purely on the number of labels and the number of all possible values these labels can take.</p><p>Every time we add a new label to our metric we risk multiplying the number of time series that will be exported to Prometheus as the result.</p><p>In our example we have two labels, “content” and “temperature”, and both of them can have two different values. So the maximum number of time series we can end up creating is four (2*2). If we add another label that can also have two values then we can now export up to eight time series (2*2*2). The more labels we have or the more distinct values they can have the more time series as a result.</p><p>If all the label values are controlled by your application you will be able to count the number of all possible label combinations. But the real risk is when you create metrics with label values coming from the outside world.</p><p>If instead of beverages we tracked the number of HTTP requests to a web server, and we used the request path as one of the label values, then anyone making a huge number of random requests could force our application to create a huge number of time series. To avoid this it’s in general best to never accept label values from untrusted sources.</p><p>To make things more complicated you may also hear about “samples” when reading Prometheus documentation. A sample is something in between metric and time series - it’s a time series value for a specific timestamp. Timestamps here can be explicit or implicit. If a sample lacks any explicit timestamp then it means that the sample represents the most recent value - it’s the current value of a given time series, and the timestamp is simply the time you make your observation at.</p><p>If you look at the HTTP response of our example metric you’ll see that none of the returned entries have timestamps. There’s no timestamp anywhere actually. This is because the Prometheus server itself is responsible for timestamps. When Prometheus collects metrics it records the time it started each collection and then it will use it to write timestamp &amp; value pairs for each time series.</p><p>That’s why what our application exports isn’t really metrics or time series - it’s samples.</p><p>Confusing? Let’s recap:</p><ul><li><p>We start with a <b>metric</b> - that’s simply a definition of something that we can observe, like the number of mugs drunk.</p></li><li><p>Our metrics are exposed as a HTTP response. That response will have a list of <b>samples</b> - these are individual instances of our metric (represented by name &amp; labels), plus the current value.</p></li><li><p>When Prometheus collects all the samples from our HTTP response it adds the timestamp of that collection and with all this information together we have a <b>time series</b>.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5lTDb6LR4pGWmpAC2mrug2/4771271c4b28e30930be606bfd5b4213/blog-4.png" />
            
            </figure>
    <div>
      <h3>Cardinality related problems</h3>
      <a href="#cardinality-related-problems">
        
      </a>
    </div>
    <p>Each time series will cost us resources since it needs to be kept in memory, so the more time series we have, the more resources metrics will consume. This is true both for client libraries and Prometheus server, but it’s more of an issue for Prometheus itself, since a single Prometheus server usually collects metrics from many applications, while an application only keeps its own metrics.</p><p>Since we know that the more labels we have the more time series we end up with, you can see when this can become a problem. Simply adding a label with two distinct values to all our metrics might double the number of time series we have to deal with. Which in turn will double the memory usage of our Prometheus server. If we let Prometheus consume more memory than it can physically use then it will crash.</p><p>This scenario is often described as “cardinality explosion” - some metric suddenly adds a huge number of distinct label values, creates a huge number of time series, causes Prometheus to run out of memory and you lose all <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> as a result.</p>
    <div>
      <h2>How is Prometheus using memory?</h2>
      <a href="#how-is-prometheus-using-memory">
        
      </a>
    </div>
    <p>To better handle problems with cardinality it’s best if we first get a better understanding of how Prometheus works and how time series consume memory.</p><p>For that let’s follow all the steps in the life of a time series inside Prometheus.</p>
    <div>
      <h3>Step one - HTTP scrape</h3>
      <a href="#step-one-http-scrape">
        
      </a>
    </div>
    <p>The process of sending HTTP requests from Prometheus to our application is called “scraping”. Inside the Prometheus configuration file we define a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config">“scrape config”</a> that tells Prometheus where to send the HTTP request, how often and, optionally, to apply extra processing to both requests and responses.</p><p>It will record the time it sends HTTP requests and use that later as the timestamp for all collected time series.</p><p>After sending a request it will parse the response looking for all the samples exposed there.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2g84l6ioJCxTsF64iZoHr8/8ae8b1a7140517cd8a8eb330482c7377/blog-1.png" />
            
            </figure>
    <div>
      <h3>Step two - new time series or an update?</h3>
      <a href="#step-two-new-time-series-or-an-update">
        
      </a>
    </div>
    <p>Once Prometheus has a list of samples collected from our application it will save it into <a href="https://pkg.go.dev/github.com/prometheus/prometheus/tsdb">TSDB</a> - Time Series DataBase - the database in which Prometheus keeps all the time series.</p><p>But before doing that it needs to first check which of the samples belong to the time series that are already present inside TSDB and which are for completely new time series.</p><p>As we mentioned before a time series is generated from metrics. There is a single time series for each unique combination of metrics labels.</p><p>This means that Prometheus must check if there’s already a time series with identical name and exact same set of labels present. Internally time series names are just another label called __name__, so there is no practical distinction between name and labels. Both of the representations below are different ways of exporting the same time series:</p>
            <pre><code>mugs_of_beverage_total{content="tea", temperature="hot"} 1
{__name__="mugs_of_beverage_total", content="tea", temperature="hot"} 1</code></pre>
            <p>Since everything is a label Prometheus can simply hash all labels using sha256 or any other algorithm to come up with a single ID that is unique for each time series.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cXuu38kxa2u51z8Me0aBP/9dcc5694aef45e5a65a19dd4d7a7b8a7/blog-2.png" />
            
            </figure><p>Knowing that it can quickly check if there are any time series already stored inside TSDB that have the same hashed value. Basically our labels hash is used as a primary key inside TSDB.</p>
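            <p>To make that concrete, here is a small Python sketch of the idea. It is not Prometheus' actual implementation (which has its own internal hashing in Go), just an illustration of how a set of labels can be reduced to a single stable ID:</p>
            <pre><code>import hashlib

def series_id(labels: dict) -&gt; str:
    # Sort the labels so the same combination always produces the same hash,
    # regardless of the order in which the labels were written.
    canonical = ",".join(f'{name}="{value}"' for name, value in sorted(labels.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

# __name__ is just another label, so both representations hash to the same ID.
a = series_id({"__name__": "mugs_of_beverage_total", "content": "tea", "temperature": "hot"})
b = series_id({"content": "tea", "temperature": "hot", "__name__": "mugs_of_beverage_total"})
assert a == b</code></pre>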
    <div>
      <h3>Step three - appending to TSDB</h3>
      <a href="#step-three-appending-to-tsdb">
        
      </a>
    </div>
    <p>Once TSDB knows if it has to insert new time series or update existing ones it can start the real work.</p><p>Internally all time series are stored <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1604-L1616">inside a map</a> on a structure called <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L65">Head</a>. That map uses labels hashes as keys and a structure called <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1827">memSeries</a> as values. Those memSeries objects are storing all the time series information. The struct definition for memSeries is fairly big, but all we really need to know is that it has a copy of all the <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1831">time series labels</a> and <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1843-L1844">chunks</a> that hold all the samples (timestamp &amp; value pairs).</p><p>Labels are stored once per each memSeries instance.</p><p>Samples are stored inside chunks using <a href="https://prometheus.io/blog/2016/05/08/when-to-use-varbit-chunks/#what-is-varbit-encoding">"varbit" encoding</a> which is a lossless compression scheme optimized for time series data. Each chunk represents a series of samples for a <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1969">specific time range</a>. This helps Prometheus query data faster since all it needs to do is first locate the memSeries instance with labels matching our query and then find the chunks responsible for time range of the query.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hE16dmyvOEjiTJUGnDVTy/11b37f49775fac13165d75ff3d7b1c78/blog-5.png" />
            
            </figure><p><a href="https://github.com/prometheus/prometheus/blob/v2.42.0/cmd/prometheus/main.go#L300-L301">By default</a> Prometheus will create a chunk per each <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/db.go#L53">two hours</a> of <b>wall clock</b>. So there would be a chunk for: 00:00 - 01:59, 02:00 - 03:59, 04:00 - 05:59, …, 22:00 - 23:59.</p><p>There’s only one chunk that we can append to, it’s called the “Head Chunk”. It’s the chunk responsible for the most recent time range, including the time of our scrape. Any other chunk holds historical samples and therefore is read-only.</p><p>There is a maximum of <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head_append.go#L1337">120</a> samples each chunk can hold. This is because once we have more than 120 samples on a chunk efficiency of “varbit” encoding drops. TSDB <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head_append.go#L1371-L1386">will try to estimate</a> when a given chunk will reach 120 samples and it will set the maximum allowed time for current Head Chunk accordingly.</p><p>If we try to append a sample with a timestamp higher than the maximum allowed time for current Head Chunk, then TSDB will create a new Head Chunk and calculate a new maximum time for it based on the rate of appends.</p><p>All chunks must be aligned to those two hour slots of wall clock time, so if TSDB was building a chunk for 10:00-11:59 and it was already “full” at 11:30 then it would create an extra chunk for the 11:30-11:59 time range.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1U540qt4t8r4OjMdKpBEIf/93aa0eb5e741683f71ef0eff002d6b1f/blog-6.png" />
            
            </figure><p>Since the default Prometheus scrape interval is one minute it would take two hours to reach 120 samples.</p><p>What this means is that using Prometheus defaults each memSeries should have a single chunk with 120 samples on it for every two hours of data.</p><p>Going back to our time series - at this point Prometheus either creates a new memSeries instance or uses already existing memSeries. Once it has a memSeries instance to work with it will append our sample to the Head Chunk. This might require Prometheus to create a new chunk if needed.</p>
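            <p>A rough back-of-the-envelope calculation shows how the scrape interval affects when the Head Chunk fills up. This is only a sketch based on the defaults described above:</p>
            <pre><code>MAX_SAMPLES_PER_CHUNK = 120
WINDOW_SECONDS = 2 * 60 * 60  # chunks are aligned to two hour wall clock slots

for scrape_interval in (60, 30, 15):
    # How long it takes to collect 120 samples at this scrape interval.
    seconds_to_fill = MAX_SAMPLES_PER_CHUNK * scrape_interval
    # Ceiling division: how many chunks end up inside a single two hour slot.
    chunks_per_window = -(-WINDOW_SECONDS // seconds_to_fill)
    print(f"{scrape_interval}s interval: chunk fills after {seconds_to_fill // 60} minutes, "
          f"~{chunks_per_window} chunk(s) per two hour slot")</code></pre>
            <p>With the default 60 second interval the chunk fills exactly as the two hour window ends, which is why each memSeries usually ends up with a single chunk per slot.</p>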
    <div>
      <h3>Step four - memory-mapping old chunks</h3>
      <a href="#step-four-memory-mapping-old-chunks">
        
      </a>
    </div>
    <p>After a few hours of Prometheus running and scraping metrics we will likely have more than one chunk on our time series:</p><ul><li><p>One “Head Chunk” - containing up to two hours of the last two hour wall clock slot.</p></li><li><p>One or more for historical ranges - these chunks are only for reading, Prometheus won’t try to append anything here.</p></li></ul><p>Since all these chunks are stored in memory Prometheus will try to reduce memory usage by writing them to disk and memory-mapping. The advantage of doing this is that memory-mapped chunks don’t use memory unless TSDB needs to read them.</p><p>The Head Chunk is never memory-mapped, it’s always stored in memory.</p>
    <div>
      <h3>Step five - writing blocks to disk</h3>
      <a href="#step-five-writing-blocks-to-disk">
        
      </a>
    </div>
    <p>Up until now all time series are stored entirely in memory and the more time series you have, the higher Prometheus memory usage you’ll see. The only exception are memory-mapped chunks which are offloaded to disk, but will be read into memory if needed by queries.</p><p>This allows Prometheus to scrape and store thousands of samples per second, our biggest instances are appending 550k samples per second, while also allowing us to query all the metrics simultaneously.</p><p>But you can’t keep everything in memory forever, even with memory-mapping parts of data.</p><p>Every two hours Prometheus will persist chunks from memory onto the disk. This process is also aligned with the wall clock but <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1489-L1494">shifted by one hour</a>.</p><p>When using Prometheus defaults and assuming we have a single chunk for each two hours of wall clock we would see this:</p><ul><li><p>02:00 - create a new chunk for 02:00 - 03:59 time range</p></li><li><p>03:00 - write a block for 00:00 - 01:59</p></li><li><p>04:00 - create a new chunk for 04:00 - 05:59 time range</p></li><li><p>05:00 - write a block for 02:00 - 03:59</p></li><li><p>…</p></li><li><p>22:00 - create a new chunk for 22:00 - 23:59 time range</p></li><li><p>23:00 - write a block for 20:00 - 21:59</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Bt2X8rR89TBWekMduXrSZ/d1cfc97d7fe7a60c431b123027755714/blog-7.png" />
            
            </figure><p>Once a chunk is written into a block it is removed from memSeries and thus from memory. Prometheus will keep each block on disk for the configured retention period.</p><p>Blocks will eventually be “compacted”, which means that Prometheus will take multiple blocks and merge them together to form a single block that covers a bigger time range. This process helps to reduce disk usage since each block has an index taking a good chunk of disk space. By merging multiple blocks together, big portions of that index can be reused, allowing Prometheus to store more data using the same amount of storage space.</p>
    <div>
      <h3>Step six - garbage collection</h3>
      <a href="#step-six-garbage-collection">
        
      </a>
    </div>
    <p>After a chunk was written into a block and removed from memSeries we might end up with an instance of memSeries that has no chunks. This would happen if any time series was no longer being exposed by any application and therefore there was no scrape that would try to append more samples to it.</p><p>A common pattern is to export software versions as a build_info metric, Prometheus itself does this too:</p>
            <pre><code>prometheus_build_info{version="2.42.0"} 1</code></pre>
            <p>When Prometheus 2.43.0 is released this metric would be exported as:</p>
            <pre><code>prometheus_build_info{version="2.43.0"} 1</code></pre>
            <p>Which means that a time series with version=”2.42.0” label would no longer receive any new samples.</p><p>Once the last chunk for this time series is written into a block and removed from the memSeries instance we have no chunks left. This means that our memSeries still consumes some memory (mostly labels) but doesn’t really do anything.</p><p>To get rid of such time series Prometheus will run “head garbage collection” (remember that Head is the structure holding all memSeries) right after writing a block. This garbage collection, among other things, will look for any <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/tsdb/head.go#L1642-L1648">time series without a single chunk</a> and remove it from memory.</p><p>Since this happens after writing a block, and writing a block happens in the middle of the chunk window (two hour slices aligned to the wall clock) the only memSeries this would find are the ones that are “orphaned” - they received samples before, but not anymore.</p>
    <div>
      <h3>What does this all mean?</h3>
      <a href="#what-does-this-all-mean">
        
      </a>
    </div>
    <p>The TSDB used in Prometheus is a special kind of database that was highly optimized for a very specific workload:</p><ul><li><p>Time series scraped from applications are kept in memory.</p></li><li><p>Samples are compressed using encoding that works best if there are continuous updates.</p></li><li><p>Chunks that are a few hours old are written to disk and removed from memory.</p></li><li><p>When time series disappear from applications and are no longer scraped they still stay in memory until all chunks are written to disk and garbage collection removes them.</p></li></ul><p>This means that Prometheus is most efficient when continuously scraping the same time series over and over again. It’s least efficient when it scrapes a time series just once and never again - doing so comes with a significant memory usage overhead when compared to the amount of information stored using that memory.</p><p>If we try to visualize what the perfect type of data Prometheus was designed for looks like, we’ll end up with this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58Sdl0opvl2leashpMABHR/2fbe55566085307e632f284c9f487695/blog-13.png" />
            
            </figure><p>A few continuous lines describing some observed properties.</p><p>If, on the other hand, we want to visualize the type of data that Prometheus is the least efficient when dealing with, we’ll end up with this instead:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/peimfAo5FJ8Z7U4N9pYtm/0779cb39815baea5a3ee3018d746c398/blog-14.png" />
            
            </figure><p>Here we have single data points, each for a different property that we measure.</p><p>Although you can tweak some of Prometheus' behavior to make it work better with short lived time series, by passing one of <a href="https://github.com/prometheus/prometheus/blob/v2.42.0/cmd/prometheus/main.go#L300-L305">the hidden flags</a>, it’s generally discouraged to do so. These flags are only exposed for testing and might have a negative impact on other parts of the Prometheus server.</p><p>To get a better understanding of the impact of a short lived time series on memory usage let’s take a look at another example.</p><p>Let’s see what happens if we start our application at 00:25, allow Prometheus to scrape it once while it exports:</p>
            <pre><code>prometheus_build_info{version="2.42.0"} 1</code></pre>
            <p>And then immediately after the first scrape we upgrade our application to a new version:</p>
            <pre><code>prometheus_build_info{version="2.43.0"} 1</code></pre>
            <p>At 00:25 Prometheus will create our memSeries, but we will have to wait until Prometheus writes a block that contains data for 00:00-01:59 and runs garbage collection before that memSeries is removed from memory, which will happen at 03:00.</p><p>This single sample (data point) will create a time series instance that will stay in memory for over two and a half hours using resources, just so that we have a single timestamp &amp; value pair.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/42ihJDKAWyj7hT4DicZoxV/abff5645102c8c22da0c1ad8e811445a/blog-8.png" />
            
            </figure><p>If we were to continuously scrape a lot of time series that only exist for a very brief period then we would be slowly accumulating a lot of memSeries in memory until the next garbage collection.</p><p>Looking at memory usage of such Prometheus server we would see this pattern repeating over time:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qccLTfcv0dyJ6bFnEkkd6/ad547942173e2f6333412f694fbb0faa/blog-15.png" />
            
            </figure><p>The important information here is that <b>short lived time series are expensive</b>. A time series that was only scraped once is <b>guaranteed to live in Prometheus for one to three hours</b>, depending on the exact time of that scrape.</p>
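            <p>If it helps to see the arithmetic, here is a small sketch that computes when a series scraped only once would finally be removed, based purely on the two hour chunk alignment and the one hour shifted block write described above:</p>
            <pre><code>from datetime import datetime, timedelta

def removal_time(scrape_time: datetime) -&gt; datetime:
    # Chunks cover two hour wall clock slots: 00:00-01:59, 02:00-03:59, ...
    slot_start = scrape_time.replace(
        hour=(scrape_time.hour // 2) * 2, minute=0, second=0, microsecond=0
    )
    # The block for that slot is written one hour after the slot ends,
    # and head garbage collection runs right after the write.
    return slot_start + timedelta(hours=3)

scrape = datetime(2023, 3, 3, 0, 25)
print(removal_time(scrape) - scrape)  # 2:35:00 -&gt; over two and a half hours in memory</code></pre>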
    <div>
      <h2>The cost of cardinality</h2>
      <a href="#the-cost-of-cardinality">
        
      </a>
    </div>
    <p>At this point we should know a few things about Prometheus:</p><ul><li><p>We know what a metric, a sample and a time series is.</p></li><li><p>We know that the more labels on a metric, the more time series it can create.</p></li><li><p>We know that each time series will be kept in memory.</p></li><li><p>We know that time series will stay in memory for a while, even if they were scraped only once.</p></li></ul><p>With all of that in mind we can now see the problem - a <b>metric with high cardinality</b>, especially one with label values that come from the outside world, can easily create a huge number of <b>time series</b> in a very short time, causing <b>cardinality explosion</b>. This would inflate Prometheus memory usage, which can cause Prometheus server to crash, if it uses all available physical memory.</p><p>To get a better idea of this problem let’s adjust our example metric to track HTTP requests.</p><p>Our metric will have a single label that stores the request path.</p>
            <pre><code>from prometheus_client import Counter

c = Counter('http_requests_total', 'The total number of HTTP requests.', ['path'])

# HTTP request handler our web server will call
def handle_request(path):
  c.labels(path).inc()
  ...</code></pre>
            <p>If we make a single request using the curl command:</p>
            <pre><code>&gt; curl https://app.example.com/index.html</code></pre>
            <p>We should see these time series in our application:</p>
            <pre><code># HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{path="/index.html"} 1</code></pre>
            <p>But what happens if an evil hacker decides to send a bunch of random requests to our application?</p>
            <pre><code>&gt; curl https://app.example.com/jdfhd5343
&gt; curl https://app.example.com/3434jf833
&gt; curl https://app.example.com/1333ds5
&gt; curl https://app.example.com/aaaa43321</code></pre>
            <p>Extra time series would be created:</p>
            <pre><code># HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{path="/index.html"} 1
http_requests_total{path="/jdfhd5343"} 1
http_requests_total{path="/3434jf833"} 1
http_requests_total{path="/1333ds5"} 1
http_requests_total{path="/aaaa43321"} 1</code></pre>
            <p>With 1,000 random requests we would end up with 1,000 time series in Prometheus. If our metric had more labels and all of them were set based on the request payload (HTTP method name, IPs, headers, etc) we could easily end up with millions of time series.</p><p>Often it doesn’t require any malicious actor to cause cardinality related problems. A common class of mistakes is to have an error label on your metrics and pass raw error objects as values.</p>
            <pre><code>from prometheus_client import Counter

c = Counter('errors_total', 'The total number of errors.', ['error'])

def my_func():
  try:
    ...
  except Exception as err:
    c.labels(str(err)).inc()</code></pre>
            <p>This works well if errors that need to be handled are generic, for example “Permission Denied”:</p>
            <pre><code>errors_total{error="Permission Denied"} 1</code></pre>
            <p>But if the error string contains some task specific information, for example the name of the file that our application didn’t have access to, or a TCP connection error, then we might easily end up with high cardinality metrics this way:</p>
            <pre><code>errors_total{error="file not found: /myfile.txt"} 1
errors_total{error="file not found: /other/file.txt"} 1
errors_total{error="read udp 127.0.0.1:12421-&gt;127.0.0.2:443: i/o timeout"} 1
errors_total{error="read udp 127.0.0.1:14743-&gt;127.0.0.2:443: i/o timeout"} 1</code></pre>
            <p>Once scraped all those time series will stay in memory for a minimum of one hour. It’s very easy to keep accumulating time series in Prometheus until you run out of memory.</p><p>Even Prometheus' own <a href="https://github.com/prometheus/client_golang/security/advisories/GHSA-cg3q-j54f-5p7p">client libraries had bugs</a> that could expose you to problems like this.</p>
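            <p>Both examples multiply quickly once more labels are involved. A rough calculation with some assumed numbers shows how fast request-driven label values can add up:</p>
            <pre><code># Assumed numbers, purely for illustration.
unique_paths = 1_000
http_methods = 7
client_ips = 500

# Every unique combination of label values is a separate time series.
potential_series = unique_paths * http_methods * client_ips
print(f"{potential_series:,} potential time series")  # 3,500,000</code></pre>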
    <div>
      <h2>How much memory does a time series need?</h2>
      <a href="#how-much-memory-does-a-time-series-need">
        
      </a>
    </div>
    <p>Each time series stored inside Prometheus (as a memSeries instance) consists of:</p><ul><li><p>Copy of all labels.</p></li><li><p>Chunks containing samples.</p></li><li><p>Extra fields needed by Prometheus internals.</p></li></ul><p>The amount of memory needed for labels will depend on the number and length of these. The more labels you have, or the longer the names and values are, the more memory it will use.</p><p>The way labels are stored internally by Prometheus also matters, but that’s something the user has no control over. There is an open pull request which improves memory usage of labels by <a href="https://github.com/prometheus/prometheus/pull/10991">storing all labels as a single string</a>.</p><p>Chunks will consume more memory as they slowly fill with more samples, after each scrape, and so the memory usage here will follow a cycle - we start with low memory usage when the first sample is appended, then memory usage slowly goes up until a new chunk is created and we start again.</p><p>You can calculate how much memory is needed for your time series by running this query on your Prometheus server:</p>
            <pre><code>go_memstats_alloc_bytes / prometheus_tsdb_head_series</code></pre>
            <p>Note that your Prometheus server must be configured to scrape itself for this to work.</p><p>Secondly this calculation is based on all memory used by Prometheus, not only time series data, so it’s just an approximation. Use it to get a rough idea of how much memory is used per time series and don’t assume it’s that exact number.</p><p>Thirdly Prometheus is written in <a href="https://go.dev/">Golang</a> which is a language with garbage collection. The actual amount of physical memory needed by Prometheus will usually be higher as a result, since it will include unused (garbage) memory that needs to be freed by Go runtime.</p>
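            <p>If you want to check this number for your own server, a minimal Python sketch using the Prometheus HTTP query API looks like this (assuming Prometheus runs on localhost:9090 and scrapes itself):</p>
            <pre><code>import requests

# Ask Prometheus to evaluate the expression for us via its query API.
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "go_memstats_alloc_bytes / prometheus_tsdb_head_series"},
)
result = resp.json()["data"]["result"]
if result:
    bytes_per_series = float(result[0]["value"][1])
    print(f"roughly {bytes_per_series:.0f} bytes per time series")</code></pre>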
    <div>
      <h2>Protecting Prometheus from cardinality explosions</h2>
      <a href="#protecting-prometheus-from-cardinality-explosions">
        
      </a>
    </div>
    <p>Prometheus does offer some options for dealing with high cardinality problems. There are a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config">number of options</a> you can set in your scrape configuration block. Here is the extract of the relevant options from Prometheus documentation:</p>
            <pre><code># An uncompressed response body larger than this many bytes will cause the
# scrape to fail. 0 means no limit. Example: 100MB.
# This is an experimental feature, this behaviour could
# change or be removed in the future.
[ body_size_limit: &lt;size&gt; | default = 0 ]
# Per-scrape limit on number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabeling
# the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: &lt;int&gt; | default = 0 ]

# Per-scrape limit on number of labels that will be accepted for a sample. If
# more than this number of labels are present post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_limit: &lt;int&gt; | default = 0 ]

# Per-scrape limit on length of labels name that will be accepted for a sample.
# If a label name is longer than this number post metric-relabeling, the entire
# scrape will be treated as failed. 0 means no limit.
[ label_name_length_limit: &lt;int&gt; | default = 0 ]

# Per-scrape limit on length of labels value that will be accepted for a sample.
# If a label value is longer than this number post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_value_length_limit: &lt;int&gt; | default = 0 ]

# Per-scrape config limit on number of unique targets that will be
# accepted. If more than this number of targets are present after target
# relabeling, Prometheus will mark the targets as failed without scraping them.
# 0 means no limit. This is an experimental feature, this behaviour could
# change in the future.
[ target_limit: &lt;int&gt; | default = 0 ]</code></pre>
            <p>Setting all the label length related limits allows you to avoid a situation where extremely long label names or values end up taking too much memory.</p><p>Going back to our metric with error labels we could imagine a scenario where some operation returns a huge error message, or even stack trace with hundreds of lines. If such a stack trace ended up as a label value it would take a lot more memory than other time series, potentially even megabytes. Since labels are copied around when Prometheus is handling queries this could cause significant memory usage increase.</p><p>Setting label_limit provides some cardinality protection, but even with just one label name and huge number of values we can see high cardinality. Passing sample_limit is the ultimate protection from high cardinality. It enables us to enforce a hard limit on the number of time series we can scrape from each application instance.</p><p>The downside of all these limits is that <b>breaching any of them will cause an error for the entire scrape</b>.</p><p>If we configure a sample_limit of 100 and our metrics response contains 101 samples, then Prometheus <b>won’t scrape anything at all</b>. This is a deliberate design decision made by Prometheus developers.</p><p>The main motivation seems to be that dealing with partially scraped metrics is difficult and you’re better off treating failed scrapes as incidents.</p>
    <div>
      <h2>How does Cloudflare deal with high cardinality?</h2>
      <a href="#how-does-cloudflare-deal-with-high-cardinality">
        
      </a>
    </div>
    <p>We have hundreds of data centers spread across the world, each with dedicated Prometheus servers responsible for scraping all metrics.</p><p>Each Prometheus is scraping a few hundred different applications, each running on a few hundred servers.</p><p>Combined that’s a lot of different metrics. It’s not difficult to accidentally cause cardinality problems and in the past we’ve dealt with a fair number of issues relating to it.</p>
    <div>
      <h3>Basic limits</h3>
      <a href="#basic-limits">
        
      </a>
    </div>
    <p>The most basic layer of protection that we deploy is scrape limits, which we enforce on all configured scrapes. These are the sane defaults that 99% of applications exporting metrics would never exceed.</p><p>By default we allow up to 64 labels on each time series, which is way more than most metrics would use.</p><p>We also limit the length of label names and values to 128 and 512 characters, which again is more than enough for the vast majority of scrapes.</p><p>Finally we do, by default, set sample_limit to 200 - so each application can export up to 200 time series without any action.</p><p>What happens when somebody wants to export more time series or use longer labels? All they have to do is set it explicitly in their scrape configuration.</p><p>Those limits are there to catch accidents and also to make sure that if any application is exporting a high number of time series (more than 200) the team responsible for it knows about it. This helps us avoid a situation where applications are exporting thousands of time series that aren’t really needed. Once you cross the 200 time series mark, you should start thinking about your metrics more.</p>
    <div>
      <h3>CI validation</h3>
      <a href="#ci-validation">
        
      </a>
    </div>
    <p>The next layer of protection is checks that run in CI (Continuous Integration) when someone makes a pull request to add new or modify existing scrape configuration for their application.</p><p>These checks are designed to ensure that we have enough capacity on all Prometheus servers to accommodate extra time series, if that change would result in extra time series being collected.</p><p>For example, if someone wants to modify sample_limit, let’s say by changing existing limit of 500 to 2,000, for a scrape with 10 targets, that’s an increase of 1,500 per target, with 10 targets that’s 10*1,500=15,000 extra time series that might be scraped. Our CI would check that all Prometheus servers have spare capacity for at least 15,000 time series before the pull request is allowed to be merged.</p><p>This gives us confidence that we won’t overload any Prometheus server after applying changes.</p>
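            <p>Conceptually the check boils down to simple arithmetic. Here is a hypothetical sketch of it in Python; the server names and spare capacity numbers are made up for illustration:</p>
            <pre><code># A sample_limit change from 500 to 2,000 on a scrape with 10 targets.
old_limit, new_limit, targets = 500, 2_000, 10
extra_series = (new_limit - old_limit) * targets  # 15,000

# Hypothetical spare capacity per Prometheus server.
spare_capacity = {"prometheus-dc1": 40_000, "prometheus-dc2": 12_000}

for server, spare in spare_capacity.items():
    verdict = "ok" if spare &gt;= extra_series else "not enough spare capacity"
    print(f"{server}: {verdict}")</code></pre>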
    <div>
      <h3>Our custom patches</h3>
      <a href="#our-custom-patches">
        
      </a>
    </div>
    <p>One of the most important layers of protection is a set of patches we maintain on top of Prometheus. There is an <a href="https://github.com/prometheus/prometheus/pull/11124">open pull request</a> on the Prometheus repository. This patchset consists of two main elements.</p><p>First is the patch that allows us to enforce a limit on the total number of time series TSDB can store at any time. There is no equivalent functionality in a standard build of Prometheus, if any scrape produces some samples they will be appended to time series inside TSDB, creating new time series if needed.</p><p>This is the standard flow with a scrape that doesn’t set any sample_limit:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4oxj5PicoVqieXzyDzlq2C/cbbf6a05d61adad33da574b05a297bde/blog-10.png" />
            
            </figure><p>With our patch we tell TSDB that it’s allowed to store up to N time series in total, from all scrapes, at any time. So when TSDB is asked to append a new sample by any scrape, it will first check how many time series are already present.</p><p>If the total number of stored time series is below the configured limit then we append the sample as usual.</p><p>The difference with standard Prometheus starts when a new sample is about to be appended, but TSDB already stores the maximum number of time series it’s allowed to have. Our patched logic will then check whether the sample we’re about to append belongs to a time series that’s already stored inside TSDB or is a new time series that needs to be created.</p><p>If the time series already exists inside TSDB then we allow the append to continue. If the time series doesn’t exist yet and our append would create it (a new memSeries instance would be created) then we skip this sample. We will also signal back to the scrape logic that some samples were skipped.</p><p>This is the modified flow with our patch:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3luOmih7ZayqWukDhgoC5S/98cade46c7b5da6bb3a85ae6877532bb/blog-11.png" />
            
            </figure><p>By running the <i>“go_memstats_alloc_bytes / prometheus_tsdb_head_series”</i> query we know how much memory we need per single time series (on average). We also know how much physical memory we have available for Prometheus on each server, which means that we can easily calculate the rough number of time series we can store inside Prometheus, taking into account the fact that there’s garbage collection overhead since Prometheus is written in Go:</p><p><i>memory available to Prometheus / bytes per time series = our capacity</i></p><p>This doesn’t capture all complexities of Prometheus but gives us a rough estimate of how many time series we can expect to have capacity for.</p><p>By setting this limit on all our Prometheus servers we know that it will never scrape more time series than we have memory for. This is the last line of defense for us that avoids the risk of the Prometheus server crashing due to lack of memory.</p><p>The second patch modifies how Prometheus handles sample_limit - with our patch instead of failing the entire scrape it simply ignores excess time series. If we have a scrape with sample_limit set to 200 and the application exposes 201 time series, then all but the final time series will be accepted.</p><p>This is the standard Prometheus flow for a scrape that has the sample_limit option set:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2d8iEKXhwkrME4gR7IYEvm/71faa4cdb0781465b35e2c303e4c4e3e/blog-9.png" />
            
            </figure><p>The entire scrape either succeeds or fails. Prometheus simply counts how many samples there are in a scrape and if that’s more than sample_limit allows it will fail the scrape.</p><p>With our custom patch we don’t care how many samples are in a scrape. Instead we count time series as we append them to TSDB. Once we have appended sample_limit samples we start to be selective.</p><p>Any excess samples (after reaching sample_limit) will only be appended if they belong to time series that are already stored inside TSDB.</p><p>The reason why we still allow appends for some samples even after we’re above sample_limit is that <b>appending samples to existing time series is cheap</b>, it’s just adding an extra timestamp &amp; value pair.</p><p><b>Creating new time series on the other hand is a lot more expensive</b> - we need to allocate new memSeries instances with a copy of all labels and keep them in memory for at least an hour.</p><p>This is how our modified flow looks:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3FstYUSQFx7G4uY1jIdlME/dbba3406fe4fcfea15410b4431bb01ca/blog-12.png" />
            
            </figure><p>Both patches give us two levels of protection.</p><p>The TSDB limit patch protects the entire Prometheus server from being overloaded by too many time series.</p><p>This is because the only way to stop time series from eating memory is to prevent them from being appended to TSDB. Once they’re in TSDB it’s already too late.</p><p>The sample_limit patch, in turn, stops individual scrapes from using too much Prometheus capacity. Without it a single scrape could create too many time series in total and exhaust the overall Prometheus capacity (enforced by the first patch), which would in turn affect all other scrapes, since some of their new time series would have to be ignored. At the same time our patch gives us graceful degradation by capping time series from each scrape to a certain level, rather than failing hard and dropping all time series from the affected scrape, which would mean losing all observability of the affected applications.</p><p>It’s also worth mentioning that without our TSDB total limit patch we could keep adding new scrapes to Prometheus and that alone could lead to exhausting all available capacity, even if each scrape had sample_limit set and scraped fewer time series than this limit allows.</p><p>Extra metrics exported by Prometheus itself tell us if any scrape is exceeding the limit, and if that happens we alert the team responsible for it.</p><p>This also has the benefit of allowing us to self-serve capacity management - there’s no need for a team that signs off on your allocations: if CI checks are passing then we have the capacity you need for your applications.</p><p>The main reason why we prefer graceful degradation is that we want our engineers to be able to deploy applications and their metrics with confidence without being subject matter experts in Prometheus. That way even the most inexperienced engineers can start exporting metrics without constantly wondering <i>“Will this cause an incident?”</i>.</p><p>Another reason is that trying to stay on top of your usage can be a challenging task. It might seem simple on the surface; after all, you just need to stop yourself from creating too many metrics, adding too many labels or setting label values from untrusted sources.</p><p>In reality though this is as simple as trying to ensure your application doesn’t use too many resources, like CPU or memory - you can achieve this by simply allocating less memory and doing fewer computations. It doesn’t get easier than that, until you actually try to do it. The more any application does for you, the more useful it is, the more resources it might need. Your needs or your customers' needs will evolve over time and so you can’t just draw a line on how many bytes or CPU cycles it can consume. If you do that, the line will eventually be redrawn, many times over.</p><p>In general, having more labels on your metrics allows you to gain more insight, and so the more complicated the application you're trying to <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitor</a>, the greater the need for extra labels.</p><p>In addition, in most cases we don’t see all possible label values at the same time; it’s usually a small subset of all possible combinations. For example our errors_total metric, which we used in an earlier example, might not be present at all until we start seeing some errors, and even then it might be just one or two errors that will be recorded. This holds true for a lot of the labels we see engineers using.</p><p>This means that looking at how many time series an application could potentially export, and how many it actually exports, gives us two completely different numbers, which makes capacity planning a lot harder.</p><p>This is especially true when dealing with big applications maintained in part by multiple different teams, each exporting some metrics from their part of the stack.</p><p>For that reason we do tolerate some percentage of short lived time series even if they are not a perfect fit for Prometheus and cost us more memory.</p>
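            <p>Coming back to the sample_limit mechanism mentioned above, here’s a minimal sketch of what setting a per-scrape limit looks like in a standard Prometheus scrape configuration (the job name and the limit value are only examples; note that upstream Prometheus fails the whole scrape once the limit is exceeded, whereas our patch caps the scrape and keeps the rest):</p>
            <pre><code>scrape_configs:
  - job_name: webserver
    static_configs:
      - targets: ['localhost:8080']
    # Upper bound on the number of samples a single scrape may expose.
    # The value here is illustrative - pick one based on your own capacity planning.
    sample_limit: 10000</code></pre>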
    <div>
      <h3>Documentation</h3>
      <a href="#documentation">
        
      </a>
    </div>
    <p>Finally we maintain a set of internal documentation pages that try to guide engineers through the process of scraping and working with metrics, with a lot of information that’s specific to our environment.</p><p>Prometheus and PromQL (Prometheus Query Language) are conceptually very simple, but this means that all the complexity is hidden in the interactions between different elements of the whole metrics pipeline.</p><p>Managing the entire lifecycle of a metric from an engineering perspective is a complex process.</p><p>You must define your metrics in your application, with names and labels that will allow you to work with the resulting time series easily. Then you must configure Prometheus scrapes in the correct way and deploy that to the right Prometheus server. Next you will likely need to create recording and/or alerting rules to make use of your time series. Finally you will want to create a dashboard to visualize all your metrics and be able to spot trends.</p><p>There will be traps and room for mistakes at all stages of this process. We covered some of the most basic pitfalls in our previous blog post on Prometheus - <a href="/monitoring-our-monitoring/">Monitoring our monitoring</a>. In the same blog post we also mention one of the tools we use to help our engineers write valid Prometheus alerting rules.</p><p>Having good internal documentation that covers all of the basics specific to our environment and the most common tasks is very important. Being able to answer <i>“How do I X?”</i> yourself without having to wait for a subject matter expert allows everyone to be more productive and move faster, while also saving Prometheus experts from answering the same questions over and over again.</p>
    <div>
      <h2>Closing thoughts</h2>
      <a href="#closing-thoughts">
        
      </a>
    </div>
    <p>Prometheus is a great and reliable tool, but dealing with high cardinality issues, especially in an environment where a lot of different applications are scraped by the same Prometheus server, can be challenging.</p><p>We had our fair share of problems with overloaded Prometheus instances in the past and developed a number of tools that help us deal with them, including custom patches.</p><p>But the key to tackling high cardinality was better understanding how Prometheus works and what kind of usage patterns will be problematic.</p><p>Having better insight into Prometheus internals allows us to maintain a fast and reliable observability platform without too much red tape, and the tooling we’ve developed around it, some of which is <a href="https://github.com/cloudflare/pint">open sourced</a>, helps our engineers avoid the most common pitfalls and deploy with confidence.</p>
            <category><![CDATA[Prometheus]]></category>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">oXX2cdLdXFvg8ZtUNSaqk</guid>
            <dc:creator>Lukasz Mierzwa</dc:creator>
        </item>
        <item>
            <title><![CDATA[Monitoring our monitoring: how we validate our Prometheus alert rules]]></title>
            <link>https://blog.cloudflare.com/monitoring-our-monitoring/</link>
            <pubDate>Thu, 19 May 2022 15:39:19 GMT</pubDate>
            <description><![CDATA[ Pint is a tool we developed to validate our Prometheus alerting rules and ensure they are always working ]]></description>
            <content:encoded><![CDATA[ <p></p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>We use <a href="https://prometheus.io/">Prometheus</a> as our core monitoring system. We’ve been heavy Prometheus users since 2017 when we migrated off our previous monitoring system which used a customized <a href="https://www.nagios.org/">Nagios</a> setup. Despite growing our infrastructure a lot, adding tons of new products and learning some hard lessons about operating Prometheus at scale, our original architecture of Prometheus (see <a href="https://www.youtube.com/watch?v=ypWwvz5t_LE">Monitoring Cloudflare's Planet-Scale Edge Network with Prometheus</a> for an in depth walk through) remains virtually unchanged, proving that Prometheus is a solid foundation for building <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> into your services.</p><p>One of the key responsibilities of Prometheus is to alert us when something goes wrong and in this blog post we’ll talk about how we make those alerts more reliable - and we’ll introduce an open source tool we’ve developed to help us with that, and share how you can use it too. If you’re not familiar with Prometheus you might want to start by watching <a href="https://www.youtube.com/watch?v=PzFUwBflXYc">this video</a> to better understand the topic we’ll be covering here.</p><p>Prometheus works by collecting metrics from our services and storing those metrics inside its database, called TSDB. We can then <a href="https://prometheus.io/docs/prometheus/latest/querying/basics/">query these metrics</a> using Prometheus query language called PromQL using ad-hoc queries (for example to power <a href="https://grafana.com/grafana/">Grafana dashboards</a>) or via <a href="https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/">alerting</a> or <a href="https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/">recording</a> rules. A rule is basically a query that Prometheus will run for us in a loop, and when that query returns any results it will either be recorded as new metrics (with recording rules) or trigger alerts (with alerting rules).</p>
    <div>
      <h3>Prometheus alerts</h3>
      <a href="#prometheus-alerts">
        
      </a>
    </div>
    <p>Since we’re talking about improving our alerting we’ll be focusing on alerting rules.</p><p>To create alerts we first need to have some metrics collected. For the purposes of this blog post let’s assume we’re working with the http_requests_total metric, which is used on the examples <a href="https://prometheus.io/docs/prometheus/latest/querying/examples/">page</a>. Here are some examples of how our metrics will look:</p>
            <pre><code>http_requests_total{job="myserver", handler="/", method="get", status="200"}
http_requests_total{job="myserver", handler="/", method="get", status="500"}
http_requests_total{job="myserver", handler="/posts", method="get", status="200"}
http_requests_total{job="myserver", handler="/posts", method="get", status="500"}
http_requests_total{job="myserver", handler="/posts/new", method="post", status="201"}
http_requests_total{job="myserver", handler="/posts/new", method="post", status="401"}</code></pre>
            <p>Let’s say we want to alert if our HTTP server is returning errors to customers.</p><p>Since all we need to do is check our metric that tracks how many responses with HTTP status code 500 there were, a simple alerting rule could look like this:</p>
            <pre><code>- alert: Serving HTTP 500 errors
  expr: http_requests_total{status="500"} &gt; 0</code></pre>
            <p>This will alert us if we have any 500 errors served to our customers. Prometheus will run our query looking for a time series named http_requests_total that also has a <b>status</b> label with value <b>“500”</b>. Then it will filter all those matched time series and only return ones with a value greater than zero.</p><p>If our alert rule returns any results an alert will be triggered, one for each returned result.</p><p>If our rule doesn’t return anything, meaning there are no matched time series, then the alert will not trigger.</p><p>The whole flow from metric to alert is pretty simple here as we can see on the diagram below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57mqOPxVQ847pKAkMden9K/8ba472d0359bc3fcfd09535cc04e82b7/1-3.png" />
            
            </figure><p>If we want to provide more information in the alert we can do so by setting additional labels and annotations, but the alert and expr fields are all we need to get a working rule.</p><p>But the problem with the above rule is that our alert starts when we have our first error, and then it will never go away.</p><p>After all, our http_requests_total is a counter, so it gets incremented every time there’s a new request, which means that it will keep growing as we receive more requests. What this means for us is that our alert is really telling us “was there ever a 500 error?” and even if we fix the problem causing 500 errors we’ll keep getting this alert.</p><p>A better alert would be one that tells us if we’re serving errors <b>right now</b>.</p><p>For that we can use the <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#rate">rate()</a> function to calculate the per-second rate of errors.</p><p>Our modified alert would be:</p>
            <pre><code>- alert: Serving HTTP 500 errors
  expr: rate(http_requests_total{status="500"}[2m]) &gt; 0</code></pre>
            <p>The query above will calculate the rate of 500 errors in the last two minutes. If we start responding with errors to customers our alert will fire, but once errors stop so will this alert.</p><p>This is great because if the underlying issue is resolved the alert will resolve too.</p><p>We can improve our alert further by, for example, alerting on the percentage of errors, rather than absolute numbers, or even calculating an error budget, but let’s stop here for now.</p><p>It’s all very simple, so what do we mean when we talk about improving the reliability of alerting? What could go wrong here?</p>
    <div>
      <h3>What could go wrong?</h3>
      <a href="#what-could-go-wrong">
        
      </a>
    </div>
    <p>We can craft a valid YAML file with a rule definition that has a perfectly valid query that will simply not work the way we expect it to. When it comes to alerting rules, that might mean the alert we rely upon to tell us when something is not working correctly will fail to alert us when it should. To better understand why that might happen, let’s first explain how querying works in Prometheus.</p>
    <div>
      <h3>Prometheus querying basics</h3>
      <a href="#prometheus-querying-basics">
        
      </a>
    </div>
    <p>There are two basic types of queries we can run against Prometheus. The first one is an <b>instant query</b>. It allows us to ask Prometheus for a point-in-time value of some time series. If we write our query as http_requests_total we’ll get all time series named http_requests_total along with the most recent value for each of them. We can further customize the query and filter results by adding label matchers, like http_requests_total{status="500"}.</p><p>Let’s say we have two instances of our server, green and red, each one scraped (Prometheus collects metrics from it) every minute (independently of the other).</p><p>This is what happens when we issue an instant query:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KyKw0PTiNywwYY6euDbuH/7910f018372888b78a4609d16395f1d4/2-fixed.png" />
            
            </figure><p>There’s obviously more to it as we can use <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/">functions</a> and build complex queries that utilize multiple metrics in one expression. But for the purposes of this blog post we’ll stop here.</p><p>The important thing to know about instant queries is that they return the most recent value of a matched time series, and they will look back for up to five minutes (by default) into the past to find it. If the last value is older than five minutes then it’s considered stale and Prometheus won’t return it anymore.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4V3s3dupV4RHtWcAwbPpYd/bd14209ea0ebb73807da506422ccc5f3/3-fixed.png" />
            
            </figure><p>The second type of query is a <b>range query</b> - it works similarly to instant queries; the difference is that instead of returning the most recent value it gives us a list of values from the selected time range. That time range is always relative, so instead of providing two timestamps we provide a range, like “20 minutes”. When we ask for a range query with a 20 minute range it will return all values collected for matching time series from 20 minutes ago until now.</p><p>An important distinction between those two types of queries is that range queries don’t have the same “look back for up to five minutes” behavior as instant queries. If Prometheus cannot find any values collected in the provided time range then it doesn’t return anything.</p><p>If we modify our example to request a [3m] range query we should expect Prometheus to return three data points for each time series:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1dBwnYThdf3ioljbPdmSKV/54b428335adffe68f87cd40ff80e3ecc/4-fixed.png" />
            
            </figure>
    <div>
      <h3>When queries don’t return anything</h3>
      <a href="#when-queries-dont-return-anything">
        
      </a>
    </div>
    <p>Knowing a bit more about how queries work in Prometheus we can go back to our alerting rules and spot a potential problem: queries that don’t return anything.</p><p>If our query doesn’t match any time series or if they’re considered stale then Prometheus will return an empty result. This might be because we’ve made a typo in the metric name or label filter, the metric we ask for is no longer being exported, or it was never there in the first place, or we’ve added some condition that wasn’t satisfied, like requiring the value to be non-zero in our http_requests_total{status="500"} &gt; 0 example.</p><p>Prometheus will not return any error in any of the scenarios above because none of them are really problems, it’s just how querying works. If you ask for something that doesn’t match your query then you get empty results. This means that there’s no distinction between “all systems are operational” and “you’ve made a typo in your query”. So if you’re not receiving any alerts from your service it’s either a sign that everything is working fine, or that you’ve made a typo, and you have no working monitoring at all, and it’s up to you to verify which one it is.</p><p>For example, we could be trying to query for http_requests_totals instead of http_requests_total (an extra “s” at the end) and although our query will look fine it won’t ever produce any alert.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2p2hNra24jqNgnfeaaLZ6S/9a07d508dcf2c6a1417fe5b3dc4ad46b/5-1.png" />
            
            </figure><p>Range queries can add another twist - they’re mostly used in Prometheus functions like rate(), which we used in our example. This function will only work correctly if it receives a range query expression that returns at least two data points for each time series; after all, it’s impossible to calculate a rate from a single number.</p><p>Since the number of data points depends on the time range we passed to the range query, which we then pass to our rate() function, if we provide a time range that only contains a single value then rate() won’t be able to calculate anything and once again we’ll get empty results.</p><p>The number of values collected in a given time range depends on the interval at which Prometheus collects all metrics, so to use rate() correctly you need to know how your Prometheus server is configured. You can read more about this <a href="https://www.robustperception.io/what-range-should-i-use-with-rate">here</a> and <a href="https://promlabs.com/blog/2021/01/29/how-exactly-does-promql-calculate-rates">here</a> if you want to better understand how rate() works in Prometheus.</p><p>For example if we collect our metrics every minute then a range query http_requests_total[1m] will be able to find only one data point. Here’s a reminder of how this looks:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ghp7yB5qlTe23SxtTKEm3/c66a1cbe0687303ea58e672783bfdef9/6-fixed.png" />
            
            </figure><p>Since, as we mentioned before, we can only calculate rate() if we have at least two data points, calling rate(http_requests_total[1m]) will never return anything and so our alerts will never work.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1xBQkUPr4OsGRpAbeXXtmc/093488f9d4c3c129897cb0dc62190031/7.png" />
            
            </figure><p>There are more potential problems we can run into when writing Prometheus queries, for example any operations between two metrics will only work if both have the same set of labels - you can read about this <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/#vector-matching">here</a>. But we’ll stop here for now; listing all the gotchas could take a while. The point to remember is simple: if your alerting query doesn’t return anything then it might be that everything is ok and there’s no need to alert, but it might also be that you’ve mistyped your metric name, your label filter cannot match anything, your metric disappeared from Prometheus, you are using too small a time range for your range queries, etc.</p>
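            <p>One way to avoid the “too small time range” problem from the list above is to follow the rule of thumb suggested in the articles linked earlier and make the range a few times larger than the scrape interval, so that rate() still has enough samples even if a scrape is delayed or missed. A sketch, assuming a one minute scrape interval:</p>
            <pre><code># With a 1m scrape interval a [1m] range will usually contain a single sample,
# so rate() has nothing to work with. A wider window such as [4m] leaves room
# for the occasional missed or delayed scrape.
- alert: Serving HTTP 500 errors
  expr: rate(http_requests_total{status="500"}[4m]) &gt; 0</code></pre>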
    <div>
      <h3>Renaming metrics can be dangerous</h3>
      <a href="#renaming-metrics-can-be-dangerous">
        
      </a>
    </div>
    <p>We’ve been running Prometheus for a few years now and during that time we’ve grown our collection of alerting rules a lot. Plus we keep adding new products or modifying existing ones, which often includes adding and removing metrics, or modifying existing metrics, which may include renaming them or changing what labels are present on these metrics.</p><p>A lot of metrics come from metrics exporters maintained by the Prometheus community, like <a href="https://github.com/prometheus/node_exporter">node_exporter</a>, which we use to gather some operating system metrics from all of our servers. Those exporters also undergo changes which might mean that some metrics are deprecated and removed, or simply renamed.</p><p>A problem we’ve run into a few times is that sometimes our alerting rules wouldn’t be updated after such a change, for example when we upgraded node_exporter across our fleet. Or the addition of a new label on some metrics would suddenly cause Prometheus to no longer return anything for some of the alerting queries we have, making such an alerting rule no longer useful.</p><p>It’s worth noting that Prometheus does have a way of <a href="https://prometheus.io/docs/prometheus/latest/configuration/unit_testing_rules/">unit testing rules</a>, but since it works on mocked data it’s mostly useful to validate the logic of a query. Unit testing won’t tell us if, for example, a metric we rely on suddenly disappeared from Prometheus.</p>
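            <p>For completeness, here’s a minimal sketch of what such a unit test looks like with promtool, assuming the “Serving HTTP 500 errors” rule from earlier is saved in rules.yml (the series values and timings below are made up purely for illustration):</p>
            <pre><code># rules_test.yml - run with: promtool test rules rules_test.yml
rule_files:
  - rules.yml

evaluation_interval: 1m

tests:
  - interval: 1m
    # Synthetic counter that starts at 0 and grows by 10 every minute
    input_series:
      - series: 'http_requests_total{job="myserver", handler="/", method="get", status="500"}'
        values: '0+10x10'
    alert_rule_test:
      - eval_time: 5m
        alertname: Serving HTTP 500 errors
        exp_alerts:
          - exp_labels:
              job: myserver
              handler: /
              method: get
              status: "500"</code></pre>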
    <div>
      <h3>Chaining rules</h3>
      <a href="#chaining-rules">
        
      </a>
    </div>
    <p>When writing alerting rules we try to limit <a href="https://en.wikipedia.org/wiki/Alarm_fatigue">alert fatigue</a> by ensuring that, among many things, alerts are only generated when there’s an action needed, they clearly describe the problem that needs addressing, they have a link to a runbook and a dashboard, and finally that we aggregate them as much as possible. This means that a lot of the alerts we have won’t trigger for each individual instance of a service that’s affected, but rather once per data center or even globally.</p><p>For example, we might alert if the rate of HTTP errors in a datacenter is above 1% of all requests. To do that we first need to calculate the overall rate of errors across all instances of our server. For that we would use a <a href="https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/">recording rule</a>:</p>
            <pre><code>- record: job:http_requests_total:rate2m
  expr: sum(rate(http_requests_total[2m])) without(method, status, instance)

- record: job:http_requests_status500:rate2m
  expr: sum(rate(http_requests_total{status="500"}[2m])) without(method, status, instance)</code></pre>
            <p>The first rule tells Prometheus to calculate the per-second rate of all requests and sum it across all instances of our server. The second rule does the same but only sums time series with status labels equal to “500”. Both rules will produce new metrics named after the value of the <b>record</b> field.</p><p>Now we can modify our alert rule to use those new metrics we’re generating with our recording rules:</p>
            <pre><code>- alert: Serving HTTP 500 errors
  expr: job:http_requests_status500:rate2m / job:http_requests_total:rate2m &gt; 0.01</code></pre>
            <p>If we have a data center wide problem then we will raise just one alert, rather than one per instance of our server, which can be a great quality of life improvement for our on-call engineers.</p><p>But at the same time we’ve added two new rules that we need to maintain and ensure they produce results. To make things more complicated we could have recording rules producing metrics based on other recording rules, and then we have even more rules that we need to ensure are working correctly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3j3cCrMguixvR5m9ZTnhPH/b38180795e05fd9d266f3e90d0e731d7/8.png" />
            
            </figure><p>What if all those rules in our chain are maintained by different teams? What if the rule in the middle of the chain suddenly gets renamed because that’s needed by one of the teams? Problems like that can easily crop up now and then if your environment is sufficiently complex, and when they do, they’re not always obvious, after all the only sign that something stopped working is, well, silence - your alerts no longer trigger. If you’re lucky you’re plotting your metrics on a dashboard somewhere and hopefully someone will notice if they become empty, but it’s risky to rely on this.</p><p>We definitely felt that we needed something better than hope.</p>
    <div>
      <h3>Introducing pint: a Prometheus rule linter</h3>
      <a href="#introducing-pint-a-prometheus-rule-linter">
        
      </a>
    </div>
    <p>To avoid running into such problems in the future we’ve decided to write a tool that would help us do a better job of testing our alerting rules against live Prometheus servers, so we can spot missing metrics or typos more easily. We also wanted to allow new engineers, who might not necessarily have all the in-depth knowledge of how Prometheus works, to be able to write rules with confidence without having to get feedback from more experienced team members.</p><p>Since we believe that such a tool will have value for the entire Prometheus community we’ve open-sourced it, and it’s available for anyone to use - say hello to pint!</p><p>You can find the sources on <a href="https://github.com/cloudflare/pint">GitHub</a>, and there’s also <a href="https://cloudflare.github.io/pint/">online documentation</a> that should help you get started.</p><p>Pint works in three different ways:</p><ul><li><p>You can run it against one or more files with Prometheus rules</p></li><li><p>It can run as a part of your CI pipeline</p></li><li><p>Or you can deploy it as a side-car to all your Prometheus servers</p></li></ul><p>It doesn’t require any configuration to run, but in most cases it will provide the most value if you create a configuration file for it and define some Prometheus servers it should use to validate all rules against. Running without any configured Prometheus servers will limit it to static analysis of all the rules, which can identify a range of problems, but won’t tell you if your rules are trying to query non-existent metrics.</p><p>The first mode is where pint reads a file (or a directory containing multiple files), parses it, does all the basic syntax checks and then runs a series of checks for all Prometheus rules in those files.</p><p>The second mode is optimized for validating Git-based pull requests. Instead of testing all rules from all files pint will only test rules that were modified and report only problems affecting modified lines.</p><p>The third mode is where pint runs as a daemon and tests all rules on a regular basis. If it detects any problem it will expose those problems as metrics. You can then collect those metrics using Prometheus and alert on them as you would for any other problems. This way you can basically use Prometheus to monitor itself.</p><p>What kind of checks can it run for us and what kind of problems can it detect?</p><p>All the checks are documented <a href="https://cloudflare.github.io/pint/checks/">here</a>, along with some tips on how to deal with any detected problems. Let’s cover the most important ones briefly.</p><p>As mentioned above the main motivation was to catch rules that try to query metrics that are missing or where the query was simply mistyped. To do that pint will run each query from every alerting and recording rule to see if it returns any result; if it doesn’t, it will break down the query to identify all the individual metrics and check for the existence of each of them. If any of them is missing, or if the query tries to filter using labels that aren’t present on any time series for a given metric, then it will report that back to us.</p><p>So if someone tries to add a new alerting rule with the http_requests_totals typo in it, pint will detect that when running CI checks on the pull request and stop it from being merged. This takes care of validating rules as they are being added to our configuration management system.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pVy8R1PRoDvIt8jgooQJu/0bcf4d8f15e4eec4a61a854603153ffc/9.png" />
            
            </figure><p>Another useful check will try to estimate the number of times a given alerting rule would trigger an alert. This is useful when raising a pull request that adds new alerting rules - nobody wants to be flooded with alerts from a rule that’s too sensitive, so having this information on a pull request allows us to spot rules that could lead to alert fatigue.</p><p>Similarly, another check will provide information on how many new time series a recording rule adds to Prometheus. In our setup a single unique time series uses, on average, 4KiB of memory. So if a recording rule generates 10 thousand new time series it will increase Prometheus server memory usage by 10000*4KiB=40MiB. 40 megabytes might not sound like much, but our peak time series usage in the last year was around 30 million time series in a single Prometheus server, so we pay attention to anything that might add a substantial amount of new time series, which pint helps us notice before such a rule gets added to Prometheus.</p><p>On top of all the Prometheus query checks, pint also allows us to ensure that all the alerting rules comply with some policies we’ve set for ourselves. For example, we require everyone to write a runbook for their alerts and link to it in the alerting rule using annotations.</p><p>We also require all alerts to have priority labels, so that high priority alerts generate pages for responsible teams, while low priority ones are only routed to the <a href="https://github.com/prymitive/karma">karma dashboard</a> or create tickets using <a href="https://github.com/prometheus-community/jiralert">jiralert</a>. It’s easy to forget about one of these required fields and that’s not something which can be enforced using unit testing, but pint allows us to do that with a few configuration lines, as sketched below.</p><p>With pint running on all stages of our Prometheus rule life cycle, from initial pull request to monitoring rules deployed in our many data centers, we can rely on our Prometheus alerting rules to always work and notify us of any incident, large or small.</p><p>GitHub: <a href="https://github.com/cloudflare/pint">https://github.com/cloudflare/pint</a></p>
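            <p>For illustration, a configuration sketch along the lines of pint’s documented rule settings could look like this - the annotation and label names and the severity values below are just examples of the kind of policies we’re describing, so check pint’s documentation for the exact options available in your version:</p>
            <pre><code>rule {
  # Only apply these checks to alerting rules
  match {
    kind = "alerting"
  }
  # Every alert must link to a runbook
  annotation "runbook_url" {
    required = true
    severity = "bug"
  }
  # Every alert must declare a priority so it can be routed correctly
  label "priority" {
    required = true
    value    = "(1|2|3|4|5)"
    severity = "bug"
  }
}</code></pre>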
    <div>
      <h3>Putting it all together</h3>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>Let’s see how we can use pint to validate our rules as we work on them.</p><p>We can begin by creating a file called “rules.yml” and adding both recording rules there.</p><p>The goal is to write new rules that we want to add to Prometheus, but before we actually add those, we want pint to validate it all for us.</p>
            <pre><code>groups:
- name: Demo recording rules
  rules:
  - record: job:http_requests_total:rate2m
    expr: sum(rate(http_requests_total[2m])) without(method, status, instance)

  - record: job:http_requests_status500:rate2m
    expr: sum(rate(http_requests_total{status="500"}[2m]) without(method, status, instance)</code></pre>
            <p>Next we’ll download the latest version of pint from <a href="https://github.com/cloudflare/pint/releases">GitHub</a> and run it to check our rules.</p>
            <pre><code>$ pint lint rules.yml 
level=info msg="File parsed" path=rules.yml rules=2
rules.yml:8: syntax error: unclosed left parenthesis (promql/syntax)
    expr: sum(rate(http_requests_total{status="500"}[2m]) without(method, status, instance)

level=info msg="Problems found" Fatal=1
level=fatal msg="Execution completed with error(s)" error="problems found"</code></pre>
            <p>Whoops, we have “sum(rate(...)” and so we’re missing one of the closing brackets. Let’s fix that and try again.</p>
            <pre><code>groups:
- name: Demo recording rules
  rules:
  - record: job:http_requests_total:rate2m
    expr: sum(rate(http_requests_total[2m])) without(method, status, instance)

  - record: job:http_requests_status500:rate2m
    expr: sum(rate(http_requests_total{status="500"}[2m])) without(method, status, instance)</code></pre>
            
            <pre><code>$ pint lint rules.yml 
level=info msg="File parsed" path=rules.yml rules=2</code></pre>
            <p>Our rule now passes the most basic checks, so we know it’s valid. But to know if it works with a real Prometheus server we need to tell pint how to talk to Prometheus. For that we’ll need a config file that defines a Prometheus server we test our rule against, it should be the same server we’re planning to deploy our rule to. Here we’ll be using a test instance running on localhost. Let’s create a “pint.hcl” file and define our Prometheus server there:</p>
            <pre><code>prometheus "prom1" {
  uri     = "http://localhost:9090"
  timeout = "1m"
}</code></pre>
            <p>Now we can re-run our check using this configuration file:</p>
            <pre><code>$ pint -c pint.hcl lint rules.yml 
level=info msg="Loading configuration file" path=pint.hcl
level=info msg="File parsed" path=rules.yml rules=2
rules.yml:5: prometheus "prom1" at http://localhost:9090 didn't have any series for "http_requests_total" metric in the last 1w (promql/series)
    expr: sum(rate(http_requests_total[2m])) without(method, status, instance)

rules.yml:8: prometheus "prom1" at http://localhost:9090 didn't have any series for "http_requests_total" metric in the last 1w (promql/series)
    expr: sum(rate(http_requests_total{status="500"}[2m])) without(method, status, instance)

level=info msg="Problems found" Bug=2
level=fatal msg="Execution completed with error(s)" error="problems found"</code></pre>
            <p>Yikes! It’s a test Prometheus instance, and we forgot to collect any metrics from it.</p><p>Let’s fix that by starting our server locally on port 8080 and configuring Prometheus to collect metrics from it:</p>
            <pre><code>scrape_configs:
  - job_name: webserver
    static_configs:
      - targets: ['localhost:8080']</code></pre>
            <p>Let’s re-run our checks once more:</p>
            <pre><code>$ pint -c pint.hcl lint rules.yml 
level=info msg="Loading configuration file" path=pint.hcl
level=info msg="File parsed" path=rules.yml rules=2</code></pre>
            <p>This time everything works!</p><p>Now let’s add our alerting rule to our file, so it now looks like this:</p>
            <pre><code>groups:
- name: Demo recording rules
  rules:
  - record: job:http_requests_total:rate2m
    expr: sum(rate(http_requests_total[2m])) without(method, status, instance)

  - record: job:http_requests_status500:rate2m
    expr: sum(rate(http_requests_total{status="500"}[2m])) without(method, status, instance)

- name: Demo alerting rules
  rules:
  - alert: Serving HTTP 500 errors
    expr: job:http_requests_status500:rate2m / job:http_requests_total:rate2m &gt; 0.01</code></pre>
            <p>And let’s re-run pint once again:</p>
            <pre><code>$ pint -c pint.hcl lint rules.yml 
level=info msg="Loading configuration file" path=pint.hcl
level=info msg="File parsed" path=rules.yml rules=3
rules.yml:13: prometheus "prom1" at http://localhost:9090 didn't have any series for "job:http_requests_status500:rate2m" metric in the last 1w but found recording rule that generates it, skipping further checks (promql/series)
    expr: job:http_requests_status500:rate2m / job:http_requests_total:rate2m &gt; 0.01

rules.yml:13: prometheus "prom1" at http://localhost:9090 didn't have any series for "job:http_requests_total:rate2m" metric in the last 1w but found recording rule that generates it, skipping further checks (promql/series)
    expr: job:http_requests_status500:rate2m / job:http_requests_total:rate2m &gt; 0.01

level=info msg="Problems found" Information=2</code></pre>
            <p>It all works according to pint, and so we now can safely deploy our new rules file to Prometheus.</p><p>Notice that pint recognised that both metrics used in our alert come from recording rules, which aren’t yet added to Prometheus, so there’s no point querying Prometheus to verify if they exist there.</p><p>Now what happens if we deploy a new version of our server that renames the “status” label to something else, like “code”?</p>
            <pre><code>$ pint -c pint.hcl lint rules.yml 
level=info msg="Loading configuration file" path=pint.hcl
level=info msg="File parsed" path=rules.yml rules=3
rules.yml:8: prometheus "prom1" at http://localhost:9090 has "http_requests_total" metric but there are no series with "status" label in the last 1w (promql/series)
    expr: sum(rate(http_requests_total{status="500"}[2m])) without(method, status, instance)

rules.yml:13: prometheus "prom1" at http://localhost:9090 didn't have any series for "job:http_requests_status500:rate2m" metric in the last 1w but found recording rule that generates it, skipping further checks (promql/series)
    expr: job:http_requests_status500:rate2m / job:http_requests_total:rate2m &gt; 0.01

level=info msg="Problems found" Bug=1 Information=1
level=fatal msg="Execution completed with error(s)" error="problems found"</code></pre>
            <p>Luckily pint will notice this and report it, so we can adapt our rule to match the new name.</p><p>But what if that happens after we deploy our rule? For that we can use the “pint watch” command that runs pint as a daemon, periodically checking all rules.</p><p>Please note that validating all metrics used in a query will eventually produce some false positives. In our example, metrics with the status="500" label might not be exported by our server until there’s at least one request ending in an HTTP 500 error.</p><p>The <a href="https://cloudflare.github.io/pint/checks/promql/series.html">promql/series</a> check responsible for validating the presence of all metrics has some documentation on how to deal with this problem. In most cases you’ll want to add a comment that instructs pint to ignore some missing metrics entirely or stop checking label values (only check that a “status” label is present, without checking whether there are time series with status="500").</p>
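            <p>As a rough sketch of what such a comment looks like (check the promql/series documentation linked above for the exact control comment syntax supported by your pint version), you can attach it to the rule itself:</p>
            <pre><code>- alert: Serving HTTP 500 errors
  # Tell pint not to check whether this metric currently exists in Prometheus
  # pint disable promql/series(http_requests_total)
  expr: rate(http_requests_total{status="500"}[2m]) &gt; 0</code></pre>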
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>Prometheus metrics don’t follow any strict schema; whatever services expose will be collected. At the same time a lot of problems with queries hide behind empty results, which makes noticing these problems non-trivial.</p><p>We use pint to find such problems and report them to engineers, so that our global network is always monitored correctly, and we have confidence that a lack of alerts reflects how reliable our infrastructure is.</p> ]]></content:encoded>
            <category><![CDATA[Monitoring]]></category>
            <category><![CDATA[Prometheus]]></category>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">YZO15EI2Aw87tTU31VtF0</guid>
            <dc:creator>Lukasz Mierzwa</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving your monitoring setup by integrating Cloudflare’s analytics data into Prometheus and Grafana]]></title>
            <link>https://blog.cloudflare.com/improving-your-monitoring-setup-by-integrating-cloudflares-analytics-data-into-prometheus-and-grafana/</link>
            <pubDate>Thu, 20 May 2021 13:00:15 GMT</pubDate>
            <description><![CDATA[ Here at Labyrinth Labs, we put great emphasis on monitoring. Having a working monitoring setup is a critical part of the work we do for our clients.
 ]]></description>
            <content:encoded><![CDATA[ <p><i>The following is a guest post by Martin Hauskrecht, DevOps Engineer at Labyrinth Labs.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1No9XvnguRN5bpcRgbSZBe/5e776b72c64573b51921171da88be6c1/image1-7.png" />
            
            </figure><p>Here at Labyrinth Labs, we put great emphasis on monitoring. Having a working monitoring setup is a critical part of the work we do for our clients.</p><p>Cloudflare's Analytics dashboard provides a lot of useful information for debugging and analytics purposes for our customer Pixel Federation. However, it doesn’t automatically integrate with existing monitoring tools such as Grafana and Prometheus, which our DevOps engineers use every day to monitor our infrastructure.</p><p>Cloudflare provides a Logs API, but the amount of logs we’d need to analyze is so vast, it would be simply inefficient and too pricey to do so. Luckily, Cloudflare already does the hard work of aggregating our thousands of events per second and exposes them in an <a href="https://developers.cloudflare.com/analytics/graphql-api/">easy-to-use API</a>.</p><p>Having Cloudflare’s data from our zones integrated with other systems’ metrics would give us a better understanding of our systems and the ability to correlate metrics and create more useful alerts, making our Day-2 operations (e.g. debugging incidents or analyzing the usage of our systems) more efficient.</p><p>Since our monitoring stack is primarily based on Prometheus and Grafana, we decided to implement our own Prometheus exporter that pulls data from Cloudflare’s GraphQL Analytics API.</p>
    <div>
      <h3>Design</h3>
      <a href="#design">
        
      </a>
    </div>
    <p>Based on current cloud trends and our intention to use the exporter in Kubernetes, writing the code in Go was the obvious choice. Cloudflare provides an <a href="https://github.com/cloudflare/cloudflare-go">API SDK for Golang</a>, so the common API tasks were made easy to start with.</p><p>We take advantage of Cloudflare’s GraphQL API to obtain analytics data about each of our zones and transform them into Prometheus metrics that are then exposed on a metrics endpoint.</p><p>We are able to obtain data about the total number and rate of requests, bandwidth, cache utilization, threats, SSL usage, and HTTP response codes. In addition, we are also able to monitor what type of content is being transmitted and what countries and locations the requests originate from.</p><p>All of this information is provided through the <i>http1mGroups</i> node in Cloudflare’s GraphQL API. If you want to see what Datasets are available, you can find a brief description at <a href="https://developers.cloudflare.com/analytics/graphql-api/features/data-sets">https://developers.cloudflare.com/analytics/graphql-api/features/data-sets</a>.</p><p>On top of all of these, we can also obtain data for Cloudflare’s data centers. Our graphs can easily show the distribution of traffic among them, further helping in our evaluations. The data is obtained from the <code><i>httpRequestsAdaptiveGroups</i></code> node in GraphQL.</p><p>After running the queries against the GraphQL API, we simply format the results to follow the Prometheus metrics format and expose them on the /metrics endpoint. To make things faster, we use Goroutines and make the requests in parallel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1d6eIT9Tv7F4p5iLP4bYQR/1fc81995d3304539456ace57170af022/image4-9.png" />
            
            </figure>
    <div>
      <h3>Deployment</h3>
      <a href="#deployment">
        
      </a>
    </div>
    <p>Our primary intention was to use the exporter in Kubernetes. Therefore, it comes with a <a href="https://hub.docker.com/repository/docker/lablabs/cloudflare_exporter">Docker image</a> and <a href="https://github.com/lablabs/cloudflare-exporter/tree/master/charts/cloudflare-exporter">Helm chart</a> to make deployments easier. You might need to adjust the Service annotations to match your Prometheus configuration.</p><p>The exporter itself exposes the gathered metrics on the /metrics endpoint. Therefore setting the Prometheus annotations either on the pod or a Kubernetes service will do the job.</p>
            <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/scrape: "true"</code></pre>
            <p>We plan on adding a Prometheus ServiceMonitor to the Helm chart to make scraping the exporter even easier for those who use the Prometheus operator in Kubernetes.</p><p>The configuration is quite easy, you just provide your API email and key. Optionally you can limit the scraping to selected zones only. Refer to our docs in the <a href="https://github.com/lablabs/cloudflare-exporter">GitHub repo</a> or see the example below.</p>
            <pre><code> env:
   - name: CF_API_EMAIL
     value: &lt;YOUR_API_EMAIL&gt;
   - name: CF_API_KEY
     value: &lt;YOUR_API_KEY&gt;

  # Optionally, you can filter zones by adding IDs following the example below.
  # - name: ZONE_XYZ
  #   value: &lt;zone_id&gt;</code></pre>
            <p>To deploy the exporter with Helm you simply need to run:</p>
            <pre><code>helm repo add lablabs-cloudflare-exporter https://lablabs.github.io/cloudflare-exporter
helm repo update

helm install cloudflare-exporter lablabs-cloudflare-exporter/cloudflare-exporter \
--set env[0].CF_API_EMAIL=&lt;API_EMAIL&gt; \
--set env[1].CF_API_KEY=&lt;API_KEY&gt;</code></pre>
            <p>We also provide a <a href="https://github.com/lablabs/cloudflare-exporter/blob/master/examples/helmfile/cloudflare-exporter.yaml">Helmfile</a> in our repo to make deployments easier, you just need to add your credentials to make it work.</p>
    <div>
      <h3>Visualizing the data</h3>
      <a href="#visualizing-the-data">
        
      </a>
    </div>
    <p>I’ve already explained how the exporter works and how you can get it running. As I mentioned before, we use Grafana to visualize our metrics from Prometheus. We’ve created a <a href="https://grafana.com/grafana/dashboards/13133">dashboard</a> that takes the data from Prometheus and puts it into use.</p><p>The dashboard is divided into several rows, which group individual panels for easier navigation. It allows you to target individual zones for metrics visualization.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zGurfsIFldJAl7JWn1Ugm/f7e1173d0d3ccadb91de95d49902b6fc/image2-5.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Wui3JqzgAU1GAU5J7XXj3/b5fa5aab63702997b97d7be3cc5ed7f0/image3-6.png" />
            
            </figure><p>To make things even more beneficial for the operations team, you can use the gathered metrics to create alerts. These can be created either in Grafana directly or using Prometheus alert rules.</p><p>Furthermore, if you integrate <a href="https://github.com/thanos-io/thanos">Thanos</a> or <a href="https://grafana.com/oss/cortex/">Cortex</a> into your monitoring setup, you can store these metrics indefinitely.</p>
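            <p>As an example of the kind of alert you could build on top of the exporter’s metrics - note that the metric names below are purely illustrative, so check the exporter’s /metrics endpoint for the exact names and labels it exposes - you might page when a zone starts serving a high share of 5xx responses:</p>
            <pre><code>groups:
- name: cloudflare-exporter
  rules:
  - alert: CloudflareZoneHighErrorRate
    # Metric names are illustrative; adjust them to what your exporter version exposes.
    expr: |
      sum by (zone) (rate(cloudflare_zone_requests_status{status=~"5.."}[5m]))
        /
      sum by (zone) (rate(cloudflare_zone_requests_total[5m])) &gt; 0.05
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Zone {{ $labels.zone }} is serving more than 5% HTTP 5xx responses"</code></pre>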
    <div>
      <h3>Future work</h3>
      <a href="#future-work">
        
      </a>
    </div>
    <p>We’d like to integrate even more analytics data into our exporters, eventually reaching every metric that Cloudflare’s GraphQL can provide. We plan on creating new metrics for firewall analytics, DoS analytics, and Network analytics soon.</p><p>Feel free to create a GitHub issue if you have any questions, problems, or suggestions. Any pull request is greatly appreciated.</p>
    <div>
      <h3>About us</h3>
      <a href="#about-us">
        
      </a>
    </div>
    <p><a href="https://lablabs.io/">Labyrinth Labs</a> helps companies build, run, deploy and scale software and infrastructure by embracing the right technologies and principles.</p> ]]></content:encoded>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Customers]]></category>
            <category><![CDATA[Prometheus]]></category>
            <category><![CDATA[Grafana]]></category>
            <category><![CDATA[Monitoring]]></category>
            <guid isPermaLink="false">4IQoisV5GHmCHWVLWy6LNK</guid>
            <dc:creator>Martin Hauskrecht</dc:creator>
        </item>
    </channel>
</rss>