
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 22:23:55 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Evaluating image segmentation models for background removal for Images]]></title>
            <link>https://blog.cloudflare.com/background-removal/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ An inside look at how the Images team compared dichotomous image segmentation models to identify and isolate subjects in an image from the background. ]]></description>
            <content:encoded><![CDATA[ <p>Last week, we wrote about <a href="https://blog.cloudflare.com/ai-face-cropping-for-images/"><u>face cropping for Images</u></a>, which runs an open-source face detection model in <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to automatically crop images of people at scale.</p><p>It wasn’t too long ago when deploying AI workloads was prohibitively complex. Real-time inference previously required specialized (and costly) hardware, and we didn’t always have standard abstractions for deployment. We also didn’t always have Workers AI to enable developers — including ourselves — to ship AI features without this additional overhead.</p><p>And whether you’re skeptical or celebratory of AI, you’ve likely seen its explosive progression. New benchmark-breaking computational models are released every week. We now expect a fairly high degree of accuracy — the more important differentiators are how well a model fits within a product’s infrastructure and what developers do with its predictions.</p><p>This week, we’re introducing <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#segment">background removal for Images</a>. This feature runs a dichotomous image segmentation model on Workers AI to isolate subjects in an image from their backgrounds. We took a controlled, deliberate approach to testing models for efficiency and accuracy.</p><p>Here’s how we evaluated various image segmentation models to develop background removal.</p>
    <div>
      <h2>A primer on image segmentation</h2>
      <a href="#a-primer-on-image-segmentation">
        
      </a>
    </div>
    <p>In computer vision, image segmentation is the process of splitting an image into meaningful parts.</p><p>Segmentation models produce a mask that assigns each pixel to a specific category. This differs from detection models, which don’t classify every pixel but instead mark regions of interest. A face detection model, such as the one that informs <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#gravity"><u>face cropping</u></a>, draws bounding boxes based on where it thinks there are faces. (If you’re curious, <a href="https://blog.cloudflare.com/ai-face-cropping-for-images/#from-pixels-to-people"><u>our post on face cropping</u></a> discusses how we use these bounding boxes to perform crop and zoom operations.)</p><p>Salient object detection is a type of segmentation that highlights the parts of an image that most stand out. Most salient detection models create a binary mask that categorizes the most prominent (or salient) pixels as the “foreground” and all other pixels as the “background”. In contrast, a multi-class mask considers the broader context and labels each pixel as one of several possible classes, like “dog” or “chair”. These multi-class masks are the basis of content analysis models, which distinguish which pixels belong to specific objects or types of objects.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qV2QVZYEdqdigCTuqBuHu/cf4873dddf3b30503aac6643ded1a5ab/image3.png" />
</figure><p><sub>In this photograph of my dog, a detection model predicts that a bounding box contains a dog; a segmentation model predicts that some pixels belong to a dog, while all other pixels don’t.</sub></p><p>For our use case, we needed a model that could produce a soft saliency mask, which predicts how strongly each pixel belongs to either the foreground (objects of interest) or the background. That is, each pixel is assigned a value on a scale of 0–255, where 0 is completely transparent and 255 is fully opaque. Most background pixels are labeled at (or near) 0; foreground pixels may vary in opacity, depending on their degree of saliency.</p><p>In principle, a background removal feature must be able to accurately predict saliency across a broad range of contexts. For example, e-commerce and retail vendors want to display all products on a uniform, white background; in creative and image editing applications, developers want to enable users to create stickers and cutouts from uploaded content, including images of people or avatars.</p><p>In our research, we focused primarily on the following four image segmentation models:</p><ul><li><p><a href="https://arxiv.org/abs/2005.09007"><b><u>U</u></b><b><u><sup>2</sup></u></b><b><u>-Net (U Square Net)</u></b></a>: Trained on the largest saliency dataset (<a href="https://saliencydetection.net/duts/"><u>DUTS-TR</u></a>) of 10,553 images, which were then horizontally flipped to reach a total of 21,106 training images.</p></li><li><p><a href="https://arxiv.org/abs/2203.03041"><b><u>IS-Net (Intermediate Supervision Network)</u></b></a>: A novel, two-step approach from the same authors as U<sup>2</sup>-Net; this model produces cleaner boundaries for images with noisy, cluttered backgrounds.</p></li><li><p><a href="https://arxiv.org/abs/2401.03407"><b><u>BiRefNet (Bilateral Reference Network)</u></b></a>: Specifically designed to segment complex and high-resolution images with accuracy by checking that the small details 
match the big picture.</p></li><li><p><a href="https://arxiv.org/abs/2304.02643"><b><u>SAM (Segment Anything Model)</u></b></a>: Developed by Meta to allow segmentation by providing prompts and input points.</p></li></ul><p>Different scales of information allow computational models to build a holistic view of an image. Global context considers the overall shape of objects and how areas of pixels relate to the entire image, while local context traces fine details like edges, corners, and textures. If local context focuses on the trees and their leaves, then global context represents the entire forest.</p><p><a href="https://github.com/xuebinqin/U-2-Net"><u>U</u><u><sup>2</sup></u><u>-Net</u></a> extracts information using a multi-scale approach, where it analyzes an image at different zoom levels, then combines its predictions in a single step. The model analyzes global and local context at the same time, so it works well on images with multiple objects of varying sizes.</p><p><a href="https://github.com/xuebinqin/DIS"><u>IS-Net</u></a> introduces a new, two-step strategy called intermediate supervision. First, the model separates the foreground from the background, identifying potential areas that likely belong to objects of interest — all other pixels are labeled as the background. Second, it refines the boundaries of the highlighted objects to produce a final pixel-level mask.</p><p>The initial suppression of the background results in cleaner, more precise edges, as the segmentation focuses only on the highlighted objects of interest and is less likely to mistakenly include background pixels in the final mask. This model especially excels when dealing with complex images with cluttered backgrounds.</p><p>Both models pass scale information in a single direction. 
U<sup>2</sup>-Net interprets the global and local context in one pass, while IS-Net begins with the global context, then focuses on the local context.</p><p>In contrast, <a href="https://github.com/ZhengPeng7/BiRefNet"><u>BiRefNet</u></a> refines its predictions over multiple passes, moving in both contextual directions. Like IS-Net, it initially creates a map that roughly highlights the salient object, then traces the finer details. However, BiRefNet moves from global to local context, then from local context back to global. In other words, after refining the edges of the object, it feeds the output back to the large-scale view. This way, the model can check that the small-scale details align with the broader image structure, providing higher accuracy on high-resolution images.</p><p>U<sup>2</sup>-Net, IS-Net, and BiRefNet are exclusively saliency detection models, producing masks that distinguish foreground pixels from background pixels. However, <a href="https://github.com/facebookresearch/segment-anything"><u>SAM</u></a> was designed to be more extensible and general; its primary goal is to segment any object based on specified inputs, not only salient objects. This means that the model can also be used to create multi-class masks that label various objects within an image, even if they aren’t the primary focus of an image.</p>
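<p>As a rough sketch of how a soft saliency mask turns into background removal (an illustration of ours, not the actual Images implementation): each mask value from 0–255 is written into the alpha channel of the corresponding RGBA pixel, so background pixels become transparent while salient pixels stay opaque.</p>

```javascript
// Apply a soft saliency mask (one byte per pixel, 0-255) to RGBA pixel data.
// A mask value of 0 makes the pixel fully transparent; 255 leaves it opaque.
function applyMask(rgba, mask) {
  const out = Uint8ClampedArray.from(rgba);
  for (let i = 0; i < mask.length; i++) {
    out[i * 4 + 3] = mask[i]; // alpha channel of pixel i
  }
  return out;
}
```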
    <div>
      <h2>How we measure segmentation accuracy</h2>
      <a href="#how-we-measure-segmentation-accuracy">
        
      </a>
    </div>
    <p>In most saliency datasets, the actual location of the object is known as the ground-truth area. These regions are typically defined by human annotators, who manually trace objects of interest in each image. This provides a reliable reference to evaluate model predictions.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wAV8lQcsZHosKoFyEIce1/495b3d70960027b795ec1a62f2d46a59/BLOG-2928_3.png" />
          </figure><p><sub>Photograph by </sub><a href="https://www.linkedin.com/in/fang-allen"><sub><u>Allen Fang</u></sub></a></p><p>Each model outputs a predicted area (where it thinks the foreground pixels are), which can be compared against the ground-truth area (where the foreground pixels actually are).</p><p>Models are evaluated for segmentation accuracy based on common metrics like Intersection over Union, Dice coefficient, and pixel accuracy. Each score takes a slightly different approach to quantify the alignment between the predicted and ground-truth areas (“P” and “G”, respectively, in the formulas below).</p>
    <div>
      <h3>Intersection over Union</h3>
      <a href="#intersection-over-union">
        
      </a>
    </div>
    <p>Intersection over Union (IoU), also called the Jaccard index, measures how well the predicted area matches the true object. That is, it compares the number of foreground pixels shared by the predicted and ground-truth masks against the total area covered by either mask. Mathematically, IoU is written as:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zVQSLlKaFuVUQrDcAlf0Y/4254010745caf0d207d8f8e8181f4c9c/BLOG-2928_4.png" />
          </figure><p><sub>Jaccard formula</sub></p><p>The formula divides the intersection (P∩G), or the pixels where the predicted and ground-truth areas overlap, by the union (P∪G), or the total area of pixels that belong to either area, counting the overlapping pixels only once.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KFLB15btpCQuKuTqakBjp/91e78ec6d565e3723c5d76b3a65a441d/unnamed__23_.png" />
          </figure><p>IoU produces a score between 0 and 1. A higher value indicates a closer overlap between the predicted and ground-truth areas. A perfect match, although rare, would score 1, while a smaller overlapping area brings the score closer to 0.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oe82x3rPo8XoNnwG3KBRy/22f591adb6ab27b3ad05f91b13eddff7/BLOG-2928_6.png" />
          </figure>
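<p>In code, IoU reduces to a few lines. The sketch below (ours, not part of the Images pipeline) operates on flattened binary masks, where 1 marks a foreground pixel:</p>

```javascript
// Intersection over Union (Jaccard index) for two flat binary masks.
function iou(predicted, groundTruth) {
  let intersection = 0; // pixels marked foreground in both masks
  let union = 0;        // pixels marked foreground in either mask
  for (let i = 0; i < predicted.length; i++) {
    if (predicted[i] === 1 && groundTruth[i] === 1) intersection++;
    if (predicted[i] === 1 || groundTruth[i] === 1) union++;
  }
  return union === 0 ? 1 : intersection / union;
}
```

<p>For instance, a prediction that captures one of two true foreground pixels while adding one false positive overlaps on 1 pixel out of a union of 3, scoring 1/3.</p>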
    <div>
      <h3>Dice coefficient</h3>
      <a href="#dice-coefficient">
        
      </a>
    </div>
    <p>The Dice coefficient, also called the Sørensen–Dice index, similarly compares how well the model’s prediction matches reality, but is much more forgiving than the IoU score. It gives more weight to the shared pixels between the predicted and actual foreground, even if the areas differ in size. Mathematically, the Dice coefficient is written as:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UiJUJrjagwkmQNvdkiPC3/e17eaa8f22f57114a91f1e58fc3a76fb/BLOG-2928_7.png" />
          </figure><p><sub>Sørensen–Dice formula</sub></p><p>The formula divides twice the intersection (P∩G) by the sum of pixels in both predicted and ground-truth areas (P+G), counting any overlapping pixels twice.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vcFBAoRJ9wpyAt8m4Sn7x/8b1962de717701ff348e90ec8b86286e/BLOG-2928_8.png" />
          </figure><p>Like IoU, the Dice coefficient also produces a value between 0 and 1, indicating a more accurate match as it approaches 1.</p>
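<p>The Dice coefficient takes the same shape in code (again an illustrative sketch of ours, not part of the pipeline). Because Dice equals 2·IoU / (1 + IoU) for any pair of masks, it never falls below the IoU score for the same prediction:</p>

```javascript
// Sørensen–Dice coefficient for two flat binary masks.
function dice(predicted, groundTruth) {
  let intersection = 0; // pixels marked foreground in both masks
  let total = 0;        // foreground pixels counted once per mask
  for (let i = 0; i < predicted.length; i++) {
    if (predicted[i] === 1 && groundTruth[i] === 1) intersection++;
    total += predicted[i] + groundTruth[i];
  }
  return total === 0 ? 1 : (2 * intersection) / total;
}
```

<p>For instance, two masks that overlap on 1 pixel and contain 2 foreground pixels each score 2·1 / (2 + 2) = 0.5, while the same pair scores 1/3 under IoU.</p>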
    <div>
      <h3>Pixel accuracy</h3>
      <a href="#pixel-accuracy">
        
      </a>
    </div>
    <p>Pixel accuracy measures the percentage of pixels that were correctly labeled as either the foreground or the background. Mathematically, pixel accuracy is written as:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40HkiVe1a2i1dSguDk1TxO/990e49cd4d40a4eaa29078948bc9d7e8/unnamed__24_.png" />
          </figure><p><sub>Pixel accuracy formula</sub></p><p>The formula divides the number of correctly predicted pixels by the total number of pixels in the image.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GX83EmXBSLhGlHvGFLqnn/f65fbd110f4b1d201f7585723ced0f34/image10.png" />
          </figure><p>The total area of correctly predicted pixels is the sum of foreground and background pixels that accurately match the ground-truth areas.</p><p>The correctly predicted foreground is the intersection of the predicted and ground-truth areas (P∩G). The inverse of the predicted area (P’, or 1–P) represents the pixels that the model identifies as the background; the inverse of the ground-truth area (G’, or 1–G) represents the actual boundaries of the background. When these two inverted areas overlap (P’∩G’, or (1–P)∩(1–G)), this intersection is the correctly predicted background.</p>
    <div>
      <h2>Interpreting the metrics</h2>
      <a href="#interpreting-the-metrics">
        
      </a>
    </div>
    <p>Of the three metrics, IoU is the most conservative measure of segmentation accuracy. Small mistakes, such as including extra background pixels in the predicted foreground, reduce the score noticeably. This metric is most valuable for applications that require precise boundaries, such as autonomous driving systems.</p><p>Meanwhile, the Dice coefficient rewards the overlapping pixels more heavily, and subsequently tends to be higher than the IoU score for the same prediction. In model evaluations, this metric is favored over IoU when it’s more important to capture the object than to penalize mistakes. For example, in medical imaging, the risk of missing a true positive substantially outweighs the inconvenience of flagging a false positive.</p><p>In the context of background removal, we biased toward the IoU score and Dice coefficient over pixel accuracy. Pixel accuracy can be misleading, especially when processing an image where background pixels comprise the majority of pixels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7K8TWmRLdJNIza43UoXhD8/c9a42ed7074ce975afd8f7e783db5849/BLOG-2928_11.png" />
          </figure><p>For example, consider an image with 900 background pixels and 100 foreground pixels. A model that correctly predicts only 5 foreground pixels — 5% of all foreground pixels — will score deceptively high in pixel accuracy. Intuitively, we’d likely say that this model performed poorly. However, assuming all 900 background pixels were correctly predicted, the model maintains 90.5% pixel accuracy, despite missing the subject almost entirely.</p>
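<p>The arithmetic above is easy to verify directly (a sketch of ours):</p>

```javascript
// Fraction of pixels labeled correctly as foreground (1) or background (0).
function pixelAccuracy(predicted, groundTruth) {
  let correct = 0;
  for (let i = 0; i < predicted.length; i++) {
    if (predicted[i] === groundTruth[i]) correct++;
  }
  return correct / predicted.length;
}

// A 1,000-pixel image where the first 100 pixels are the true foreground.
const groundTruth = Array.from({ length: 1000 }, (_, i) => (i < 100 ? 1 : 0));
// The model finds only 5 foreground pixels but labels all 900 background pixels correctly.
const prediction = groundTruth.map((_, i) => (i < 5 ? 1 : 0));
// pixelAccuracy(prediction, groundTruth) is (5 + 900) / 1000 = 0.905,
// despite the prediction missing 95% of the subject.
```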
    <div>
      <h2>Pixels, predictions, and patterns</h2>
      <a href="#pixels-predictions-and-patterns">
        
      </a>
    </div>
    <p>To determine the most suitable model for the Images API, we performed a series of tests using the open-source <a href="https://github.com/danielgatis/rembg"><u>rembg</u></a> library, which combines all relevant models in a single interface.</p><p>Each model was tasked with outputting a prediction mask to label foreground versus background pixels. We pulled images from two saliency datasets: <a href="https://huggingface.co/datasets/schirrmacher/humans"><b><u>Humans</u></b></a> contains over 7,000 images of people with varying skin tones, clothing, and hairstyles, while <a href="https://xuebinqin.github.io/dis/index.html#overview"><b><u>DIS5K</u></b></a> (version 1.5) spans a vast range of objects and scenes. If a model contained variants that were pre-trained on specific types of segmentation (e.g. clothes, humans), then we repeated the tests for the generalized model and each variant.</p><p>Our experiments were executed on a GPU with 23 GB VRAM to mirror realistic hardware constraints, similar to the environment where we already run a face detection model. We also replicated the same tests on a larger GPU instance with 94 GB VRAM; this served as an upper-bound reference point to benchmark potential speed gains if additional compute were available. Cloudflare typically reserves larger GPUs for more compute-intensive <a href="https://developers.cloudflare.com/workers-ai/models/"><u>AI workloads</u></a> — we viewed these tests more as an exploration for comparison than as a production scenario.</p><p>During our analysis, we started to see key trends emerge:</p><p>On the smaller GPU, inference times were generally faster for lightweight models like U<sup>2</sup>-Net (176 MB) and IS-Net (179 MB). Average inference times across both datasets were 307 milliseconds for U<sup>2</sup>-Net and 351 milliseconds for IS-Net. 
On the opposite end, BiRefNet (973 MB) had noticeably slower output times, averaging 821 milliseconds across its two generalized variants.</p><p>BiRefNet ran 2.4 times faster on the larger GPU, reducing its average inference time to 351 milliseconds — comparable to the other models, despite its larger size. In contrast, the lighter models did not show any notable speed gain with additional compute, suggesting that scaling hardware configurations primarily benefits heavier models. In <a href="https://blog.cloudflare.com/background-removal/#appendix-1-inference-time-in-milliseconds">Appendix 1</a> (“Inference Time in Milliseconds”), we compare speed across models and GPU instances.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55Tk0RjbvoffPVQT85UJQe/ca1f2280768495f3be52425e642fdd25/BLOG-2928_12.png" />
</figure><p>We also observed distinct patterns when comparing model performance across the two saliency datasets. Most notably, all models ran faster on the Humans dataset, where images of people tend to be single-subject and relatively uniform. The DIS5K dataset, in contrast, includes images with higher complexity — that is, images with more objects, cluttered backgrounds, or multiple objects of varying scales.</p><p>Slower predictions suggest a relationship between visual complexity and the computation needed to identify the important parts of an image. In other words, datasets with simpler, well-separated objects can be analyzed more quickly, while complex scenes require more computation to generate accurate masks.</p><p>Similarly, complexity challenges accuracy as much as it does efficiency. In our tests, all models demonstrated higher segmentation accuracy with the Humans dataset. In <a href="https://blog.cloudflare.com/background-removal/#appendix-2-measures-of-model-accuracy">Appendix 2</a> (“Measures of Model Accuracy”), we present our results for segmentation accuracy across both datasets.</p><p>Specialized variants scored slightly higher in accuracy compared to their generalized counterparts. But in broad, practical applications, selecting a specialized model for every input isn’t realistic, at least for our initial beta version. We favored general-purpose models that can produce accurate predictions without prior classification. For this reason, we excluded SAM — while powerful in its intended use cases, SAM is designed to work with additional inputs. On unprompted segmentation tasks, it produced lower accuracy scores (and much higher inference times) than the other models we tested.</p><p>All BiRefNet variants showed greater accuracy compared to other models. The generalized variants (<code>-general</code> and <code>-dis</code>) were just as accurate as the more specialized variants like <code>-portrait</code>. 
The <code>birefnet-general</code> variant, in particular, achieved a high IoU score of 0.87 and Dice coefficient of 0.92, averaged across both datasets.</p><p>In contrast, the generalized U<sup>2</sup>-Net model showed high accuracy on the Humans dataset, reaching an IoU score of 0.89 and a Dice coefficient of 0.94, but received a low IoU score of 0.39 and Dice coefficient of 0.52 on the DIS5K dataset. The <code>isnet-general-use</code> model performed substantially better, obtaining an average IoU score of 0.82 and Dice coefficient of 0.89 across both datasets.</p><p>We also examined whether models could interpret both the global and local context of an image. In some scenarios, the U<sup>2</sup>-Net and IS-Net models captured the overall gist of an image, but couldn’t accurately trace fine edges. We designed one test around measuring how well each model could isolate bicycle wheels; for variety, we included images across both interior and exterior backgrounds. Lower scoring models, while correctly labeling the area surrounding the wheel, struggled with the pixels between the thin spokes and produced prediction masks that included these background pixels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mzRTqXhZRk0GuzwuIRu4p/b251aa4f3dbeecc11dbba931623607e5/BLOG-2928_13.png" />
</figure><p><sub>Photograph by </sub><a href="https://unsplash.com/photos/person-near-bike-p6OU_gENRL0"><sub><u>Yomex Owo on Unsplash</u></sub></a></p><p>In other scenarios, the models showed the opposite limitation: they produced masks with clean edges, but failed to identify the focus of the image. We ran another test using a photograph of a gray T-shirt against black gym flooring. Both generalized U<sup>2</sup>-Net and IS-Net models labeled only the logo as the salient object, creating a mask that omitted the rest of the shirt entirely.</p><p>Meanwhile, the BiRefNet model achieved high accuracy across both types of tests. Its architecture passes information bidirectionally, allowing details at the pixel level to be informed by the larger scene (and vice versa). In practice, this means that BiRefNet interprets how fine-grained edges fit into the broader object. For our beta version, we opted to use the BiRefNet model to drive decisions for background removal.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/741GSfhMn8MPykb6NkWUJV/1ef5006aea8f67a4faeec73862d97ced/BLOG-2928_14.png" />
          </figure><p><sub>Unlike lower scoring models, the BiRefNet model understood that the entire shirt is the true subject of the image.</sub></p>
    <div>
      <h2>Applying background removal with the Images API</h2>
      <a href="#applying-background-removal-with-the-images-api">
        
      </a>
    </div>
    <p>The Images API now supports <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#segment">automatic background removal</a> for <a href="https://developers.cloudflare.com/images/upload-images/"><u>hosted</u></a> and <a href="https://developers.cloudflare.com/images/transform-images/"><u>remote</u></a> images. This feature is available in open beta to all Cloudflare users on <a href="https://developers.cloudflare.com/images/pricing/"><u>Free and Paid plans</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iglNDllwEMvg6ygDvTRNc/a354422efd166cb3b48ee10995e78aa4/unnamed__25_.png" />
          </figure><p>Use the <code>segment</code> parameter when optimizing an image through a <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>specially-formatted Images URL</u></a> or a <a href="https://developers.cloudflare.com/images/transform-images/transform-via-workers/"><u>worker</u></a>, and Cloudflare will isolate the subject of your image and convert the background into transparent pixels. This can be combined with <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>other optimization operations</u></a>, as shown in the transformation URL below: </p>
            <pre><code>example.com/cdn-cgi/image/gravity=face,zoom=0.5,segment=foreground,background=white/image.png</code></pre>
            <p>This request will:</p><ul><li><p>Crop the image toward the <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#gravity"><u>detected face</u></a>.</p></li><li><p>Isolate the subject in the image, replacing the background with transparent pixels.</p></li><li><p><a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#background"><u>Fill the transparent pixels</u></a> with a solid white color (<code>#FFFFFF</code>).</p></li></ul><p>You can also <a href="https://developers.cloudflare.com/images/transform-images/bindings/"><u>bind the Images API</u></a> to your worker to build programmatic workflows that give more fine-grained control over how images will be optimized. To demonstrate how this works, I made a <a href="https://studio.yaydeanna.workers.dev/"><u>simple image editing app</u></a> for creating cutouts and overlays, built entirely on Images and <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>. This can be used to create images <a href="https://studio.yaydeanna.workers.dev/?order=0%2C1%2C2&amp;i0=icecream&amp;vertEdge0=bottom&amp;vertVal0=0&amp;horEdge0=left&amp;h0=400&amp;bg0=1&amp;i1=pete&amp;vertEdge1=top&amp;horEdge1=left&amp;h1=700&amp;bg1=1&amp;i2=iceland&amp;vertEdge2=top&amp;horEdge2=left"><u>like the one below</u></a>. Here, we apply background removal to isolate the dog and ice cream cone, then overlay them on a landscape image.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Z6t9ov1t3fbbQojbYbGDh/961cef0f06780bfd8c088772a7add796/image11.png" />
          </figure><p><sub>Photographs by </sub><a href="https://www.pexels.com/@guyjoben/"><sub><u>Guy Hurst</u></sub></a><sub> (landscape), </sub><a href="https://www.pexels.com/@oskar-gackowski-2150870625/"><sub><u>Oskar Gackowski</u></sub></a><sub> (ice cream), and me (dog)</sub></p><p>Here is a snippet that you can use to overlay images in a worker:</p>
            <pre><code>export default {
  async fetch(request, env) {
    const baseURL = "{image-url}";
    const overlayURL = "{image-url}";

    // Fetch responses from image URLs
    const [base, overlay] = await Promise.all([fetch(baseURL), fetch(overlayURL)]);

    return (
      await env.IMAGES
        .input(base.body)
        .draw(
          env.IMAGES.input(overlay.body)
            .transform({ segment: "foreground" }), // Optimize the overlay image
          { top: 0 } // Position the overlay
        )
        .output({ format: "image/webp" })
    ).response();
  }
};</code></pre>
            <p>Background removal is another step in our ongoing effort to enable developers to build interactive and imaginative products. These features are an iterative process, and we’ll continue to refine our approach even further. We’re looking forward to sharing our progress with you.</p><p>Read more about applying background removal in our <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#segment"><u>documentation</u></a>.</p>
    <div>
      <h3>Appendix 1: Inference Time in Milliseconds</h3>
      <a href="#appendix-1-inference-time-in-milliseconds">
        
      </a>
    </div>
    
    <div>
      <h4>23 GB VRAM GPU</h4>
      <a href="#23-gb-vram-gpu">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2e97UAIgglJ3kP3ozm8lZT/6a44de14aa5179071eb7bbb3c8f31feb/BLOG-2928_17.png" />
          </figure>
    <div>
      <h4>94 GB VRAM GPU</h4>
      <a href="#94-gb-vram-gpu">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2viOyCtbzsloUAvY8kXPJV/378feb50a1dd822d7c848133fbac6a3f/BLOG-2928_18.png" />
          </figure>
    <div>
      <h3>Appendix 2: Measures of Model Accuracy</h3>
      <a href="#appendix-2-measures-of-model-accuracy">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2G9hwnFrlT4eF2isWyaEjk/d3418df56dff686c27f46d96fc86c37f/BLOG-2928_19.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">q17H7D8gSkyNAPELuTHl9</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Diretnan Domnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built AI face cropping for Images]]></title>
            <link>https://blog.cloudflare.com/ai-face-cropping-for-images/</link>
            <pubDate>Wed, 20 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ AI face cropping for Images automatically crops around faces in an image. Here’s how we built this feature on Workers AI to scale for general availability. ]]></description>
            <content:encoded><![CDATA[ <p>During Developer Week 2024, we introduced <a href="https://blog.cloudflare.com/whats-next-for-cloudflare-media/"><u>AI face cropping in private beta</u></a>. This feature automatically crops images around detected faces, and marks the first release in our upcoming suite of AI image manipulation capabilities.</p><p><a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/#gravity"><u>AI face cropping</u></a> is now available in <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> for everyone. To bring this feature to general availability, we moved our CPU-based prototype to a GPU-based implementation in Workers AI, enabling us to address a number of technical challenges, including memory leaks that could hamper large-scale use.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1uwRmMEA9LSDoeZgMbYjcM/71b941d57605b0a5286f6f0ccc7dd5e9/1.png" />
          </figure><p><sup><i>Photograph by </i></sup><a href="https://unsplash.com/photos/woman-in-black-cardigan-standing-beside-pink-flowers-UO-82DJ3rcc"><sup><i><u>Suad Kamardeen (@suadkamardeen) on Unsplash</u></i></sup></a></p>
    <div>
      <h2>Turning raw images into production-ready assets</h2>
      <a href="#turning-raw-images-into-production-ready-assets">
        
      </a>
    </div>
    <p>We developed face cropping with two particular use cases in mind:</p><p><b>Social media platforms and AI chatbots.</b> We observed a lot of traffic from customers who use Images to turn unedited images of people into smaller profile pictures in neat, fixed shapes.</p><p><b>E-commerce platforms.</b> The same product photo might appear in a grid of thumbnails on a gallery page, then again on an individual product page with a larger view. The following example illustrates how cropping can change the emphasis from the model’s shirt to their sunglasses.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zj35mxsUccGShpq5YGHAD/7ffb7b2f8c517be06e2bab6f42aa9a06/2.png" />
          </figure><p><sup><i>Photograph by </i></sup><a href="https://unsplash.com/photos/a-man-wearing-sunglasses-IJozQuMbo3M"><sup><i><u>Media Modifier (@mediamodifier) on Unsplash</u></i></sup></a></p><p>When handling high volumes of media content, preparing images for production can be tedious. With Images, you don’t need to manually generate and store multiple versions of the same image. Instead, we serve copies of each image, each optimized to your specifications, while you continue to <a href="https://developers.cloudflare.com/images/upload-images/"><u>store only the original image</u></a>.</p>
    <div>
      <h2>Crop everything, everywhere, all at once</h2>
      <a href="#crop-everything-everywhere-all-at-once">
        
      </a>
    </div>
    <p>Cloudflare provides a <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>library of parameters</u></a> to manipulate how an image is served to the end user. For example, you can crop an image to a square by setting its <code>width</code> and <code>height</code> dimensions to 100x100.</p><p>By default, images are cropped toward the center coordinates of the original image. The <code>gravity</code> parameter can affect how an image gets cropped by changing its focal point. You can specify coordinates to use as the focal point of an image or allow Cloudflare to automatically determine a new focal point.</p>
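    <p>For illustration, these options are supplied directly in the delivery URL. A hypothetical example (the hostname and source path below are placeholders, not real assets):</p>
             <pre><code>https://example.com/cdn-cgi/image/width=100,height=100,fit=crop,gravity=auto/uploads/photo.jpg</code></pre>
             <p>This serves a 100x100 crop of the source image, with <code>gravity=auto</code> asking Cloudflare to determine the focal point automatically.</p>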
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78jngtcSCwB80JcZgDsW4d/44bcaf9aa61c6c5c66eb281fba91a472/3.png" />
          </figure><p><sup><i>The gravity parameter is useful when cropping images with off-centered subjects. Photograph by </i></sup><a href="https://unsplash.com/photos/selective-focus-photography-of-pink-petaled-flower-EfhCUc_fjrU"><sup><i><u>Andrew Small (@andsmall) on Unsplash</u></i></sup></a></p><p>The <code>gravity=auto</code> option uses a saliency algorithm to pick the optimal focal point of an image. Saliency detection identifies the parts of an image that are most visually important; the cropping operation is then applied toward this region of interest. Our algorithm analyzes images using visual cues such as color, luminance, and texture, but doesn’t consider context within an image. While this setting works well on images with inanimate objects like plants and skyscrapers, it doesn’t reliably account for subjects as contextually meaningful as people’s faces.</p><p>And yet, images of people comprise the majority of bandwidth usage for many applications, such as an AI chatbot platform that uses Images to serve over 45 million unique transformations each month. This presented an opportunity for us to improve how developers can optimize images of people.</p><p>AI face cropping can be performed by using the <code>gravity=face</code> option, which automatically detects which pixels represent the face (or faces) and uses this information to crop the image. You can also affect how closely the image is cropped toward the face; the <code>zoom</code> parameter controls how much of the area around the face will be included in the image.</p><p>We carefully designed our model pipeline with privacy and confidentiality top of mind. This feature doesn’t support facial identification or recognition. In other words, when you optimize with Cloudflare, we’ll never know that two different images depict the same person, or identify the specific people in a given image. 
Instead, AI face cropping with Images is intentionally limited to face detection, or identifying the pixels that represent a human face.</p>
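    <p>Combining the options from this section, a hypothetical delivery URL (again with a placeholder hostname and path) might look like:</p>
             <pre><code>https://example.com/cdn-cgi/image/width=200,height=200,fit=crop,gravity=face,zoom=0.5/uploads/team-photo.jpg</code></pre>
             <p>Here, <code>gravity=face</code> centers the crop on the detected faces, and <code>zoom=0.5</code> includes an area twice the size of the outer bounding box.</p>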
    <div>
      <h2>From pixels to people</h2>
      <a href="#from-pixels-to-people">
        
      </a>
    </div>
    <p>Our first step was to select an open-source model that met our requirements. Behind the scenes, our AI face cropping uses <a href="https://github.com/serengil/retinaface"><u>RetinaFace</u></a>, a convolutional neural network model that detects human faces in images.</p><p>A <a href="https://www.cloudflare.com/learning/ai/what-is-neural-network/"><u>neural network</u></a> is a type of machine learning process that loosely resembles how the human brain works. A basic neural network has three parts: an input layer, one or more hidden layers, and an output layer. Nodes in each layer form an interconnected network to transmit and process data, where each input node is connected to nodes in the next layer.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6bm2f6z6XoV7KncKSXTTmG/01f9fa9da23a3fb792883d90f180780c/4.png" />
          </figure><p><sup><i>A fully connected layer passes data from one layer to the next.</i></sup></p><p>Data enters through the input layer, where it is analyzed before being passed to the first hidden layer. All of the computation is done in the hidden layers, where a result is eventually delivered through the output layer.</p><p>A convolutional neural network (CNN) mirrors how humans look at things. When we look at other people, we start with abstract features, like the outline of their body, before we process specific features, like the color of their eyes or the shape of their lips.</p><p>Similarly, a CNN processes an image piece-by-piece before delivering the final result. Earlier layers look for abstract features like edges and colors and lines; subsequent layers become more complex and are each responsible for identifying the various features that comprise a human face. The last fully connected layer combines all categorized features to produce one final classification of the entire image. In other words, if an image contains all of the individual features that define a human face (e.g. eyes, nose), then the CNN concludes that the image contains a human face.</p><p>We needed a model that could determine whether an image depicts a person (image classification), as well as exactly where they are in the image (object detection). 
When selecting a model, some factors we considered were:</p><ul><li><p><b>Performance on the </b><a href="http://shuoyang1213.me/WIDERFACE/index.html"><b><u>WIDERFACE</u></b></a><b> dataset.</b> This is the state-of-the-art face detection benchmark dataset, which contains 32,203 images of 393,703 labeled faces with a high degree of variability in scale, pose, and occlusion.</p></li><li><p><b>Speed (in frames per second).</b> Most of our image optimization requests occur on delivery (rather than before an image gets uploaded to storage), so we prioritized performance for end-user delivery.</p></li><li><p><b>Model size.</b> Smaller model sizes run more efficiently.</p></li><li><p><b>Quality.</b> The performance boost from smaller models is often traded for quality; the key is balancing speed with results.</p></li></ul><p>Our initial test sample contained 500 images with varying factors like the number of faces in the image, face size, lighting, sharpness, and angle. We tested various models, including <a href="https://github.com/hollance/BlazeFace-PyTorch"><u>BlazeFace</u></a>, <a href="https://arxiv.org/abs/1311.2524"><u>R-CNN</u></a> (and its successors <a href="https://arxiv.org/abs/1504.08083"><u>Fast R-CNN</u></a> and <a href="https://arxiv.org/abs/1506.01497"><u>Faster R-CNN</u></a>), <a href="https://github.com/serengil/retinaface"><u>RetinaFace</u></a>, and <a href="https://arxiv.org/abs/1506.02640"><u>YOLO</u></a> (You Only Look Once).</p><p>Two-stage detectors like R-CNN propose potential object locations in an image, then classify objects in those regions of interest. One-stage detectors like BlazeFace, RetinaFace, and YOLO predict object locations and classes in a single pass. In our research, we observed that two-stage detector methods provided higher accuracy, but performed too slowly to be practical for real traffic. 
On the other hand, one-stage detector methods were efficient and performant while still highly accurate.</p><p>Ultimately, we selected RetinaFace, which showed the highest precision of 99.4% and performed faster than other models with comparable values. We found that RetinaFace delivered strong results even with images containing multiple blurry faces:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5cCiUcM7S7f1XRo5e8f1L5/696dde02a2de76e176f49f99fc784e11/5.png" />
          </figure><p><sup><i>Photograph by </i></sup><a href="https://unsplash.com/photos/people-in-green-life-vest-on-water-during-daytime-1Ltm4zrGSVg"><sup><i><u>Anne Nygård (@polarmermaid) on Unsplash</u></i></sup></a></p><p><a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>Inference</u></a>—the process of using trained models to make predictions—can be computationally demanding, especially with very large images. To maintain efficiency, we set a maximum size limit of 1024x1024 pixels when sending images to the model.</p><p>We pass images within these dimensions directly to the model for analysis. But if either width or height dimension exceeds 1024 pixels, then we instead create an inference image to send to the model; this is a smaller copy that retains the same aspect ratio as the original image and does not exceed 1024 pixels in either dimension. For example, a 125x2000 image will be downscaled to 64x1024. Creating this resized, temporary version reduces the amount of data that the model needs to analyze, enabling faster processing.</p><p>The model draws all of the bounding boxes, or the regions within an image that define the detected faces. From there, we construct a new, outer bounding box that encompasses all of the individual boxes, calculating its <code>top-left</code> and <code>bottom-right</code> points based on the boxes that are closest to the top, left, bottom, and right edges of the image.</p><p>The <code>top-left</code> point uses the <code>x</code> coordinate from the left-most box and the <code>y</code> coordinate from the top-most box. Similarly, the <code>bottom-right</code> point uses the <code>x</code> coordinate from the right-most box and the <code>y</code> coordinate from the bottom-most box. These coordinates can be taken from the same bounding boxes; if a single box is closest to both the top and left edges, then we would use its top-left corner as the <code>top-left</code> point of the outer bounding box.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vchhlXYoakCiy7S2MglHb/6e5b3c1a36c5fa20cd45122a0a966777/6.png" />
          </figure><p><sup><i>AI face cropping identifies regions that represent faces, then determines an outer bounding box and focal point based on the top-most, left-most, right-most, and bottom-most bounding boxes.</i></sup></p><p>Once we define the outer bounding box, we use its center coordinates as the focal point when cropping the image. From our experiments, we found that this produced better and more balanced results for images with multiple faces compared to other methods, like establishing the new focal point around the largest detected face.</p><p>The cropped image area is calculated based on the dimensions of the outer bounding box (“d”) and a specified zoom level (“z”) in the formula (1 ÷ z) × d. The <code>zoom</code> parameter accepts floating points between 0 and 1, where we crop the image to the bounding box when <code>zoom=1</code> and include more of the area around the box as <code>zoom</code> trends toward <code>0</code>.</p><p>Consider an original image that is 2048x2048. First, we create an inference image that is 1024x1024 to meet our size limits for face detection. Second, we define the outer bounding box using the model’s predictions—we’ll use 100x500 for this example. At <code>zoom=0.5</code>, our formula generates a crop area that is twice as large as the bounding box, with new width (“w”) and height (“h”) dimensions of 200x1000:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5KHvyiB2EYWJ7ZGLKm5q99/7fd39d99324b0fce9148fa1d861cc7fa/7.png" />
          </figure><p>We also apply a <code>min</code> function that chooses the smaller number between the input dimensions and the calculated dimensions, ensuring that the new width and height never exceed the dimensions of the image itself. In other words, if you try to zoom out too much, then we use the full width or height of the image instead of defining a crop area that will extend beyond the edge of the image. For example, at <code>zoom=0.25</code>, our formula yields an initial crop area of 400x2000. Here, since the calculated height (2000) is larger than the input height (1024), we use the input height to set the crop area to 400x1024.</p><p>Finally, we need to scale the crop area back to the size of the original image. This applies only when a smaller inference image is created.</p><p>We initially downscaled the original 2048x2048 image by a factor of 2 to create the 1024x1024 inference image. This means that we need to multiply the dimensions of the crop area—400x1024 in our latest example—by 2 to produce our final result: a cropped image that is 800x2048.</p>
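    <p>The calculations above (downscaling to an inference image, taking the union of the face boxes, applying the zoom formula, clamping, and scaling back) can be sketched in a few lines of JavaScript. This is a simplified illustration with our own helper names, not Cloudflare's actual implementation:</p>

```javascript
const LIMIT = 1024; // maximum inference-image dimension

// Downscale factor used to create the inference image (1 if none is needed)
function inferenceScale(w, h) {
  return Math.max(1, Math.max(w, h) / LIMIT);
}

// Outer bounding box: the union of all detected face boxes
function outerBox(boxes) {
  return {
    left: Math.min(...boxes.map((b) => b.left)),
    top: Math.min(...boxes.map((b) => b.top)),
    right: Math.max(...boxes.map((b) => b.right)),
    bottom: Math.max(...boxes.map((b) => b.bottom)),
  };
}

// Crop area: (1 / zoom) x box dimensions, clamped to the inference image,
// then scaled back up to the size of the original image
function cropArea(originalW, originalH, boxes, zoom) {
  const scale = inferenceScale(originalW, originalH);
  const box = outerBox(boxes);
  const w = Math.min((box.right - box.left) / zoom, originalW / scale);
  const h = Math.min((box.bottom - box.top) / zoom, originalH / scale);
  return { width: Math.round(w * scale), height: Math.round(h * scale) };
}

// The example from the text: a 2048x2048 original with a 100x500 outer box.
// At zoom=0.25, the 400x2000 crop area is clamped to 400x1024 on the
// 1024x1024 inference image, then scaled back by a factor of 2 to 800x2048.
cropArea(2048, 2048, [{ left: 0, top: 0, right: 100, bottom: 500 }], 0.25);
```

    <p>The same helpers reproduce the earlier downscaling example: a 125x2000 original yields a scale factor of about 1.95 and a 64x1024 inference image.</p>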
    <div>
      <h2>The architecture behind the earliest build</h2>
      <a href="#the-architecture-behind-the-earliest-build">
        
      </a>
    </div>
    <p>In the beta version, we rewrote the model using <a href="https://github.com/tensorflow/rust"><u>TensorFlow Rust</u></a> to make it compatible with our existing Rust-based stack. All of the computations for inference—where the model classifies and locates human faces—were executed on CPUs within our network.</p><p>Initially, this worked well and we saw near-realtime results.</p><p>However, the underlying limitations of our implementation became apparent when we started receiving consistent alerts that our underlying Images service was nearing its limits for memory usage. The increased memory usage didn’t line up with any recent deployments around this time, but a hunch led us to discover that the face cropping compute time graph had an uptick that matched the uptick in memory usage. Further tracing confirmed that AI face cropping was at the root of the problem.</p><p>When a service runs out of memory, it terminates its processes to free up memory and prevent the system from crashing. Since CPU-based implementations share RAM with other processes, this can potentially cause errors for other image optimization operations. In response, we switched our memory allocator from <a href="https://github.com/iromise/glibc"><u>glibc malloc</u></a> to <a href="https://github.com/jemalloc/jemalloc"><u>jemalloc</u></a>. This allowed us to use less memory at runtime, saving about 20 TiB of RAM globally. We also started culling the number of face cropping requests to limit CPU usage.</p><p>At this point, AI face cropping was already limited to our own internal uses and a small number of beta customers. These steps only temporarily reduced our memory consumption. They weren’t sufficient for handling global traffic, so we looked toward a more scalable design for long-term use.</p>
    <div>
      <h2>Doing more with less (memory)</h2>
      <a href="#doing-more-with-less-memory">
        
      </a>
    </div>
    <p>With memory usage alerts looming in the distance, it became clear that we needed to move to a GPU-based approach.</p><p>Unlike with CPUs, a GPU-based implementation avoids contention with other processes because memory access is typically dedicated and managed more tightly. We partnered with the <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> team, who created a framework for internal teams to integrate payloads into their model catalog for GPU access.</p><p>Some Workers AI models have their own standalone containers; this isn’t practical for every model, as routing traffic to multiple containers can be expensive. When using a GPU through Workers AI, the data needs to travel over the network, which can introduce latency. This is where model size is especially relevant, as network transport overhead becomes more noticeable with larger models.</p><p>To address this, Workers AI wraps smaller models in a single container and utilizes a latency-sensitive routing algorithm to identify the best instance to serve each payload. This means that models can be offloaded when there is no traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MGV5N7dnK9H5DleDm1gsf/d9888c3e6f1f9057d69509560967abfd/8.png" />
          </figure><p><sup><i>A scheduler is used to optimize how—and when—models in the same container interact with GPUs.</i></sup></p><p>RetinaFace runs on 1 GB of VRAM on the smallest GPU; it’s small enough that it can be hot swapped at runtime alongside similarly sized models. If there is a call for the RetinaFace model, then the Python code will be loaded into the environment and executed.</p><p>As expected, we saw a significant drop in memory usage after we moved the feature to Workers AI. Now, each instance of our Images service consumes about 150 MiB of memory.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/n8UuvaiShes8W19fn1d4Y/d73d3e5f7e030c5da602af8dba54ed18/9.png" />
          </figure><p>With this new approach, memory leaks pose less concern to the overall availability of our service. Workers AI executes models within containers, so they can be terminated and restarted as needed without impacting other processes. Since face cropping runs separately from our Images service, restarting it won’t halt our other image optimization operations.</p>
    <div>
      <h2>Applying AI face cropping to our blog</h2>
      <a href="#applying-ai-face-cropping-to-our-blog">
        
      </a>
    </div>
    <p>As part of our beta launch, we updated the <a href="https://blog.cloudflare.com/"><u>Cloudflare blog</u></a> to apply AI face cropping on author images.</p><p>Authors can submit their own images, which appear as circular profile pictures in both the main blog feed and individual blog posts. By default, CSS centers images within their containers, making off-centered head positions more obvious. When two profile pictures include different amounts of negative space, this can also lead to a visual imbalance where authors’ faces appear at different scales:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7K6ENlSUrQj3StJYbke65l/63ff7b00802377f8e7913907aef36dd8/10.png" />
          </figure><p><sup><i>AI face cropping makes posts with multiple authors appear more balanced.</i></sup></p><p>In the example above, Austin’s original image is cropped tightly around his face. On the other hand, Taylor’s original image includes his torso and a larger margin of the background. As a result, Austin’s face appears larger and closer to the center than Taylor’s does. After we applied AI face cropping to profile pictures on the blog, their faces appear more similar in size, creating more balance and cohesion on their co-authored post.</p>
    <div>
      <h2>A new era of image editing, now in Images</h2>
      <a href="#a-new-era-of-image-editing-now-in-images">
        
      </a>
    </div>
    <p>Many developers already use Images to build scalable media pipelines. Our goal is to accelerate image workflows by automating rote, manual tasks.</p><p>For the Images team, this is only the beginning. We plan to release new AI capabilities, including features like background removal and generative upscale. You can try AI face cropping for free by <a href="https://dash.cloudflare.com/?to=/:account/images/delivery-zones"><u>enabling transformations in the Images dashboard</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">5j8iAw1mBIHhVkaj0UcbSZ</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Diretnan Domnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improve your media pipelines with the Images binding for Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/improve-your-media-pipelines-with-the-images-binding-for-cloudflare-workers/</link>
            <pubDate>Thu, 03 Apr 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Media-rich applications require image and video pipelines that integrate seamlessly with the rest of your technology stack. Here’s how the Images binding enables you to build more flexible workflows. ]]></description>
            <content:encoded><![CDATA[ <p>When building a full-stack application, many developers spend a surprising amount of time trying to make sure that the various services they use can communicate and interact with each other. Media-rich applications require image and video pipelines that can integrate seamlessly with the rest of your technology stack.</p><p>With this in mind, we’re excited to introduce the <a href="https://developers.cloudflare.com/images/transform-images/bindings"><u>Images binding</u></a>, a way to connect the <a href="https://developers.cloudflare.com/images/transform-images/transform-via-workers/"><u>Images API</u></a> directly to your <a href="https://developers.cloudflare.com/workers/"><u>Worker</u></a> and enable new, programmatic workflows. The binding removes unnecessary friction from application development by allowing you to transform, overlay, and encode images within the Cloudflare Developer Platform ecosystem.</p><p>In this post, we’ll explain how the Images binding works, as well as the decisions behind <a href="https://developers.cloudflare.com/workers/local-development/"><u>local development support</u></a>. We’ll also walk through an example app that watermarks and encodes a user-uploaded image, then uploads the output directly to an <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a> bucket.</p>
    <div>
      <h2>The challenges of <code>fetch()</code></h2>
      <a href="#the-challenges-of-fetch">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a> was designed to help developers build scalable, cost-effective, and reliable image pipelines. You can deliver multiple copies of an image — each resized, manipulated, and encoded based on your needs. Only the original image needs to be stored; different versions are generated dynamically, or as requested by a user’s browser, then subsequently served from cache.</p><p>With Images, you have the flexibility to <a href="https://developers.cloudflare.com/images/transform-images/"><u>transform images</u></a> that are stored outside the Images product. Previously, the Images API was based on the <code>fetch()</code> method, which posed three challenges for developers:</p><p>First, when transforming a remote image, the original image must be retrieved from a URL. This isn’t applicable for every scenario, like resizing and compressing images as users upload them from their local machine to your app. We wanted to extend the Images API to broader use cases where images might not be accessible from a URL.</p><p>Second, the optimization operation — the changes you want to make to an image, like resizing it — is coupled with the delivery operation. If you wanted to crop an image, watermark it, and then resize the watermarked image, you’d need to serve one transformation to the browser, retrieve the output URL, and transform it again. This adds overhead to your code, and can be tedious and inefficient to maintain. Decoupling these operations means that you no longer need to manage multiple requests for consecutive transformations.</p><p>Third, optimization parameters — the way that you specify how an image should be manipulated — follow a fixed order. For example, cropping is performed before resizing. 
It’s difficult to build a flow that doesn’t align with the established hierarchy — like resizing first, then cropping — without a lot of time, trial, and effort.</p><p>But complex workflows shouldn’t require complex logic. In February, we <a href="https://developers.cloudflare.com/changelog/2025-02-21-images-bindings-in-workers/"><u>released the Images binding in Workers</u></a> to make the development experience more accessible, intuitive, and user-friendly. The binding helps you work more productively by simplifying how you connect the Images API to your Worker and providing more fine-grained control over how images are optimized.</p>
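    <p>To make the earlier <code>fetch()</code>-based flow concrete, here is a minimal sketch. The helper name is ours for illustration; <code>cf.image</code> is the documented request property for URL-based transformations in Workers:</p>

```javascript
// With the fetch()-based Images API, transformation options travel in the
// cf.image property of the request. This helper (an illustrative name, not
// part of any Cloudflare API) builds the options for a simple crop.
function cropOptions(width, height) {
  return { cf: { image: { fit: "crop", width, height } } };
}

// In a Worker, chaining operations meant serving one transformation, then
// fetching its output URL with a second set of options, e.g.:
//   const cropped = await fetch(imageURL, cropOptions(800, 600));
//   const resized = await fetch(croppedOutputURL, cropOptions(400, 300));
```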
    <div>
      <h2>Extending the Images workflow</h2>
      <a href="#extending-the-images-workflow">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/424FXX9vM9cYlIfLMGUk5Z/e2db32589a3ded75801909ab4611747a/image1.png" />
          </figure><p><sup><i>Since optimization parameters follow a fixed order, we’d need to output the image to resize it after watermarking. The binding eliminates this step.</i></sup></p><p><a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Bindings</u></a> connect your Workers to external resources on the Developer Platform, allowing you to manage interactions between services in a few lines of code. When you bind the Images API to your Worker, you can create more flexible, programmatic workflows to transform, resize, and encode your images — without requiring them to be accessible from a URL.</p><p>Within a Worker, the Images binding supports the following functions:</p><ul><li><p><code>.transform()</code>: Accepts optimization parameters that specify how an image should be manipulated.</p></li><li><p><code>.draw()</code>: Overlays a second image on top of the original image. The overlaid image can be optimized through a child <code>transform()</code> function.</p></li><li><p><code>.output()</code>: Defines the output format for the transformed image.</p></li><li><p><code>.info()</code>: Returns information about the original image, like its format, file size, and dimensions.</p></li></ul>
    <div>
      <h2>The life of a binding request</h2>
      <a href="#the-life-of-a-binding-request">
        
      </a>
    </div>
    <p>At a high level, a binding works by establishing a communication channel between a Worker and the binding’s backend services.</p><p>To do this, the Workers runtime needs to know exactly which objects to construct when the Worker is instantiated. Our control plane layer translates between a given Worker’s code and each binding’s backend services. When a developer runs <code>wrangler deploy</code>, any invoked bindings are converted into a dependency graph. This describes the objects and their dependencies that will be injected into the <code>env</code> of the Worker when it runs. Then, the runtime loads the graph, builds the objects, and runs the Worker.</p><p>In most cases, the binding makes a remote procedure call to the backend services of the binding. The mechanism that makes this call must be constructed and injected into the binding object; for Images, this is implemented as a JavaScript wrapper object that makes HTTP calls to the Images API.</p><p>These calls contain the sequence of operations that are required to build the final image, represented as a tree structure. Each <code>.transform()</code> function adds a new node to the tree, describing the operations that should be performed on the image. The <code>.draw()</code> function adds a subtree, where child <code>.transform()</code> functions create additional nodes that represent the operations required to build the overlay image. When <code>.output()</code> is called, the tree is flattened into a list of operations; this list, along with the input image itself, is sent to the backend of the Images binding.</p><p>For example, let’s say we had the following commands:</p>
            <pre><code>env.IMAGES.input(image)
  .transform({rotate:90})
  .draw(
    env.IMAGES.input(watermark)
      .transform({width:32})
  )
  .transform({blur:5})
  .output({format:"image/png"})</code></pre>
            <p>Put together, the request would look something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/495j0HjS1lIxaKY7Dnyf67/bd80e9a4bf277313e90ade13df2f9870/image2.png" />
          </figure><p>To communicate with the backend, we chose to send multipart forms. Each binding request is inherently expensive, as it can involve decoding, transforming, and encoding. Binary formats may offer slightly lower overhead per request, but given the bulk of the work in each request is the image processing itself, any gains would be nominal. Instead, we stuck with a well-supported, safe approach that our team had successfully implemented in the past.</p>
    <div>
      <h2>Meeting developers where they are</h2>
      <a href="#meeting-developers-where-they-are">
        
      </a>
    </div>
    <p>Beyond the core capabilities of the binding, we knew that we needed to consider the entire developer lifecycle. The ability to test, debug, and iterate is a crucial part of the development process.</p><p>Developers won’t use what they can’t test; they need to be able to validate exactly how image optimization will affect the user experience and performance of their application. That’s why we made the Images binding available in local development without incurring any usage charges.</p><p>As we scoped out this feature, we reached a crossroads in deciding how we wanted the binding to work when developing locally. At first, we considered making requests to our production backend services for both unit and end-to-end testing. This would require open-sourcing the components of the binding and building them for all Wrangler-supported platforms and Node versions.</p><p>Instead, we focused our efforts on targeting individual use cases by providing two different methods. In <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>, Cloudflare’s command-line tool, developers can choose between an online and offline mode of the Images binding. The online mode makes requests to the real Images API; this requires Internet access and authentication to the Cloudflare API. Meanwhile, the offline mode uses a lower-fidelity <a href="https://testing.googleblog.com/2013/06/testing-on-toilet-fake-your-way-to.html"><u>fake</u></a>, which is a mock API implementation that supports a limited subset of features. This is primarily used for <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/"><u>unit tests</u></a>, as it doesn’t require Internet access or authentication. By default, <code>wrangler dev</code> uses the online mode, mirroring the same version that Cloudflare runs in production.</p>
    <div>
      <h2>See the binding in action</h2>
      <a href="#see-the-binding-in-action">
        
      </a>
    </div>
    <p>Let’s look at an <a href="https://developers.cloudflare.com/images/tutorials/optimize-user-uploaded-image/"><u>example app</u></a> that transforms a user-uploaded image, then uploads it directly to an R2 bucket.</p><p>To start, we <a href="https://developers.cloudflare.com/learning-paths/workers/get-started/first-worker/"><u>created a Worker application</u></a> and configured our <code>wrangler.toml</code> file to add the Images, R2, and assets bindings:</p>
            <pre><code>[images]
binding = "IMAGES"

[[r2_buckets]]
binding = "R2"
bucket_name = "&lt;BUCKET&gt;"

[assets]
directory = "./&lt;DIRECTORY&gt;"
binding = "ASSETS"</code></pre>
            <p>In our Worker project, the assets directory contains the image that we want to use as our watermark.</p><p>Our frontend has a <code>&lt;form&gt;</code> element that accepts image uploads:</p>
            <pre><code>const html = `
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;title&gt;Upload Image&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;Upload an image&lt;/h1&gt;
    &lt;form method="POST" enctype="multipart/form-data"&gt;
      &lt;input type="file" name="image" accept="image/*" required /&gt;
      &lt;button type="submit"&gt;Upload&lt;/button&gt;
    &lt;/form&gt;
  &lt;/body&gt;
&lt;/html&gt;
`;

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      // This is called when the user submits the form
    }
  }
};</code></pre>
            <p>Next, we set up our Worker to handle the optimization.</p><p>The user will upload images directly through the browser; since there isn’t an existing image URL, we won’t be able to use <code>fetch()</code> to get the uploaded image. Instead, we can transform the uploaded image directly, operating on its body as a stream of bytes.</p><p>Once we read the image, we can manipulate it. Here, we apply our watermark and encode the image to AVIF before uploading the transformed image to our R2 bucket: </p>
            <pre><code>function assetUrl(request, path) {
  const url = new URL(request.url);
  url.pathname = path;
  return url;
}

export default {
  async fetch(request, env) {
    if (request.method === "GET") {
      return new Response(html, { headers: { "Content-Type": "text/html" } });
    }
    if (request.method === "POST") {
      try {
        // Parse the form data
        const formData = await request.formData();
        const file = formData.get("image");
        if (!file || typeof file.arrayBuffer !== "function") {
          return new Response("No image file provided", { status: 400 });
        }

        // Read the uploaded image as an array buffer
        const fileBuffer = await file.arrayBuffer();

        // Fetch the watermark image from static assets
        const watermarkStream = (await env.ASSETS.fetch(assetUrl(request, "watermark.png"))).body;

        // Apply the watermark and convert to AVIF
        const imageResponse = (
          await env.IMAGES.input(fileBuffer)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkStream)
                .transform({ width: 100, height: 100 }),
              { bottom: 10, right: 10, opacity: 0.75 }
            )
            // Output the final image as AVIF
            .output({ format: "image/avif" })
        ).response();

        // Add a timestamp to the file name
        const fileName = `image-${Date.now()}.avif`;

        // Upload to R2
        await env.R2.put(fileName, imageResponse.body);

        return new Response(`Image uploaded successfully as ${fileName}`, { status: 200 });
      } catch (err) {
        console.log(err.message);
        return new Response("Image processing failed", { status: 500 });
      }
    }
    return new Response("Method not allowed", { status: 405 });
  }
};</code></pre>
            <p>We’ve also created a <a href="https://developers.cloudflare.com/images/examples/"><u>gallery</u></a> in our documentation to demonstrate ways that you can use the Images binding. For example, you can <a href="https://developers.cloudflare.com/images/examples/transcode-from-workers-ai"><u>transcode images from Workers AI</u></a> or <a href="https://developers.cloudflare.com/images/examples/watermark-from-kv"><u>draw a watermark from KV</u></a> on an image that is stored in R2.</p><p>Looking ahead, the Images binding unlocks many exciting possibilities to seamlessly transform and manipulate images directly in Workers. We aim to create an even deeper connection between all the primitives that developers use to build AI and full-stack applications.</p><p>Have some feedback for this release? Let us know in the <a href="https://community.cloudflare.com/c/developers/images/63"><u>Community</u></a> forum.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <guid isPermaLink="false">PKC5RU7wcrNRfwoLnBjZX</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Nick Skehin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Preserving content provenance by integrating Content Credentials into Cloudflare Images]]></title>
            <link>https://blog.cloudflare.com/preserve-content-credentials-with-cloudflare-images/</link>
            <pubDate>Mon, 03 Feb 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Media and journalism companies can now build end-to-end workflows that support Content Credentials by using Cloudflare Images.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are thrilled to announce the integration of the <a href="https://c2pa.org/about/"><u>Coalition for Content Provenance and Authenticity (C2PA)</u></a> provenance standard into Cloudflare Images. Content creators and publishers can seamlessly preserve the entire provenance chain — from how an image was created and by whom, to every subsequent edit — across the Cloudflare network.</p>
    <div>
      <h3>What is the C2PA and the Content Authenticity Initiative?</h3>
      <a href="#what-is-the-c2pa-and-the-content-authenticity-initiative">
        
      </a>
    </div>
    <p>When you hear the word provenance, you might have flashbacks to your high school Art History class. In that context, it means that the artwork you see at the <a href="https://www.metmuseum.org/"><u>Met</u></a> in New York really came from the artist in question and isn’t a fake. Its provenance is how that piece of physical art changed possession over time, from the original artist all the way to the museum. </p><p>Digital content provenance builds upon this concept. It helps you understand how a piece of digital media — images, videos, PDFs, and more — was created and subsequently edited. The provenance of a photo I posted on Instagram might look like this: I took the picture with my iPhone, performed an auto-magic edit using Apple Photos’ editing tools, uploaded it to Instagram, cropped it using Instagram’s editing tools, and then posted it. </p><p>Why does digital content provenance matter? At a fundamental level, it’s an important way to give content creators credit for their work. Many photographers have had the experience of seeing their photograph or video go viral online, but with their name and attribution stripped away. In that scenario, the opportunities that might have accrued to the creator once the world saw their work don’t materialize. If you help ensure an artist or content creator gets credit for their work, that exposure could result in more career opportunities. </p><p>Digital content provenance can also be an important tool in understanding the world around us. If you see a video or a photo of a newsworthy event, you’d like to know if that photo was really taken at that particular location, or if it was from years prior at a different location. If you see a grainy picture of a UFO flying over New Jersey, knowing when and where that photo was taken is helpful information in understanding what is actually happening. 
</p><p>The C2PA is a project of the non-profit <a href="https://jointdevelopment.org/"><u>Joint Development Foundation</u></a> and has developed technical specifications for attaching digital content provenance to a piece of media in the form of a JSON manifest. The standards also specify how to cryptographically sign that manifest, thereby allowing anyone to verify that the manifest hasn't been tampered with. The JSON manifests and their associated signatures are together referred to as <a href="https://contentcredentials.org/"><b><u>Content Credentials</u></b></a>.</p><p>The Adobe-led <a href="https://contentauthenticity.org/"><u>Content Authenticity Initiative</u></a>, which has thousands of members across a variety of industries, aims to drive global adoption of Content Credentials.</p>
    <div>
      <h3>Why integrate Content Credentials into Cloudflare Images?</h3>
      <a href="#why-integrate-content-credentials-into-cloudflare-images">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/developer-platform/products/cloudflare-images/"><u>Cloudflare Images</u></a> allows you to build an effortlessly scalable and cost-effective image pipeline. With our new Content Credentials integration, you can now preserve existing Content Credentials, ensuring they remain intact from creation all the way to end-user delivery.</p><p>Many media organizations across the globe, such as the BBC, the New York Times, and Dow Jones, are members of the <a href="https://contentauthenticity.org/"><u>Content Authenticity Initiative</u></a>. Imagine one of these news organizations wanted to include the Content Credentials in their photojournalists’ photos and allow anyone to verify their provenance. Before now, even if the news organization were using a C2PA-compliant <a href="https://www.nikonusa.com/press-room/nikon-develops-firmware-that-adds-function-compliant-with-cp2a-standards-to-z6iii"><u>camera</u></a> and <a href="https://helpx.adobe.com/lightroom-cc/using/content-credentials.html"><u>editing flow</u></a>, these credentials would frequently be stripped when the image was transformed by their <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/"><u>CDN</u></a>.</p><p>If you use Cloudflare, that is now a solved problem. In Cloudflare Images, you can now preserve Content Credentials when transforming images from remote sources. Enabling this integration will retain any existing Content Credentials that are embedded in the image.</p><p>When you use Images to resize your images or change their file format, these transformations will be cryptographically signed by Cloudflare. This ensures, for example, that the end-user who sees the photograph on your website can use an open-source verification service such as <a href="https://contentcredentials.org/verify"><u>contentcredentials.org/verify</u></a> to verify the full provenance chain.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Imagine you are a photojournalist using <a href="https://www.nikonusa.com/press-room/nikon-develops-firmware-that-adds-function-compliant-with-cp2a-standards-to-z6iii"><u>a Nikon camera that has C2PA-compliant signing</u></a>. You could opt to attach Content Credentials to your photo, identifying key elements of the photograph such as the camera model, the original image size, and aperture settings.</p><p>Below is a simplified example of what a <a href="https://c2pa.org/specifications/specifications/2.1/index.html"><u>C2PA-compliant</u></a> Content Credential for a photograph taken with that Nikon camera could look like.</p><p>Content Credentials are stored using <a href="https://www.iso.org/standard/84635.html"><u>JUMBF</u></a> (JPEG Universal Metadata Box Format), which serves as a standardized container format for embedding metadata within files. You can think of it as an envelope system that packages together both the data about where a piece of digital content came from and how it changed, and the cryptographic signatures that can be used to verify that data.</p><p>The assertions, or facts about the content provenance, are typically written in JSON for a better developer experience. Note that this example deliberately simplifies the <a href="https://c2pa.org/specifications/specifications/1.0/specs/C2PA_Specification.html#_use_of_jumbf"><u>JUMBF box nesting</u></a> and adds comments to make it easier to follow.</p>
            <pre><code>{
  "jumbf": {
    "c2pa.manifest": {
      "claim_generator": "Nikon Z9 Firmware v1.2",
      "assertions": [
        {
          "label": "c2pa.actions",
          "data": {
            "actions": [
              {
                "action": "c2pa.captured",
                "when": "2025-01-10T12:00:00Z",
                "softwareAgent": "Nikon Z9",
                "parameters": {
                  "captureDevice": "NIKON Z9",
                  "serialNumber": "7DX12345",
                  "exposure": "1/250",
                  "aperture": "f/2.8",
                  "iso": 100,
                  "focalLength": "70mm"
                }
              }
            ]
          }
        }
      ],
      "signature_info": {
        "issuer": "Nikon",
        "time": "2025-01-10T12:00:00Z",
        "cert_fingerprint": "01234567890abcdef"
      },
      "claim_metadata": {
        "claim_id": "nikon_z9_123"
      }
    }
  }
}</code></pre>
            <p>Now imagine that you want to use this photograph on your website. </p><p>If you’ve enabled the Preserve Content Credentials setting in Cloudflare, then that metadata is now preserved in Cloudflare Images. </p><p>If you use Cloudflare Images to dynamically resize or transform this image, then Cloudflare automatically appends and cryptographically signs any additional actions in that same manifest. Below we show what the new Content Credentials could look like. </p>
            <pre><code>{
  "jumbf": {
    // Original Nikon manifest
    "c2pa.manifest.nikon": {
      /* unchanged */
    },

    // New Cloudflare manifest
    "c2pa.manifest.cloudflare": {
      "claim_generator": "Cloudflare Images",
      "assertions": [
        {
          "label": "c2pa.actions",
          "data": {
            "actions": [
              {
                "action": "c2pa.resized",
                "when": "2025-01-10T12:05:00Z",
                "softwareAgent": "Cloudflare Images",
                "parameters": {
                  "originalDimensions": {
                    "width": 8256,
                    "height": 5504
                  },
                  "newDimensions": {
                    "width": 800,
                    "height": 533
                  }
                }
              }
            ]
          }
        }
      ],
      "signature_info": {
        "issuer": "Cloudflare, Inc",
        "time": "2025-01-10T12:05:00Z",
        "cert_fingerprint": "fedcba9876543210"
      },
      "claim_metadata": {
        "claim_id": "cf_resize_123",
        "parent_claim_id": "nikon_z9_123"
      }
    }
  }
}</code></pre>
            <p>In this example, the <code>c2pa.resized</code> entry describes a non-destructive transformation from one set of dimensions to another. This is included as a separate, independent assertion about this particular photograph.</p><p>Notice how there are two cryptographic signatures in this manifest, each referenced by <code>signature_info</code>. Since two entities were involved in producing this example image, Nikon for the image’s creation and Cloudflare for resizing it, both independently signed their respective assertions about the content provenance.</p><p>In this example, the signature reference looks like this:</p>
            <pre><code>"signature_info": {
  "issuer": "Cloudflare, Inc",
  "time": "2025-01-10T12:05:00Z",
  "cert_fingerprint": "fedcba9876543210"
}</code></pre>
            <p>During the creation, editing, and resizing of a piece of digital content, a unique hash of metadata is created for each action and then signed using a private key. The signature, along with the signer’s public certificate or a reference to it, is contained in the JUMBF container as referenced by this JSON.</p><p>These hashes and signatures allow any open source verification tool to recalculate the hash, validate it against the signature, and check the certificate chain to ensure trustworthiness for each action taken on the image. This is what is meant by Content Credentials being tamper-evident: if any of these hashes and signatures fail to validate, the metadata has been tampered with.</p><p>Each cryptographic signature is part of a <a href="https://opensource.contentauthenticity.org/docs/verify-known-cert-list/"><u>Trust List</u></a>, allowing anyone to verify the provenance chain across various entities, such as from a camera manufacturer to photo editing software to distribution across Cloudflare. <a href="https://opensource.contentauthenticity.org/docs/manifest/signing-manifests"><u>More from the Content Authenticity Initiative</u></a>:</p><blockquote><p><i>Trust lists connect the end-entity certificate that signed a manifest back to the originating root CA. This is accomplished by supplying the subordinate public X.509 certificates forming the trust chain (the public X.509 certificate chain).</i></p></blockquote><p>In order for Cloudflare to append any transformations to the Content Credentials, we needed to have a publicly available end-entity certificate and join this Trust List. We used DigiCert for our end-entity certificate and reference it in the JSON manifests that we are now creating in production:</p>
            <pre><code>"signature_info": {
  "alg": "sha256",
  "issuer": "Cloudflare, Inc",
  "cert_serial_number": "073E9F61ADE599BE128B02EDC5BD2BDE",
  "time": "2024-01-06T22:42:36+00:00"
}</code></pre>
            <p>The end result is that news organizations, journalists, and content companies can now create an auditable chain of digital provenance whose claims can be verified using public-key cryptography.  </p>
    <div>
      <h3>Let’s see an example</h3>
      <a href="#lets-see-an-example">
        
      </a>
    </div>
    <p>Earlier this year, OpenAI announced their <a href="https://openai.com/index/understanding-the-source-of-what-we-see-and-hear-online/"><u>support for including Content Credentials in DALL-E</u></a>. I recently created an image in DALL-E (with ski season on my mind).</p><p>Cloudflare Images allows you to transform any image <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>by URL</u></a>. You can do so by simply changing the URL structure using this syntax: </p>
            <pre><code>https://&lt;ZONE&gt;/cdn-cgi/image/&lt;OPTIONS&gt;/&lt;SOURCE-IMAGE&gt;</code></pre>
            <p>We can break down each of these parameters:</p><ul><li><p><code>ZONE</code> is your particular domain.</p></li><li><p><code>cdn-cgi/image</code> is a fixed prefix that identifies this as a special path handled by a built-in Worker.</p></li><li><p>The <code>OPTIONS</code> parameter allows you to transform the image — rotating it, changing the width, compressing it, and more.</p></li><li><p><code>SOURCE-IMAGE</code> is the URL where your image is currently hosted.</p></li></ul><p>To tie these together, I can build a URL that changes the width, quality, and format of the image I created in DALL-E so I can display it on my personal website. After uploading the image from DALL-E to one of my R2 buckets, I can create this URL:</p>
            <pre><code>https://williamallen.com/cdn-cgi/image/width=1000,quality=75,format=webp/https://pub-3d2658f6f7004dc38a4dd6be147b6a86.r2.dev/dalle.webp</code></pre>
            <p>Anyone can now verify its provenance using the <a href="https://contentcredentials.org/verify?source=https://williamallen.com/cdn-cgi/image/width=1000,quality=75,format=webp/https://pub-3d2658f6f7004dc38a4dd6be147b6a86.r2.dev/dalle.webp"><u>Content Credentials Verify</u></a> tool to see the result. The provenance chain is fully intact, even after using the Cloudflare Images transformation shown above to resize the image. </p>
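<p>If you generate many of these URLs, the structure is simple enough to compose programmatically. The helper below is hypothetical (it is not part of any Cloudflare SDK), but the URL it produces follows the documented syntax above.</p>

```javascript
// Hypothetical helper: builds a /cdn-cgi/image/ transformation URL from
// a zone, an options object, and a source image URL.
function transformUrl(zone, options, sourceImage) {
  // OPTIONS is a comma-separated list of key=value pairs
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `https://${zone}/cdn-cgi/image/${opts}/${sourceImage}`;
}

const url = transformUrl(
  "williamallen.com",
  { width: 1000, quality: 75, format: "webp" },
  "https://pub-3d2658f6f7004dc38a4dd6be147b6a86.r2.dev/dalle.webp"
);
console.log(url);
```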
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/p3qbySgWdTzk3IJTevbiU/bae7969708006369a70f8e6cfd7ae2bf/Screenshot_2025-01-29_at_19.00.56.png" />
          </figure><p>There are numerous open source command line tools that allow you to explore the full details of the Content Credentials. <a href="https://opensource.contentauthenticity.org/docs/c2patool/"><u>The C2PA Tool</u></a> is created and maintained by the Content Authenticity Initiative. You can read more about the <a href="https://opensource.contentauthenticity.org/docs/c2patool/docs/usage"><u>tool here</u></a> and view the source code for it on <a href="https://github.com/contentauth/c2pa-rs/tree/main/cli"><u>GitHub</u></a>.</p><p>There are two ways to <a href="https://github.com/contentauth/c2pa-rs/tree/main/cli#installation"><u>install the tool</u></a>: through a pre-built binary executable, or using <a href="https://lib.rs/crates/cargo-binstall"><u>Cargo Binstall</u></a> if you have already installed Rust. Once installed, the C2PA Tool uses this syntax in your command line: </p>
            <pre><code>c2patool [OPTIONS] &lt;PATH&gt; [COMMAND]</code></pre>
            <p>If I navigate to the <a href="https://williamallen.com/cdn-cgi/image/width=1000,quality=75/https://pub-3d2658f6f7004dc38a4dd6be147b6a86.r2.dev/dalle.webp"><u>link of the image</u></a> in my browser and save it to my Downloads folder on my Mac, I simply need to use the <code>-d</code> flag (short for <code>--detailed</code>) to see the full details of the JSON manifest. Of course, you should change <i>yourusername</i> to your actual Mac username.</p>
            <pre><code>c2patool /Users/yourusername/Downloads/dalle.webp -d</code></pre>
            <p>And if you wanted to output this to a JSON file that you can review in VSCode or Cursor, use this command instead:</p>
            <pre><code>c2patool /Users/yourusername/Downloads/dalle.webp -d &gt; manifest.json</code></pre>
            <p>This allows you to not just trust, but verify, the details of the image transformation yourself. </p>
    <div>
      <h3>How to start using Cloudflare Images with Content Credentials </h3>
      <a href="#how-to-start-using-cloudflare-images-with-content-credentials">
        
      </a>
    </div>
    <p>It’s straightforward to start preserving Content Credentials. Log in to the Cloudflare dashboard and navigate to Images. From there, choose Transformations, select the zone where you want to enable this feature, and toggle this option on:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6R24DfomWKBsb5SKMFmZU7/99049bd789ea89b4869bef9c8f0040f2/image2.png" />
          </figure><p>If the images you are transforming do not contain any Content Credentials, no action is taken. But if they do, we preserve those Content Credentials and attest to any transformations. </p>
    <div>
      <h3>Looking ahead</h3>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>We are excited to continue to partner with Adobe and many other organizations to extend support for preserving Content Credentials across our products and services. If you are interested in learning more, we’d love to hear from you: I’m <a href="https://x.com/williamallen"><u>@williamallen</u></a> on X or on <a href="https://www.linkedin.com/in/williamallen2050/"><u>LinkedIn</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Image Storage]]></category>
            <guid isPermaLink="false">6ARFFVfePEkB83ZjH6H8vg</guid>
            <dc:creator>Will Allen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Builder Day 2024: 18 big updates to the Workers platform]]></title>
            <link>https://blog.cloudflare.com/builder-day-2024-announcements/</link>
            <pubDate>Thu, 26 Sep 2024 21:00:00 GMT</pubDate>
            <description><![CDATA[ To celebrate Builder Day 2024, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. This includes new capabilities, like running evals with AI Gateway, beta  ]]></description>
            <content:encoded><![CDATA[ <p>To celebrate <a href="https://builderday.pages.dev/"><u>Builder Day 2024</u></a>, we’re shipping 18 updates inspired by direct feedback from developers building on Cloudflare. Choosing a platform isn't just about current technologies and services — it's about betting on a partner that will evolve with your needs as your project grows and the tech landscape shifts. We’re in it for the long haul with you.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div>
  
</div><p><b>Starting today, you can:</b></p><ul><li><p><a href="#logs-for-every-worker">Persist logs from your Worker and query them directly on the Cloudflare dashboard</a></p></li><li><p><a href="#connect-to-private-databases-from-workers">Connect your Worker to private databases (isolated in VPCs) using Hyperdrive</a></p></li><li><p><a href="#improved-node.js-compatibility-is-now-ga">Use a wider set of NPM packages on Cloudflare Workers, via improved Node.js compatibility</a></p></li><li><p><a href="#cloudflare-joins-opennext">Deploy Next.js apps that use the Node.js runtime to Cloudflare, via OpenNext</a></p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/">Run Evals with AI Gateway, now in Open Beta</a></p></li><li><p><a href="https://blog.cloudflare.com/sqlite-in-durable-objects">Read from and write to SQLite with zero-latency from every Durable Object</a></p></li></ul><p><b>We’ve brought key features from </b><a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><b><u>Pages to Workers</u></b></a><b>, allowing you to: </b></p><ul><li><p><a href="#static-asset-hosting">Upload and serve static assets as part of your Worker, and use popular frameworks with Workers</a></p></li><li><p><a href="#continuous-integration-and-delivery">Automatically build and deploy each pull request to your Worker’s git repository</a></p></li><li><p><a href="#workers-preview-urls">Get back a preview deployment URL for each version of your Worker</a></p></li></ul><p><b>Four things are going GA and are officially production-ready:</b></p><ul><li><p><a href="#gradual-deployments">Gradual Deployments</a>: Deploy changes to your Worker gradually, on a percentage basis of traffic</p></li><li><p><a href="#queues-is-ga">Cloudflare Queues</a><b>:</b> Now with much higher throughput and concurrency limits</p></li><li><p><a href="#event-notifications-for-r2-is-now-ga">R2 Event Notifications</a><b>:</b> Tightly integrated with 
Queues for event-driven applications</p></li><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster/">Vectorize</a>: Globally distributed vector database, now faster, with larger indexes, and new pricing</p></li></ul><p><b>The Workers platform is getting faster:</b></p><ul><li><p><a href="https://blog.cloudflare.com/faster-workers-kv">We made Workers KV up to 3x faster.</a> Which makes serving static assets from Workers and Pages faster!</p></li><li><p><a href="https://blog.cloudflare.com/making-workers-ai-faster/">Workers AI now has much faster Time-to-First-Token (TTFT)</a>, backed by more powerful GPUs</p></li></ul><p><b>And we’re lowering the cost of building on Cloudflare:</b></p><ul><li><p><a href="#removing-serverless-microservices-tax">Requests made through Service Bindings and to Tail Workers are now free</a></p></li><li><p><a href="#image-optimization-free-for-everyone">Cloudflare Images is introducing a free tier for everyone with a Cloudflare account</a></p></li><li><p>We’ve <a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster">simplified Workers AI pricing</a> to use industry standard units of measure</p></li></ul><p>Everything in this post is available for you to use today. Keep reading to learn more, and watch the <a href="https://cloudflare.tv/event/builder-day-live-stream/xvm4qdgm"><u>Builder Day Live Stream</u></a> for demos and more.</p><h2>Persistent Logs for every Worker</h2><p>Starting today in open beta, you can automatically retain logs from your Worker, with full search, query, and filtering capabilities available directly within the Cloudflare dashboard. All newly created Workers will have this setting automatically enabled. 
This marks the first step in the development of our observability platform, following <a href="https://blog.cloudflare.com/cloudflare-acquires-baselime-expands-observability-capabilities/"><u>Cloudflare’s acquisition of Baselime</u></a>.</p><p>Getting started is easy – just add two lines to your Worker’s <code>wrangler.toml</code> and redeploy:</p>
            <pre><code>[observability]
enabled = true
</code></pre>
            <p>Workers Logs lets you view all logs emitted from your Worker. When enabled, each <code>console.log</code> message, error, and exception is published as a separate event. Every Worker invocation (requests, alarms, RPC calls, and so on) also publishes an enriched execution log that contains invocation metadata. You can view logs in the <code>Logs</code> tab of your Worker in the dashboard, where you can filter on any event field, such as time, error code, message, or your own custom fields.</p>
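<p>As a minimal sketch, every <code>console.log</code> call in a handler like this one becomes its own queryable event once observability is enabled (the route and messages here are illustrative):</p>

```javascript
// Minimal Worker sketch: each console.log / console.error call below is
// published to Workers Logs as a separate event, alongside the enriched
// execution log for the invocation itself.
const worker = {
  async fetch(request) {
    const path = new URL(request.url).pathname;
    console.log("incoming request", path);

    if (path === "/fail") {
      // Errors are captured as events too, and can be filtered on
      console.error("simulated failure for", path);
      return new Response("error", { status: 500 });
    }
    return new Response("ok");
  },
};

// In a real project this object would be the module's default export:
// export default worker;
```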
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rPKtYlXEgN1u8utUuXxJR/c2fc4dcff2a7574d8ad9f92edbe867fe/image2.png" />
          </figure><p>If you’ve ever had to piece together the puzzle of unusual metrics, such as a spike in errors or latency, you know how frustrating it is to connect metrics to traces and logs that often live in independent data silos. Workers Logs is the first piece of a new observability platform we are building that helps you easily correlate telemetry data and surfaces insights to help you <i>understand</i>. We’ll structure your telemetry data so you have the full context to ask the right questions, and can quickly and easily analyze the behavior of your applications. This is just the beginning for observability tools for Workers. We are already working on automatically emitting distributed traces from Workers, with real-time errors and wide, high-dimensionality events coming soon as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XiRuQjqzVEld2eCIVVHPh/7c8938479e1f254699487dfe23caade4/Screenshot_2024-09-25_at_3.06.00_PM.png" />
          </figure><p>Starting November 1, 2024, Workers Logs will cost $0.60 per million log lines written after the included volume, as shown in the table below. Querying your logs is free. This makes it easy to estimate and forecast your costs — we think you shouldn’t have to calculate the number of ‘Gigabytes Ingested’ to understand what you’ll pay.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Included Volume</span></td>
    <td><span>200,000 logs per day</span></td>
    <td><span>20,000,000 logs per month</span></td>
  </tr>
  <tr>
    <td><span>Additional Events</span></td>
    <td><span>N/A</span></td>
    <td><span>$0.60 per million logs</span></td>
  </tr>
  <tr>
    <td><span>Retention</span></td>
    <td><span>3 days</span></td>
    <td><span>7 days</span></td>
  </tr>
</tbody></table></div><p>Try out Workers Logs today. You can learn more from our <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>developer documentation</u></a>, and give us feedback directly in the #workers-observability channel on <a href="https://discord.cloudflare.com/"><u>Discord</u></a>.</p><h2>Connect to private databases from Workers</h2><p>Starting today, you can use <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>, <a href="https://www.cloudflare.com/en-ca/products/tunnel/"><u>Cloudflare Tunnels</u></a>, and <a href="https://www.cloudflare.com/zero-trust/products/access/"><u>Access</u></a> together to securely connect to databases that are isolated in a private network.</p><p><a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> enables you to build on Workers with your existing regional databases. It accelerates database queries using Cloudflare’s network, caching data close to end users and pooling connections close to the database. But there’s been a major blocker preventing you from building with Hyperdrive: network isolation.</p><p>The majority of databases today aren’t publicly accessible on the Internet. Data is highly sensitive, and placing databases within private networks like a <a href="https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud/"><u>virtual private cloud (VPC)</u></a> keeps data secure. But to date, that has also meant that your data is held captive within your cloud provider, preventing you from building on Workers.</p><p>Today, we’re enabling Hyperdrive to <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database/"><u>securely connect to private databases</u></a> using <a href="https://www.cloudflare.com/en-ca/products/tunnel/"><u>Cloudflare Tunnels</u></a> and <a href="https://www.cloudflare.com/zero-trust/products/access/"><u>Cloudflare Access</u></a>. 
With a Cloudflare Tunnel running in your private network, Hyperdrive can securely connect to your database and start speeding up your queries.</p>
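<p>On the Worker side, the change is transparent: a Hyperdrive binding exposes a pooled connection string that your database driver connects with, and with a Tunnel configured, that string now reaches into your private network. Here’s a hedged sketch; the binding name <code>HYPERDRIVE</code> and the <code>DATABASE_URL</code> fallback are our own illustrative choices:</p>

```javascript
// Illustrative helper: resolve the connection string a database driver
// (for example "pg") should use. env.HYPERDRIVE is an assumed binding name
// from wrangler.toml; DATABASE_URL is a hypothetical fallback for local dev.
function resolveConnectionString(env) {
  // A driver would then connect with the resolved string, e.g.:
  //   const client = new Client({ connectionString: resolveConnectionString(env) });
  //   await client.connect();
  return env.HYPERDRIVE?.connectionString ?? env.DATABASE_URL;
}
```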
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ozsfXdsWFJlfRhhulMClT/61ec772a843880370e81eeec190000fa/BLOG-2517_4.png" />
          </figure><p>With this update, Hyperdrive makes it possible for you to build full-stack applications on Workers with your existing databases, network-isolated or not. Whether you’re using <a href="https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/"><u>Amazon RDS</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/"><u>Amazon Aurora</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/examples/google-cloud-sql/"><u>Google Cloud SQL</u></a>, <a href="https://azure.microsoft.com/en-gb/products/category/databases"><u>Azure Database</u></a>, or any other provider, Hyperdrive can connect to your databases and optimize your database connections to provide the fast performance you’ve come to expect with building on Workers.</p><h2>Improved Node.js compatibility is now GA</h2><p>Earlier this month, we <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>overhauled our support for Node.js APIs in the Workers runtime</u></a>. With <a href="https://workers-nodejs-compat-matrix.pages.dev/"><u>twice as many Node APIs</u></a> now supported on Workers, you can now use a wider set of NPM packages to build a broader range of applications. Today, we’re happy to announce that improved Node.js compatibility is GA.</p><p>To give it a try, enable the nodejs_compat compatibility flag, and set your compatibility date to on or after 2024-09-23:</p>
            <pre><code>compatibility_flags = ["nodejs_compat"]
compatibility_date = "2024-09-23"
</code></pre>
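<p>Once the flag is set, Node.js built-ins can be imported with the <code>node:</code> prefix. As a small illustration (our own example, not from the announcement), a Worker can hash a request body with <code>node:crypto</code>:</p>

```javascript
import { createHash } from "node:crypto";

// Illustrative Worker: with nodejs_compat enabled, node:crypto is available,
// so we can compute a SHA-256 digest of the request body.
const worker = {
  async fetch(request) {
    const body = await request.text();
    const digest = createHash("sha256").update(body).digest("hex");
    return new Response(digest, { headers: { "content-type": "text/plain" } });
  },
};
export default worker;
```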
            <p>Read the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>developer documentation</u></a> to learn more about how to opt in your Workers to try it today. If you encounter any bugs or want to report feedback, <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>open an issue</u></a>.</p><h2>Build frontend applications on Workers with Static Asset Hosting</h2><p>Starting today in open beta, you can now upload and serve HTML, CSS, and client-side JavaScript directly as part of your Worker. This means you can build dynamic, server-side rendered applications on Workers using popular frameworks such as Astro, Remix, Next.js and Svelte (full list <a href="https://developers.cloudflare.com/workers/frameworks"><u>here</u></a>), with more coming soon.</p><p>You can now deploy applications to Workers that previously could only be deployed to Cloudflare Pages, and use features that are not yet supported in Pages, including <a href="https://developers.cloudflare.com/workers/observability/logging/logpush/"><u>Logpush</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/#_top"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/workers/configuration/cron-triggers/"><u>Cron Triggers</u></a>, <a href="https://developers.cloudflare.com/queues/configuration/configure-queues/#consumer"><u>Queue Consumers</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/"><u>Gradual Deployments</u></a>.</p><p>To get started, create a new project with <a href="https://developers.cloudflare.com/workers/frameworks"><u>create-cloudflare</u></a>. For example, to create a new Astro project:</p>
            <pre><code>npm create cloudflare@latest -- my-astro-app --framework=astro --experimental
</code></pre>
            <p>Visit our <a href="https://developers.cloudflare.com/workers/static-assets/"><u>developer documentation</u></a> to learn more about setting up a new front-end application on Workers, and watch a <a href="https://youtu.be/W45MIi_t_go"><u>quick demo</u></a> to learn how you can deploy an existing application to Workers. Static assets aren’t just for Workers written in JavaScript! You can serve static assets from <a href="https://developers.cloudflare.com/workers/languages/python/"><u>Workers written in Python</u></a> or even <a href="https://github.com/cloudflare/workers-rs/tree/main/templates/leptos/README.md"><u>deploy a Leptos app using workers-rs</u></a>.</p><p>If you’re wondering “<i>What about Pages?</i>” — rest assured, Pages will remain fully supported. We’ve heard from developers that as we’ve added new features to Workers and Pages, the choice of which product to use has become challenging. We’re closing this gap by bringing asset hosting, CI/CD and Preview URLs to Workers this Birthday Week.</p><p>To make the upfront choice between Cloudflare Workers and Pages more transparent, we’ve created a <a href="https://developers.cloudflare.com/workers/static-assets/compatibility-matrix/"><u>compatibility matrix</u></a>. 
Looking ahead, we plan to bridge the remaining gaps between Workers and Pages and provide ways to migrate your Pages projects to Workers.</p><h2>Cloudflare joins OpenNext to deploy Next.js apps to Workers</h2><p>Starting today, as an early developer preview, you can use <a href="https://opennext.js.org//cloudflare"><u>OpenNext</u></a> to deploy Next.js apps to Cloudflare Workers via <a href="https://npmjs.org/@opennextjs/cloudflare"><u>@opennextjs/cloudflare</u></a>, a new npm package that lets you use the <a href="https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes"><u>Node.js “runtime” in Next.js</u></a> on Workers.</p><p>This new adapter is powered by our <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>new Node.js compatibility layer</u></a>, newly introduced <a href="#static-asset-hosting"><u>Static Assets for Workers</u></a>, and Workers KV, which is <a href="https://blog.cloudflare.com/faster-workers-kv"><u>now up to 3x faster</u></a>. It unlocks support for <a href="https://nextjs.org/docs/app/building-your-application/data-fetching/incremental-static-regeneration"><u>Incremental Static Regeneration (ISR)</u></a>, <a href="https://nextjs.org/docs/pages/building-your-application/routing/custom-error"><u>custom error pages</u></a>, and other Next.js features that our previous adapter, <a href="https://developers.cloudflare.com/pages/framework-guides/nextjs/ssr/get-started/"><u>@cloudflare/next-on-pages</u></a>, could not support, as it was only compatible with the Edge “runtime” in Next.js.</p><p><a href="https://blog.cloudflare.com/aws-egregious-egress/"><u>Cloud providers shouldn’t lock you in</u></a>. Like cloud compute and storage, open source frameworks should be portable — you should be able to deploy them to different cloud providers. 
The goal of the OpenNext project is to make sure you can deploy Next.js apps to any cloud platform: originally AWS, and now Cloudflare. We’re excited to contribute to the OpenNext community, and give developers the freedom to run on the cloud that fits their application’s needs (and <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>budget</u></a>) best.</p><p>To get started, read the <a href="https://opennext.js.org//cloudflare/get-started"><u>OpenNext docs</u></a>, which provide examples and a guide on how to add <a href="https://npmjs.org/@opennextjs/cloudflare"><u>@opennextjs/cloudflare</u></a> to your Next.js app.</p><p>We want your feedback! Report issues and contribute code at <a href="https://github.com/opennextjs/opennextjs-cloudflare/"><u>opennextjs/opennextjs-cloudflare on GitHub</u></a>, and join the discussion on the <a href="https://discord.gg/WUNsBM69"><u>OpenNext Discord</u></a>.</p><p>You can also scaffold a new Next.js app that’s preconfigured for Workers with a single command:</p>
            <pre><code>npm create cloudflare@latest -- my-next-app --framework=next --experimental
</code></pre>
            <h2>Continuous Integration &amp; Delivery (CI/CD) with Workers Builds</h2><p>Now in open beta, you can connect a GitHub or GitLab repository to a Worker, and Cloudflare will automatically build and deploy your changes each time you push a commit. Workers Builds provides an integrated CI/CD workflow you can use to build and deploy everything from full-stack applications built with the most popular frameworks to simple static websites. Just add your build command and let Workers Builds take care of the rest. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K9izEbBxIlA0nXbNKJ1Od/55ecf9e56ecbc33aeb88df7ede1afddc/BLOG-2517_5.png" />
          </figure><p>While in open beta, Workers Builds is free to use, with a limit of one concurrent build per account, and unlimited build minutes per month. Once Workers Builds is Generally Available in early 2025, you will be billed based on the number of build minutes you use each month, and have a higher number of concurrent builds.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Workers Free</span></th>
    <th><span>Workers Paid</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Build minutes, </span><span>open beta</span></td>
    <td><span>Unlimited</span></td>
    <td><span>Unlimited</span></td>
  </tr>
  <tr>
    <td><span>Concurrent builds, </span><span>open beta</span></td>
    <td><span>1</span></td>
    <td><span>1</span></td>
  </tr>
  <tr>
    <td><span>Build minutes, </span><span>general availability</span></td>
    <td><span>3,000 minutes included per month</span></td>
    <td><span>6,000 minutes included per month </span><br /><span>+$0.005 per additional build minute</span></td>
  </tr>
  <tr>
    <td><span>Concurrent builds, </span><span>general availability</span></td>
    <td><span>1</span></td>
    <td><span>6</span></td>
  </tr>
</tbody></table></div><p><a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Read the docs</u></a> to learn more about how to deploy your first project with Workers Builds.</p><h2>Workers preview URLs</h2><p>Each newly uploaded version of a Worker now automatically generates a preview URL. Preview URLs make it easier for you to collaborate with your team during development, and can be used to test and identify issues in a preview environment before they are deployed to production.</p><p>When you upload a version of your Worker via the Wrangler CLI, Wrangler will display the preview URL once your upload succeeds. You can also find preview URLs for each version of your Worker in the Cloudflare dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29iDm0x8QQex5ryatk23e1/ecfdba5b98b6e0c22350087a6035442d/BLOG-2517_6.png" />
          </figure><p>Preview URLs for Workers are similar to Pages <a href="https://developers.cloudflare.com/pages/configuration/preview-deployments/"><u>preview deployments</u></a> — they run on your Worker’s <code>workers.dev</code> subdomain and allow you to view changes applied on a new version of your application before the changes are deployed.</p><p>Learn more about preview URLs by visiting our <a href="https://developers.cloudflare.com/workers/configuration/previews"><u>developer documentation</u></a>. </p><h2>Safely release to production with Gradual Deployments</h2><p>At Developer Week, we launched <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/#_top"><u>Gradual Deployments</u></a> for Workers and Durable Objects to make it safer and easier to deploy changes to your applications. Gradual Deployments is now GA — we have been using it ourselves at Cloudflare for mission-critical services built on Workers since early 2024.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FOHnaYqTyhuJRZVdERWdh/52df3d29622ccca9118d1cb49de19ae8/BLOG-2517_7.png" />
          </figure><p>Gradual deployments can help you stay on top of availability SLAs and minimize application downtime by surfacing issues early. Internally at Cloudflare, every single service built on Workers uses gradual deployments to roll out new changes. Each new version gets released in stages: 0.05%, 0.5%, 3%, 10%, 25%, 50%, 75%, and 100%, with time to soak between each stage. Throughout the roll-out, we keep an eye on metrics (which are often instrumented with <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a>!) and we <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/rollbacks/"><u>roll back</u></a> if we encounter issues.</p><p>Using gradual deployments is as simple as using the versioned <a href="https://developers.cloudflare.com/workers/wrangler/commands/#versions"><u>wrangler commands</u></a> or <a href="https://developers.cloudflare.com/api/operations/worker-versions-upload-version"><u>API endpoints</u></a>, or clicking “Save version” in the code editor built into the Workers dashboard. Read the <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/"><u>developer documentation</u></a> to learn more and get started.</p><h2>Queues is GA, with higher throughput and concurrency limits</h2><p><a href="https://developers.cloudflare.com/queues/"><u>Cloudflare Queues</u></a> is now generally available with higher limits.</p><p>Queues lets you decouple your Workers into event-driven services. <i>Producer </i>Workers write events to a Queue, and <i>consumer </i>Workers are invoked to take actions on the events. For example, you can use a Queue to decouple an e-commerce website from a service that sends purchase confirmation emails to users.</p>
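<p>In code, that e-commerce example looks roughly like this (a sketch with an assumed queue binding named <code>EMAIL_QUEUE</code>; <code>sendConfirmationEmail</code> is a hypothetical stand-in for your email provider):</p>

```javascript
// Producer: accepts the order immediately and defers email sending.
const producer = {
  async fetch(request, env) {
    const order = await request.json();
    // send() enqueues a single message for the consumer to process later.
    await env.EMAIL_QUEUE.send({ orderId: order.id, email: order.email });
    return new Response("order accepted", { status: 202 });
  },
};

// Consumer: invoked with batches of messages from the queue.
const consumer = {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      await sendConfirmationEmail(msg.body.email, msg.body.orderId);
      msg.ack(); // acknowledge so the message isn't redelivered
    }
  },
};

// Hypothetical stand-in for a real email provider call.
async function sendConfirmationEmail(email, orderId) {
  console.log(`confirmation for order ${orderId} -> ${email}`);
}
```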
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cpkxgIQSYLrwbfhDSL2A5/97818131c1f4f7d2e8b8d76dcc8c7f9a/BLOG-2517_8.png" />
          </figure><p>Throughput and concurrency limits for Queues are now significantly higher, which means you can push more messages through a Queue and consume them faster.</p><ul><li><p><b>Throughput:</b> Each queue can now process 5,000 messages per second (previously 400 per second).</p></li><li><p><b>Concurrency:</b> Each queue can now have up to 250 <a href="https://developers.cloudflare.com/queues/configuration/consumer-concurrency/"><u>concurrent consumers</u></a> (previously 20 concurrent consumers).</p></li></ul><p>Since we <a href="https://blog.cloudflare.com/introducing-cloudflare-queues/"><u>announced Queues in beta</u></a>, we’ve added the following functionality:</p><ul><li><p><a href="https://developers.cloudflare.com/queues/configuration/batching-retries/#batching"><u>Batch sizes can be customized</u></a> to reduce the number of consumer Worker invocations, and thus reduce cost.</p></li><li><p><a href="https://developers.cloudflare.com/queues/configuration/batching-retries/#delay-messages"><u>Individual messages can be delayed</u></a>, so you can back off when you hit external API rate limits.</p></li><li><p><a href="https://developers.cloudflare.com/queues/configuration/pull-consumers/"><u>HTTP Pull consumers</u></a> allow messages to be consumed outside Workers, with zero data egress costs.</p></li></ul><p>Queues can be used by any developer on a Workers Paid plan. Head over to our <a href="https://developers.cloudflare.com/queues/get-started/"><u>getting started guide</u></a> to start building with Queues.</p><h2>Event notifications for R2 are now GA</h2><p>We’re excited to announce that event notifications for R2 are now generally available. Whether it’s kicking off image processing after a user uploads a file or triggering a sync to an external data warehouse when new analytics data is generated, many applications need to be able to reliably respond when events happen. 
<a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#event-notifications-open-beta"><u>Event notifications</u></a> for <a href="https://developers.cloudflare.com/r2/"><u>Cloudflare R2</u></a> give you the ability to build event-driven applications and workflows that react to changes in your data.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73t1PtQg576iv7m95HHjGL/26cd028004f5b669e41a89a7265c5a14/BLOG-2517_9.png" />
          </figure><p>Here’s how it works: When data in your R2 bucket changes, event notifications are sent to your queue. You can consume these notifications with a <a href="https://developers.cloudflare.com/queues/reference/how-queues-works/#create-a-consumer-worker"><u>consumer Worker </u></a>or <a href="https://developers.cloudflare.com/queues/configuration/pull-consumers/"><u>pull them over HTTP</u></a> from outside of Cloudflare Workers.</p>
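<p>A consumer Worker for these notifications can branch on the event’s action. Here’s a sketch — field names such as <code>action</code> and <code>object.key</code> follow the documented notification shape at the time of writing, but verify them against the current docs before relying on them:</p>

```javascript
// Track which keys we "processed" so the behavior is observable; in a real
// Worker this would kick off image processing, a warehouse sync, etc.
const processedKeys = [];

const consumer = {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      const event = msg.body;
      if (event.action === "PutObject") {
        processedKeys.push(event.object.key); // stand-in for real work
      }
      // Other actions (e.g. deletes) could be handled here as well.
      msg.ack();
    }
  },
};
```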
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NSN5r40rmXy0FMGOKvdAd/7d0c0637ccc478881528339304942948/BLOG-2517_10.png" />
          </figure><p>Since we introduced event notifications in <a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#event-notifications-open-beta"><u>open beta</u></a> earlier this year, we’ve made significant improvements based on your feedback:</p><ul><li><p>We increased reliability of event notifications with throughput improvements from Queues. R2 event notifications can now scale to thousands of writes per second.</p></li><li><p>You can now configure event notifications directly from the Cloudflare dashboard (in addition to <a href="https://developers.cloudflare.com/workers/wrangler/commands/#notification-create"><u>Wrangler</u></a>).</p></li><li><p>There is now support for receiving notifications triggered by <a href="https://developers.cloudflare.com/r2/buckets/object-lifecycles/"><u>object lifecycle deletes</u></a>.</p></li><li><p>You can now set up multiple notification rules for a single queue on a bucket.</p></li></ul><p>Visit <a href="https://developers.cloudflare.com/r2/buckets/event-notifications/"><u>our documentation</u></a> to learn about how to set up event notifications for your R2 buckets.</p><h2>Removing the serverless microservices tax: No more request fees for Service Bindings and Tail Workers</h2><p>Earlier this year, we quietly changed Workers pricing to lower your costs. As of July 2024, you are no longer charged for requests between Workers on your account made via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Service Bindings</u></a>, or for invocations of <a href="https://developers.cloudflare.com/workers/observability/logging/tail-workers/"><u>Tail Workers.</u></a> For example, let’s say you have the following chain of Workers: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PTgWu9XiGJNoWrHTduQdB/84e1f6ee0788f99684440a9db7b4e6c1/BLOG-2517_11.png" />
          </figure><p>Each request from a client results in three Workers invocations. Previously, we charged you for each of these invocations, plus the CPU time for each of these Workers. With this change, we only charge you for the first request from the client, plus the CPU time used by each Worker.</p><p>This eliminates the additional cost of breaking a monolithic serverless app into microservices. In 2023, we introduced new <a href="https://blog.cloudflare.com/workers-pricing-scale-to-zero/"><u>pricing based on CPU time</u></a>, rather than duration, so you don’t have to worry about being billed for time spent waiting on I/O. This includes I/O to other Workers. With this change, you’re only billed for the first request in the chain, eliminating the remaining cost of using multiple Workers.</p><p>When you build microservices on Workers, you face fewer trade-offs than on other compute platforms. Service bindings have <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>zero network overhead</u></a> by default, a built-in <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript RPC system</u></a>, and a security model with <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>fewer footguns and simpler configuration</u></a>. We’re excited to improve this further with this pricing change.</p><h2>Image optimization is available to everyone for free — no subscription needed</h2><p>Starting today, you can use <a href="https://developers.cloudflare.com/images/transform-images/transform-via-url/"><u>Cloudflare Images</u></a> for free to optimize your images with up to 5,000 transformations per month.</p><p>Large, oversized images can throttle your application speed and page load times. 
We built <a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a> to let you dynamically optimize images in the correct dimensions and formats for each use case, all while storing only the original image.</p><p>In the spirit of Birthday Week, we’re making image optimization available to everyone with a Cloudflare account, no subscription needed. You’ll be able to use Images to transform images that are stored outside of Images, such as in R2.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49UPZRpeAp79qugqstqbT7/5e23fb4c7a458f5d00b401383bd6e777/BLOG-2517_12.png" />
          </figure><p>Transformations are served from your zone through a specially formatted URL with parameters that specify how an image should be optimized. For example, the transformation URL below uses the <code>format</code> parameter to automatically serve the image in the most optimal format for the requesting browser:</p>
            <pre><code>https://example.com/cdn-cgi/image/format=auto/thumbnail.png</code></pre>
            <p>This means that the original PNG image may be served as AVIF to one user and WebP to another. Without a subscription, transforming images from remote sources is free up to 5,000 unique transformations per month. Once you exceed this limit, any already cached transformations will continue to be served, but you’ll need a <a href="https://dash.cloudflare.com/?to=/:account/images"><u>paid Images plan</u></a> to request new transformations or to purchase storage within Images.</p><p>To get started, navigate to <a href="https://dash.cloudflare.com/?to=/:account/images"><u>Images in the dashboard</u></a> to enable transformations on your zone.</p>
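<p>Transformation URLs are easy to construct programmatically. Here’s a small helper (our own illustration, not a Cloudflare library) that joins options into the <code>/cdn-cgi/image/</code> path:</p>

```javascript
// Build a URL of the form:
//   https://<zone>/cdn-cgi/image/<option=value,...>/<source path>
// Option names like format, width, and fit are real transformation
// parameters; the helper itself is illustrative.
function transformationUrl(zone, sourcePath, options) {
  const params = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `https://${zone}/cdn-cgi/image/${params}/${sourcePath}`;
}
```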
    <div>
      <h2>Dive deep into more announcements from Builder Day</h2>
    </div>
    <p>We shipped so much that we couldn’t possibly fit it all in one blog post. These posts dive into the technical details of what we’re announcing at Builder Day:</p><ul><li><p><a href="https://blog.cloudflare.com/workers-ai-bigger-better-faster"><u>Cloudflare’s Bigger, Better, Faster AI platform</u></a></p></li><li><p><a href="https://blog.cloudflare.com/making-workers-ai-faster"><u>Making Workers AI faster with KV cache compression, speculative decoding, and upgraded hardware</u></a></p></li><li><p><a href="https://blog.cloudflare.com/faster-workers-kv"><u>We made Workers KV up to 3x faster — here’s the data</u></a></p></li><li><p><a href="https://blog.cloudflare.com/sqlite-in-durable-objects"><u>Zero-latency SQLite storage in every Durable Object</u></a></p></li></ul>
    <div>
      <h2>Build the next big thing on Cloudflare</h2>
    </div>
    <p>Cloudflare is for builders, and you can start building with everything we’re announcing at Builder Day right away. We’re now offering qualified startups <a href="http://blog.cloudflare.com/startup-program-250k-credits"><u>$250,000 in credits to use on our Developer Platform</u></a>, so you can get going even faster and become the next company to reach hypergrowth scale with a small team, without wasting time provisioning infrastructure or doing undifferentiated heavy lifting. Focus on shipping, and we’ll take care of the rest.</p><p>Apply to the startup program <a href="https://www.cloudflare.com/forstartups/"><u>here</u></a>, or stop by and say hello in the <a href="https://discord.cloudflare.com/"><u>Cloudflare Developers Discord</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <guid isPermaLink="false">6ct91ZmJYzPu9n9pt8sNBm</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Rohin Lohe</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
            <dc:creator>Nevi Shah</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s new with Cloudflare Media: updates for Calls, Stream, and Images]]></title>
            <link>https://blog.cloudflare.com/whats-next-for-cloudflare-media/</link>
            <pubDate>Thu, 04 Apr 2024 13:00:40 GMT</pubDate>
            <description><![CDATA[ With Cloudflare Calls in open beta, you can build real-time, serverless video and audio applications. Cloudflare Stream lets your viewers instantly clip from ongoing streams ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Our customers use Cloudflare Calls, Stream, and Images to build live, interactive, and real-time experiences for their users. We want to reduce friction by making it easier to get data into our products. This also means providing transparent pricing, so customers can be confident that costs make economic sense for their business, especially as they scale.</p><p>Today, we’re introducing four new improvements to help you build media applications with Cloudflare:</p><ul><li><p>Cloudflare Calls is in open beta with transparent pricing</p></li><li><p>Cloudflare Stream has a Live Clipping API to let your viewers instantly clip from ongoing streams</p></li><li><p>Cloudflare Images has a pre-built upload widget that you can embed in your application to accept uploads from your users</p></li><li><p>Cloudflare Images lets you crop and resize images of people at scale with automatic face cropping</p></li></ul>
    <div>
      <h3>Build real-time video and audio applications with Cloudflare Calls</h3>
    </div>
    <p>Cloudflare Calls is now in open beta, and you can activate it from your dashboard. Your usage will be free until May 15, 2024. Starting May 15, 2024, customers with a Calls subscription will receive the first terabyte each month for free, with any usage beyond that charged at $0.05 per real-time gigabyte. Additionally, there are no charges for inbound traffic to Cloudflare.</p><p>To get started, read the <a href="https://developers.cloudflare.com/calls/">developer documentation for Cloudflare Calls</a>.</p>
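<p>As a back-of-the-envelope example of that pricing (our own sketch; we assume 1 TB = 1,000 GB here, and that only real-time traffic beyond the included volume is billed, per the note above):</p>

```javascript
// Estimate a monthly Cloudflare Calls bill from real-time GB used, per the
// pricing above: first terabyte included, then $0.05 per real-time GB.
// Assumption: 1 TB treated as 1,000 GB; inbound traffic is not charged.
function estimateCallsCostUSD(realtimeGB) {
  const includedGB = 1000;
  const ratePerGB = 0.05;
  const billableGB = Math.max(0, realtimeGB - includedGB);
  return billableGB * ratePerGB;
}
```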
    <div>
      <h3>Live Instant Clipping: create clips from live streams and recordings</h3>
    </div>
    <p>Live broadcasts often include short bursts of highly engaging content within a longer stream. Creators and viewers alike enjoy being able to make a “clip” of these moments to share across multiple channels. Being able to generate that clip rapidly enables our customers to offer instant replays, showcase key pieces of recordings, and build audiences on social media in real time.</p><p>Today, <a href="https://www.cloudflare.com/products/cloudflare-stream/">Cloudflare Stream</a> is launching Live Instant Clipping in open beta for all customers. With the new Live Clipping API, you can let your viewers instantly clip and share moments from an ongoing stream, without re-encoding the video.</p><p>When planning this feature, we considered a typical user flow for generating clips from live events. Consider users watching a stream of a video game: something wild happens and users want to save and share a clip of it to social media. What will they do?</p><p>First, they’ll need to be able to review the preceding few minutes of the broadcast, so they know what to clip. Next, they need to select a start time and clip duration or end time, possibly as a visualization on a timeline or by scrubbing the video player. Finally, the clip must be available quickly in a way that can be replayed or shared across multiple platforms, even after the original broadcast has ended.</p><p>That ideal user flow implies some heavy lifting in the background. We now offer a manifest to preview recent live content in a rolling window, and we provide the timing information in that response to determine the start and end times of the requested clip relative to the whole broadcast. Finally, on request, we generate that clip on the fly as a standalone video file for easy sharing, as well as an HLS manifest for embedding into players.</p><p>Live Instant Clipping is available in beta to all customers starting today! 
Live clips are free to make; they do not count toward storage quotas, and playback is billed just like minutes of video delivered. To get started, check out the <a href="https://developers.cloudflare.com/stream/stream-live/live-instant-clipping/">Live Clipping API in developer documentation</a>.</p>
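<p>The timeline math in that user flow is simple but easy to get wrong. Here’s an illustrative helper (our own sketch, not the Live Clipping API itself) that converts a viewer’s selection on the rolling preview window into clip timing relative to the whole broadcast:</p>

```javascript
// Translate an in/out selection made on the preview window into a clip
// start time and duration relative to the full broadcast.
// windowStartSeconds is where the rolling preview window begins within the
// broadcast (the timing information the manifest response provides).
function clipTiming(windowStartSeconds, inPointSeconds, outPointSeconds) {
  if (outPointSeconds <= inPointSeconds) {
    throw new Error("clip end must be after clip start");
  }
  return {
    startSeconds: windowStartSeconds + inPointSeconds,
    durationSeconds: outPointSeconds - inPointSeconds,
  };
}
```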
    <div>
      <h3>Integrate Cloudflare Images into your application with only a few lines of code</h3>
      <a href="#integrate-cloudflare-images-into-your-application-with-only-a-few-lines-of-code">
        
      </a>
    </div>
    <p>Building applications with user-uploaded images is even easier with the upload widget, a pre-built, interactive UI that lets users upload images directly into your Cloudflare Images account.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MVN5ibd1UGnokaEm7f1Vq/8efedb285ec93d52867d78ca63cb454b/image3-9.png" />
            
            </figure><p>Many developers use <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">Cloudflare Images</a> as an end-to-end image management solution to support applications that center around user-generated content, from AI photo editors to social media platforms. Our APIs connect the frontend experience – where users upload their images – to the storage, optimization, and delivery operations in the backend.</p><p>But building an application can take time. Our team saw a huge opportunity to take away as much extra work as possible, and we wanted to provide off-the-shelf integration to speed up the development process.</p><p>With the upload widget, you can seamlessly integrate Cloudflare Images into your application within minutes. The widget can be integrated in two ways: by embedding a script into a static HTML page or by installing a package that works with your favorite framework. We provide a ready-made Worker template that you can deploy directly to your account to connect your frontend application with Cloudflare Images and authorize users to upload through the widget.</p><p>To try out the upload widget, <a href="https://forms.gle/vBu47y3638k8fkGF8">sign up for our closed beta</a>.</p>
    <div>
      <h3>Optimize images of people with automatic face cropping for Cloudflare Images</h3>
      <a href="#optimize-images-of-people-with-automatic-face-cropping-for-cloudflare-images">
        
      </a>
    </div>
    <p>Cloudflare Images lets you dynamically manipulate images in different aspect ratios and dimensions for various use cases. With face cropping for Cloudflare Images, you can now crop and resize images of people’s faces at scale. For example, if you’re building a social media application, you can apply automatic face cropping to generate profile picture thumbnails from user-uploaded images.</p><p>Our existing gravity parameter uses saliency detection to set the focal point of an image based on the most visually interesting pixels, which determines how the image will be cropped. We expanded this feature by using a machine learning model called RetinaFace, which classifies images that have human faces. We’re also introducing a new zoom parameter that you can combine with face cropping to specify how closely an image should be cropped toward the face.</p><p>To apply face cropping to your image optimization, <a href="https://forms.gle/2bPbuijRoqGi6Qn36">sign up for our closed beta</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JFNk182dDZHu0sxIySMC5/d3821e2f911b7e31bb411addcc10bdb6/image2-10.png" />
            
            </figure><p><i>Photo by</i> <a href="https://unsplash.com/@eyeforebony?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><i>Eye for Ebony</i></a> <i>on</i> <a href="https://unsplash.com/photos/photo-of-woman-wearing-purple-lipstick-and-black-crew-neck-shirt-vYpbBtkDhNE?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><i>Unsplash</i></a></p>
            <pre><code>https://example.com/cdn-cgi/image/fit=crop,width=500,height=500,gravity=face,zoom=0.6/https://example.com/images/picture.jpg</code></pre>
            
    <div>
      <h3>Meet the Media team over Discord</h3>
      <a href="#meet-the-media-team-over-discord">
        
      </a>
    </div>
    <p>As we’re working to build the next set of media tools, we’d love to hear what you’re building for your users. Come <a href="https://discord.gg/cloudflaredev">say hi to us on Discord</a>. You can also learn more by visiting our developer documentation for <a href="https://developers.cloudflare.com/calls/">Calls</a>, <a href="https://developers.cloudflare.com/stream/">Stream</a>, and <a href="https://developers.cloudflare.com/images/">Images</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Optimization]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Image Storage]]></category>
            <category><![CDATA[Cloudflare Calls]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4fOMOrJU6Bg9JNkRAThc7c</guid>
            <dc:creator>Deanna Lam</dc:creator>
            <dc:creator>Taylor Smith</dc:creator>
            <dc:creator>Zaid Farooqui</dc:creator>
        </item>
        <item>
            <title><![CDATA[Image optimization made simpler and more predictable: we’re merging Cloudflare Images and Image Resizing]]></title>
            <link>https://blog.cloudflare.com/merging-images-and-image-resizing/</link>
            <pubDate>Tue, 26 Sep 2023 13:00:20 GMT</pubDate>
            <description><![CDATA[ We’re changing how we bill for Image Resizing to let you calculate your monthly costs more accurately and reliably. All Image Resizing features will be available under Cloudflare Images ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Starting November 15, 2023, we’re merging Cloudflare Images and Image Resizing.</p><p>All Image Resizing features will be available as part of the Cloudflare Images product. To let you calculate your monthly costs more accurately and reliably, we’re changing how we bill to resize images that aren’t stored at Cloudflare. Our new pricing model will cost $0.50 per 1,000 unique transformations.</p><p>If you’re an existing Image Resizing customer, you can continue to use the legacy version of Image Resizing. Once the merge is live, you can opt into the new pricing model for more predictable costs.</p><p>In this post, we'll cover why we came to this decision, what's changing, and how these changes might impact you.</p>
    <div>
      <h3>Simplifying our products</h3>
      <a href="#simplifying-our-products">
        
      </a>
    </div>
    <p>When you build an application with images, you need to think about three separate operations: storage, optimization, and delivery.</p><p>In 2019, we <a href="/announcing-cloudflare-image-resizing-simplifying-optimal-image-delivery/">launched Image Resizing</a>, which can optimize and transform any publicly available image on the Internet based on a set of parameters. This enables our customers to deliver variants of a single image for each use case without creating and storing additional copies.</p><p>For example, an e-commerce platform for furniture retailers might use the same image of a lamp on both the individual product page and the gallery page for all lamps. They can use Image Resizing to optimize the image at its original aspect ratio for a slider view, or manipulate and crop the image for a thumbnail view.</p>
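<p>For instance, using the transformation URL format that Image Resizing supports, such a retailer could request both variants of one lamp photo with URLs along these lines (hypothetical domain and path; <code>fit=scale-down</code> keeps the original aspect ratio, <code>fit=crop</code> produces the square thumbnail):</p>

```
https://shop.example.com/cdn-cgi/image/width=800,fit=scale-down/https://shop.example.com/products/lamp.jpg
https://shop.example.com/cdn-cgi/image/width=200,height=200,fit=crop/https://shop.example.com/products/lamp.jpg
```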
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Y4ulhBvK94McoD65FN5xy/b1a5889758db19d366e429a355802e84/image1-14.png" />
            
            </figure><p>Two years later, we <a href="/announcing-cloudflare-images/">released Images</a> to let developers build an end-to-end image management pipeline. Developers no longer need to use different vendors to handle storage, optimization, and delivery. With Images, customers can store and deliver their images from a single bucket at Cloudflare to streamline their workflow and <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">eliminate egress fees</a>.</p><p>Both products have overlapping features to optimize and manipulate images, which can be confusing for customers. Over the years, we've received numerous questions about which product is optimal for which use cases.</p><p>To simplify our products, we're merging Cloudflare Images and Image Resizing to let customers store, optimize, and deliver their images all from one product. Customers can continue to <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">optimize their images</a> without using Cloudflare for storage or purchase storage to manage their entire image pipeline through Cloudflare.</p>
    <div>
      <h3>Transparent and predictable pricing</h3>
      <a href="#transparent-and-predictable-pricing">
        
      </a>
    </div>
    <p>Pricing can cause headaches for Image Resizing customers.</p><p>We often hear from customers seeking guidance for calculating how much Image Resizing will cost each month. Today, you are billed for Image Resizing by the number of uncached requests to transform an image. However, caching behavior is often unpredictable, and you can't guarantee how long a given image stays cached. This means that you can't reliably predict your costs.</p><p>If you make 1M total requests to Image Resizing each month, then you won't know whether you'll be billed for 10K or 100K of these requests because our pricing model relies on cache behavior. Since assets can be evicted from cache for a variety of reasons, bills for Image Resizing are unpredictable month over month. In some cases, the monthly bills are inconsistent even when traffic remains constant. In other cases, the monthly bill is much higher than our customers expected.</p><p>With the new Cloudflare Images, you will be billed only once per 30 days for each unique request to transform an image stored outside of Cloudflare, whether or not the transformation is cached. Customers will be billed $0.50 per 1,000 unique transformations per month.</p><p>In other words, if you resize one image to 100x100, then our new pricing model guarantees that you will be billed only once per month, whether there are 10K or 100K uncached requests to deliver the image at this size. If you resize 200 images to 100x100, then you will be billed for only 200 unique transformations — one for each image at this size — each month.</p><p>This change aligns more closely with how our customers think about their usage, and ensures that our customers can accurately estimate their costs with confidence. You won't need to consider how your cache hit ratio will affect your bill. To estimate your costs, you'll need to know only the number of unique images and the number of different ways that you need to transform those images each month.</p>
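<p>As a quick sanity check of the arithmetic above, here is a small Rust sketch (our own illustration, not part of any Cloudflare API) of the new billing model:</p>

```rust
// Illustrative only: the new Images pricing model bills $0.50 per 1,000
// unique transformations per month, regardless of how many cached or
// uncached requests each transformation serves.
fn monthly_cost_usd(unique_images: u64, variants_per_image: u64) -> f64 {
    let unique_transformations = (unique_images * variants_per_image) as f64;
    unique_transformations * 0.50 / 1000.0
}
```

<p>Resizing 200 images to a single 100x100 variant yields 200 unique transformations, or $0.10 for the month, no matter how often the resized images are requested.</p>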
    <div>
      <h3>Resize without storage with Cloudflare Images</h3>
      <a href="#resize-without-storage-with-cloudflare-images">
        
      </a>
    </div>
    <p>For developers who only want to resize and optimize images, Cloudflare Images now offers a zero-storage plan. This new plan enables you to transform images while keeping your existing storage and delivery solution unchanged (just like the current Image Resizing product does).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HzBEvn54XJo7eKCbod8i3/7efb5bb0420157aa44e545010ed576ee/image3-21.png" />
            
            </figure><p>If you want to store your images with Cloudflare Images, then you can always upgrade your plan to purchase storage at any time.</p><p>Image Resizing is currently available only for accounts with a Pro or higher plan. The merged Cloudflare Images product will be available for all customers, with pricing plans that are tailored to meet specific use cases.</p>
    <div>
      <h3>Existing customers can opt into new pricing</h3>
      <a href="#existing-customers-can-opt-into-new-pricing">
        
      </a>
    </div>
    <p>The new version of Cloudflare Images is available on November 15, 2023.</p><p>If you currently use Image Resizing, you will have the option to migrate to the new Cloudflare Images at no cost, or continue using Image Resizing.</p><p>The functionality and usability of the product will remain the same. You will still manage stored images under the Cloudflare Images tab and can enable transformations from the Speed tab.</p><p>As we execute, we'll continue to make improvements in the Dashboard to bring a more centralized and unified experience for Cloudflare Images.</p><p>You can learn more about our current image optimization capabilities in the <a href="https://developers.cloudflare.com/images/image-resizing/url-format/">Developer Docs</a>. If you have feedback or thoughts, we'd love to hear from you on the <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developers Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Image Resizing]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">3Xy6Z8kmJJ64r3Ac3eJQPy</guid>
            <dc:creator>Deanna Lam</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare Images addressed the aCropalypse vulnerability]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-images-addressed-the-acropalypse-vulnerability/</link>
            <pubDate>Mon, 10 Jul 2023 13:00:14 GMT</pubDate>
            <description><![CDATA[ Customers using Cloudflare Images or Image Resizing products are protected against the aCropalypse vulnerability.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>aCropalypse (<a href="https://www.cve.org/CVERecord?id=CVE-2023-21036">CVE-2023-21036</a>) is a vulnerability caused by image editing tools failing to truncate images when editing has made them smaller, most often seen when images are cropped. This leaves remnants of the cropped contents in the file after the image data ends. The remnants (written in a ‘trailer’ after the end-of-image marker) are ignored by most software when reading the image, but can be used to partially reconstruct the original image by an attacker.</p><p>The general class of vulnerability can, in theory, affect any image format if it ignores data written after the end of the image. In this case the applications affected were the ‘Markup’ screenshot editor that shipped with Google Pixel phones from the Pixel 3 (saving its images in the PNG format) and the Windows Snipping Tool (with both PNG and JPEG formats).</p><p>Our customers deliver their images using Cloudflare Images products and may have images that are affected. We would like to ensure their images are protected from this vulnerability if they have been edited using a vulnerable editor.</p><p>As a concrete example, imagine a Cloudflare customer running a social network, delivering images using Cloudflare Images. A user of the social network might take a screenshot of an invoice containing their address after ordering a product, crop their address out, and share the cropped image on the social network. If the image was cropped using an editor affected by aCropalypse, an attacker would be able to recover their address, violating their expectation of privacy.</p>
    <div>
      <h3>How Cloudflare Images products work</h3>
      <a href="#how-cloudflare-images-products-works">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/images/cloudflare-images/">Cloudflare Images</a> and <a href="https://developers.cloudflare.com/images/image-resizing/">Image Resizing</a> use a proxy as the upstream for requests. This proxy fetches the original image (from either Cloudflare Images storage or the customer’s upstream), applies any transformations (from the <a href="https://developers.cloudflare.com/images/cloudflare-images/transform/resize-images/">variant definition</a> for Cloudflare Images, or from the <a href="https://developers.cloudflare.com/images/image-resizing/url-format/">URL/worker</a> parameters for Image Resizing) and then responds with the transformed image.</p><p>This naturally provides protection against aCropalypse for our customers: the proxy will ignore any trailing data in the input, so it won’t be present in the re-encoded image.</p><p>However, for certain requests, the proxy might respond with the original. This occurs when two conditions hold: the original can satisfy the request and the re-encoded image has a larger file size. The original satisfies the request if we can guarantee that the original’s format is supported whenever the requested format is, the original has the same dimensions, it doesn’t have any metadata that needs stripping, and it doesn’t need any other transformations such as sharpening or overlays.</p><p>Even if the original can satisfy the request, it is fairly unlikely the original will be smaller for images affected by aCropalypse, as the leaked information in the trailer will increase the file size. So Cloudflare Images and Image Resizing should provide protection against aCropalypse without adding any additional mitigations.</p><p>That being said, we couldn’t guarantee that images affected by aCropalypse would always be re-encoded. We wanted to be able to offer this guarantee for customers of Cloudflare Images and Image Resizing.</p>
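<p>The pass-through decision described above can be sketched roughly as follows (a simplification of our own, not the proxy's actual code):</p>

```rust
// Simplified model of when the proxy may reply with the original image
// instead of the re-encoded one: the original must be able to satisfy the
// request, and it must have the smaller file size.
struct Original {
    // Same dimensions, acceptable format, no metadata to strip,
    // no transformations such as sharpening or overlays required.
    satisfies_request: bool,
    byte_len: usize,
}

fn serve_original(original: &Original, reencoded_len: usize) -> bool {
    original.satisfies_request && original.byte_len < reencoded_len
}
```

<p>For an image carrying an aCropalypse trailer, <code>byte_len</code> is inflated by the leaked data, so the size comparison usually already forces re-encoding.</p>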
    <div>
      <h3>How we addressed the issue for JPEG and PNG file format</h3>
      <a href="#how-we-addressed-the-issue-for-jpeg-and-png-file-format">
        
      </a>
    </div>
    <p>To ensure that no images with a trailer will ever be passed through, we can add another requirement to reply with the original image — if the original image is a PNG or a JPEG (so might have been affected by aCropalypse), it must not have a trailer. Then we just need to be able to detect trailers for both formats.</p><p>As a first idea we might consider simply checking that the image ends with the correct end-of-image marker, which for JPEG is the byte sequence [0xFF 0xD9] and for PNG is the byte sequence [0x00 0x00 0x00 0x00 0x49 0x45 0x4E 0x44 0xAE 0x42 0x60 0x82]. But this won’t work for images affected by aCropalypse: because the original image was a valid image, the trailer that results from overwriting the start of the file will be the end of a valid image. We also can’t check whether there is more than one end-of-image marker in the file; both formats have chunks of variable-length bytestrings in which the end-of-image marker could appear. We need to do it properly by parsing the image’s structure and checking there is no data after its end.</p><p>For JPEGs, we use a Rust wrapper of the library libjpeg-turbo for decoding. Libjpeg-turbo allows fine control of resource usage; for example it allows decompressing and re-compressing a JPEG file a scanline at a time. This flexibility allows us to easily detect trailers using the library’s API: we just have to check that once we have consumed the end-of-image marker all of the input has been consumed. In our proxy we use an in-memory buffer as input, so we can check that there are no bytes left in the buffer:</p>
            <pre><code>pub fn consume_eoi_marker(&amp;mut self) -&gt; bool {
    // Try to consume the EOI marker of the image
    unsafe {
        (ffi::jpeg_input_complete(&amp;self.dec.cinfo) == 1) || {
            ffi::jpeg_consume_input(&amp;mut self.dec.cinfo);
            ffi::jpeg_input_complete(&amp;self.dec.cinfo) == 1
        }
    }
}

pub fn has_trailer(&amp;mut self) -&gt; io::Result&lt;bool&gt; {
    if self.consume_eoi_marker() {
        let src = unsafe {
            NonNull::new(self.dec.cinfo.src)
                .ok_or_else(|| {
                    io::Error::new(
                        io::ErrorKind::Other,
                        "source manager not set".to_string()
                    )
                })?
                .as_ref()
        };

        // We have a trailer if we have any bytes left over in the buffer
        Ok(src.bytes_in_buffer != 0)
    } else {
        // We didn't consume the EOI - we can't say if there is a trailer
        Err(io::Error::new(
            io::ErrorKind::Other,
            "EOI not reached".to_string(),
        ))
    }
}</code></pre>
            <p>For PNGs, we use the lodepng library. It has a much simpler API surface that decodes an image in one shot when you call <code>lodepng_decode</code>, which doesn’t tell us how many bytes were read or provide an interface to detect whether we have a trailer.</p><p>Luckily the PNG format has a <a href="https://www.w3.org/TR/2003/REC-PNG-20031110/#5DataRep">very consistent and simple internal structure</a>:</p><ul><li><p>First the PNG prelude, the byte sequence [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]</p></li><li><p>Then a series of chunks, which each consist of</p></li></ul><ol><li><p>4 bytes of length as a big-endian integer N</p></li><li><p>4 bytes of chunk type</p></li><li><p>N bytes of data (whose meaning depends on the chunk type)</p></li><li><p>4 bytes of checksum — CRC-32 of the type and data</p></li></ol><p>The file is terminated by a chunk of type IEND with no data.</p><p>As the format is so regular, it’s easy to write a separate parser that just reads the prelude, loops through the chunks until we see IEND, and then checks if we have any bytes left. We can perform this check after decoding the image with lodepng, which also allows us to skip validating the checksums, since lodepng has already checked them for us:</p>
            <pre><code>const PNG_PRELUDE: &amp;[u8] = &amp;[0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];

enum ChunkStatus {
    SeenEnd { has_trailer: bool },
    MoreChunks,
}

fn consume_chunks_until_iend(buf: &amp;[u8]) -&gt; Result&lt;(ChunkStatus, &amp;[u8]), &amp;'static str&gt; {
    let (length_bytes, buf) = consume(buf, 4)?;
    let (chunk_type, buf) = consume(buf, 4)?;

    // Infallible: We've definitely consumed 4 bytes
    let length = u32::from_be_bytes(length_bytes.try_into().unwrap());

    let (_data, buf) = consume(buf, length as usize)?;

    let (_checksum, buf) = consume(buf, 4)?;

    if chunk_type == b"IEND" &amp;&amp; buf.is_empty() {
        Ok((ChunkStatus::SeenEnd { has_trailer: false }, buf))
    } else if chunk_type == b"IEND" &amp;&amp; !buf.is_empty() {
        Ok((ChunkStatus::SeenEnd { has_trailer: true }, buf))
    } else {
        Ok((ChunkStatus::MoreChunks, buf))
    }
}

pub(crate) fn has_trailer(png_data: &amp;[u8]) -&gt; Result&lt;bool, &amp;'static str&gt; {
    let (magic, mut buf) = consume(png_data, PNG_PRELUDE.len())?;

    if magic != PNG_PRELUDE {
        return Err("expected prelude");
    }

    loop {
        let (status, tmp_buf) = consume_chunks_until_iend(buf)?;
        buf = tmp_buf;
        if let ChunkStatus::SeenEnd { has_trailer } = status {
            return Ok(has_trailer)
        }
    }
}</code></pre>
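<p>The PNG snippets above rely on a <code>consume</code> helper that isn't shown. Assuming it splits <code>n</code> bytes off the front of the buffer and errors when the input is too short, a minimal version could look like this:</p>

```rust
// Hypothetical reconstruction of the `consume` helper used above: return
// the first `n` bytes and the remainder, or an error if the buffer is
// shorter than `n` bytes.
fn consume(buf: &[u8], n: usize) -> Result<(&[u8], &[u8]), &'static str> {
    if buf.len() < n {
        Err("unexpected end of input")
    } else {
        Ok(buf.split_at(n))
    }
}
```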
            
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Customers using Cloudflare Images or Image Resizing products are protected against the aCropalypse vulnerability. The Images team addressed the vulnerability in a way that didn’t require any changes to the original images or cause any increased latency or regressions for customers.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <guid isPermaLink="false">1ToJNuP6B50CzxWo0Bqb9M</guid>
            <dc:creator>Nicholas Skehin</dc:creator>
        </item>
        <item>
            <title><![CDATA[SVG support in Cloudflare Images]]></title>
            <link>https://blog.cloudflare.com/svg-support-in-cloudflare-images/</link>
            <pubDate>Wed, 21 Sep 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Images now supports storing and delivering SVG files ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare Images was announced one year ago <a href="/announcing-cloudflare-images/">on this very blog</a> to help you solve the problem of delivering images in the right size, right quality and fast. Very fast.</p><p>It doesn’t really matter if you only run a personal blog, or a portal with thousands of vendors and millions of end-users. Doesn’t matter if you need one hundred images to be served one thousand times each at most, or if you deal with tens of millions of new, unoptimized, images that you deliver billions of times per month.</p><p>We want to remove the complexity of dealing with the need to store, to process, resize, re-encode and serve the images using multiple platforms and vendors.</p><p>At the time we wrote:</p><blockquote><p><i>Images is a single product that stores, resizes, optimizes and serves images. We built Cloudflare Images, so customers of all sizes can build a scalable and affordable image pipeline in minutes.</i></p></blockquote><p>We supported the most common formats, such as JPG, WebP, PNG and GIF.</p><p>We did not feel the need to support SVG files. SVG files are inherently scalable, so there is nothing to resize on the server side before serving them to your audience. One can even argue that SVG files are documents that can generate images through mathematical formulas of vectors and nodes, but are not images <i>per se.</i></p><p>There was also the clear notion that SVG files were a potential risk due to known and <a href="https://www.fortinet.com/blog/threat-research/scalable-vector-graphics-attack-surface-anatomy">well documented</a> vulnerabilities. We knew we could do something from the security angle, but still, why go through that workload if it <i>didn’t make sense</i> in the first place to consider an SVG as a supported format.</p><p>Not supporting SVG files, though, did bring a set of challenges to an increasing number of our customers. 
<a href="https://w3techs.com/technologies/details/im-svg">Some stats already show that around 50% of websites serve SVG files</a>, which matches the pulse we got from talking with many of you, customers and community.</p><p>If you relied on SVGs, you had to select a second storage location or a second image platform elsewhere. That commonly resulted in an egress fee when serving an uncached file from that source, and it goes against what we want for our product: one image pipeline to cover all your needs.</p><p>We heard loud and clear, and starting from today, you can store and serve SVG files, safely, with Cloudflare Images.</p>
    <div>
      <h3>SVG, what is so special about them?</h3>
      <a href="#svg-what-is-so-special-about-them">
        
      </a>
    </div>
    <p>The Scalable Vector Graphics file type is great for serving all kinds of illustrations, charts, logos, and icons.</p><p>SVG files don't represent images as pixels, but as geometric shapes (lines, arcs, polygons) that can be drawn with perfect sharpness at any resolution.</p><p>Let’s now use a complex image as an example, one filled with more than four hundred paths and ten thousand nodes:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ruJDon2gjvBXwHi9DGsA7/997727a99a00188695871c37f08adf46/uHWAmWDUYVNmDByskHnBsSf_-poXNMAz7sxTw-bjNYHldqbU5ecTj_upSCKIHoXRnolnrlpPqvyDbBray-TaRkDJcGOO9CKQUdY3CpvwmaNn0rRkqqnPLAJJaE0D.png" />
            
            </figure><p>Unlike bitmaps, where pixels arrange together to create the visual perception of an image, a vector image can be resized with no quality loss. That happens because resizing that SVG to 300% of its original size means redefining the size of the vectors to 300%, not expanding pixels to 300%.</p><p>This becomes evident when we’re dealing with low-resolution images.</p><p>Here is the 100px-wide SVG of the Toroid shown above:</p><p><img src="http://staging.blog.mrk.cfdata.org/content/images/2022/09/Toroid.svg" /></p><p>and the corresponding 100px-wide PNG:</p><p><img src="http://staging.blog.mrk.cfdata.org/content/images/2022/09/image3-18.png" /></p><p>Now here is the same SVG with the HTML width attribute set at 300px:</p><p><img src="http://staging.blog.mrk.cfdata.org/content/images/2022/09/Toroid.svg" /></p><p>and the same PNG you saw before, upscaled by 3x so the width is also 300px:</p><p><img src="http://staging.blog.mrk.cfdata.org/content/images/2022/09/unnamed.png" /></p><p>The visual quality loss on the PNG is obvious when it gets scaled up.</p><p>Keep in mind: the Toroid shown above is stored in an SVG file of 142 KB, and that is already a very complex and heavy SVG file.</p><p>Now, if you want to display a PNG with an original width of 1024px to present a high-quality image of the same Toroid, the size becomes an issue:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1haST58mkysZs17Fn2dNv4/d297d11a7aef91c24c8fcda6a89ddf4f/unnamed--1-.png" />
            
            </figure><p>The new 1024px PNG, however, weighs 344 KB. That’s about 2.4 times the weight of the single SVG that you could use at any size.</p><p>Think about the storage and bandwidth savings: to display the exact same image, all you need is a <code>width="1024"</code> attribute in your HTML, at less than half the kilobytes of the PNG.</p><p>Couple all of this with the flexibility of attributes like <code>viewBox</code> in your HTML code, and you can pan, zoom, crop, and scale, all without ever needing anything other than the one original SVG file.</p>
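<p>As a minimal illustration of that technique (a hypothetical drawing, not the Toroid above), cropping and zooming can be done purely by changing the <code>viewBox</code> attribute, with no change to the shapes themselves:</p>

```xml
<!-- The full drawing: a circle centered on a 100x100 canvas -->
<svg width="300" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="40" fill="orange"/>
</svg>

<!-- The same markup zoomed into the top-left quadrant: only viewBox changed -->
<svg width="300" viewBox="0 0 50 50" xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="40" fill="orange"/>
</svg>
```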
<p>Let’s quickly summarize what we’ve covered so far: SVG files are wonderful for vector images like illustrations, charts, and logos, and they are infinitely scalable with no need to resize on the server side. The same image as a bitmap is either heavier than the SVG at high resolutions, or suffers very noticeable loss of visual quality when scaled up from a lower resolution.</p>
    <div>
      <h3>So, what are the downsides of using SVG files?</h3>
      <a href="#so-what-are-the-downsides-of-using-svg-files">
        
      </a>
    </div>
    <p>SVG files aren't just images. They are XML-based documents that are as powerful as HTML pages. They can contain arbitrary JavaScript, fetch external content from other URLs, or embed HTML elements. This gives SVG files much more power than expected from a simple image.</p><p>Over the years, numerous exploits have been identified and corrected.</p><p>Some old attacks were very rudimentary, yet effective. The famous <a href="https://en.wikipedia.org/wiki/Billion_laughs_attack">Billion Laughs</a> exploited how <a href="https://www.w3resource.com/xml/entities.php">XML uses entities and declares them in the Document Type Definition</a>, and how it handles recursion.</p><p>Entities can be something as simple as a declaration of a text string, or a nested reference to other previous entities.</p><p>If you defined a first entity as a simple string, then a second entity referencing the first one 10 times, then a third referencing the second 10 times, and so on up to a tenth, you were requiring the parser to expand a billion copies of that very simple first string. This would most commonly exhaust resources on the server parsing the XML and cause a <a href="https://www.cloudflare.com/en-gb/learning/ddos/what-is-a-ddos-attack/">DoS</a>. While that particular weakness in XML parsing was widely addressed through parser memory caps and lazy loading of entities, more complex attacks have become a regular occurrence in recent years.</p><p>The common themes in these more recent attacks have been <a href="https://www.cloudflare.com/learning/security/how-to-prevent-xss-attacks/">XSS (cross-site scripting)</a> and foreign objects referenced in the XML content. In both cases, embedding untrusted SVG files in your HTML is an invitation for any ill-intended file to reach your end-users. So, what exactly can we do about it to make you trust any SVG file you serve?</p>
    <div>
      <h3>The SVG filter</h3>
      <a href="#the-svg-filter">
        
      </a>
    </div>
    <p>We've developed a filter that reduces SVG files to only the features used for images, so that serving SVG images from any source is just as safe as serving a JPEG or PNG, while preserving SVG's vector graphics capabilities.</p><ul><li><p>We remove scripting. This prevents SVG files from being used for cross-site scripting attacks. Although browsers don't allow scripts in &lt;img&gt;, they would run scripts when SVG files are opened directly as a top-level document.</p></li><li><p>We remove hyperlinks to other documents. This makes SVG files less attractive for SEO spam and phishing.</p></li><li><p>We remove references to cross-origin resources. This stops third parties from tracking who is viewing the image.</p></li></ul><p>What's left is just an image.</p><p>SVG files can also contain embedded images in other formats, like JPEG and PNG, in the form of <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs">Data URLs</a>. We treat these embedded images just like any other image we process, and optimize them too. We don't support SVG files recursively embedded in SVG, though: recursive parsing opens the door to resource exhaustion on the parser. While the most common browsers already limit SVG recursion to one level, the potential for abuse led us to leave this capability out of our filter, at least for now.</p><p>We also set a Content-Security-Policy (CSP) header on all our HTTP responses to disable unwanted features. That alone acts as a first line of defense, but filtering works in more depth in case these headers are lost (e.g. if the image is saved as a file and served elsewhere).</p><p>Our tool is <a href="https://github.com/cloudflare/svg-hush">open-source</a>. It's written in Rust and can filter SVG files in a streaming fashion without buffering, so it's fast enough to filter on the fly.</p><p>The SVG format is pretty complex, with lots of features. If there is safe SVG functionality that we don't support yet, you can report issues and contribute to the development of the filter.</p><p>You can see how the tool actually works by looking at the tests folder in the open-source repository, where a sample unfiltered XML file and its already-filtered version are present.</p><p>Here's what a diff of those files looks like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61edBaHRCc923sZoBnknKB/dd590659eaab4dafb9c637434e3411a4/image5-8.png" />
            
            </figure><p>Removed are the external references, foreignObjects and any other potential threats.</p>
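    <p>As a rough sketch of the defense-in-depth headers mentioned earlier, this is the kind of response-header set one could attach when serving untrusted SVG. The directive values here are illustrative only, not Cloudflare's exact policy:</p>

```javascript
// Hypothetical headers for serving an untrusted SVG more safely.
// Illustrative values — not Cloudflare's actual configuration.
function svgResponseHeaders() {
  return {
    "Content-Type": "image/svg+xml",
    // Even if filtering missed something, the CSP blocks scripts,
    // external loads, and plugins when the SVG is opened directly.
    "Content-Security-Policy":
      "default-src 'none'; style-src 'unsafe-inline'; sandbox",
    // Discourage content-type sniffing of the SVG body.
    "X-Content-Type-Options": "nosniff",
  };
}

console.log(svgResponseHeaders()["Content-Type"]); // image/svg+xml
```

As the post notes, headers only protect the image while it is served from this origin; the filter is what keeps the file safe after it is saved and re-served elsewhere.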
    <div>
      <h3>How you can use SVG files in Cloudflare Images</h3>
      <a href="#how-you-can-use-svg-files-in-cloudflare-images">
        
      </a>
    </div>
    <p>Starting now, you can upload SVG files to Cloudflare Images and serve them at will. Uploading works just like any other supported format, <a href="https://developers.cloudflare.com/images/cloudflare-images/upload-images/dashboard-upload/">via UI</a> or <a href="https://developers.cloudflare.com/images/cloudflare-images/upload-images/upload-via-url/">API</a>.</p><div></div><p>Variants, <a href="https://developers.cloudflare.com/images/cloudflare-images/transform/resize-images/">named</a> or <a href="https://developers.cloudflare.com/images/cloudflare-images/transform/flexible-variants/">flexible</a>, are intended to transform bitmap (raster) images into whatever size you want to serve them at.</p><p>SVG files, as vector images, do not require resizing inside the Images pipeline.</p><p>As a result, a banner with the following message appears when you preview an SVG in the UI:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7DBn12LoNf1pQdIV0yxlN3/737bebe7ab5bb17f2883448aaea23524/image1-30.png" />
            
            </figure><p>And as a result, all variants listed will show the exact same image in the exact same dimensions.</p><p>Because an image is worth a thousand words, especially when trying to describe behaviors, here is what it will look like when you scroll through the variants preview:</p><div></div><p>With Cloudflare Images, you get a default Public Variant when you start using the product, so you can immediately start serving your SVG files with it, just like this:</p><p><a href="https://imagedelivery.net/">https://imagedelivery.net/</a>&lt;your_account_hash&gt;/&lt;your_SVG_ID&gt;/public</p><p>And, as shown above, you can use any of your variant names to serve the image, as it won't affect the output at all.</p><p>If you're an Image Resizing customer, you can also benefit from serving your files with our tool. Head to the <a href="https://developers.cloudflare.com/images/image-resizing/">Developer Documentation</a> to see how.</p>
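    <p>Assembling the delivery URL can be expressed as a tiny helper. The account hash and image ID below are placeholders:</p>

```javascript
// Builds a Cloudflare Images delivery URL from account hash, image ID,
// and variant. Since variants don't affect SVG output, the default
// "public" variant works as well as any other.
function deliveryURL(accountHash, imageId, variant = "public") {
  return `https://imagedelivery.net/${accountHash}/${imageId}/${variant}`;
}

console.log(deliveryURL("<your_account_hash>", "<your_SVG_ID>"));
// → https://imagedelivery.net/<your_account_hash>/<your_SVG_ID>/public
```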
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>You can subscribe to Cloudflare Images <a href="https://dash.cloudflare.com/?to=/:account/images">directly in the dashboard</a>, and starting from today you can use the product to store and serve SVG files.</p><p>If you want to contribute to further development of the filtering tool and help expand its abilities, check out our <a href="https://github.com/cloudflare/svg-hush">SVG-Hush Tool repo</a>.</p><p>You can also connect directly with the team in our <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developers Discord Server</a>.</p> ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <guid isPermaLink="false">5Z8tlaSgZifZHEK46BkW2r</guid>
            <dc:creator>Paulo Costa</dc:creator>
            <dc:creator>Yevgen Safronov</dc:creator>
            <dc:creator>Kornel Lesiński</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the Cloudflare Images Sourcing Kit]]></title>
            <link>https://blog.cloudflare.com/cloudflare-images-sourcing-kit/</link>
            <pubDate>Fri, 13 May 2022 12:59:25 GMT</pubDate>
            <description><![CDATA[ Migrating millions of images into Cloudflare is now simple, fast, and just a few clicks away. The new Cloudflare Images Sourcing Kit allows you to define your image sources and reuse them when you need to add new images or refresh existing ones. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When we announced <a href="/announcing-cloudflare-images-beta/">Cloudflare Images to the world</a>, we introduced a way to store images within the product and help customers move away from the egress fees incurred when using remote sources for their deliveries via Cloudflare.</p><p>To <a href="https://www.cloudflare.com/products/cloudflare-images/">store the images in Cloudflare</a>, customers can upload them <a href="https://developers.cloudflare.com/images/cloudflare-images/upload-images/dashboard-upload/">via UI</a> with a simple drag and drop, or <a href="https://developers.cloudflare.com/images/cloudflare-images/api-request/">via API</a> for scenarios with a high number of objects, where scripting the upload process makes more sense.</p><p>To create flexibility in how to import the images, we’ve recently also added the ability to <a href="https://developers.cloudflare.com/images/cloudflare-images/upload-images/upload-via-url/">upload via URL</a> or <a href="https://developers.cloudflare.com/images/cloudflare-images/upload-images/custom-id/">define custom names and paths for your images</a> to allow a simple mapping between customer repositories and the objects in Cloudflare. It's also possible to <a href="https://developers.cloudflare.com/images/cloudflare-images/serve-images/#serving-images-from-custom-domains">serve from a custom hostname</a> to create flexibility in how your end-users see the path, to improve delivery performance by removing the need for additional TLS negotiations, or to improve your brand recognition through URL consistency.</p><p>Still, there was no simple way to tell our product: <i>“Tens of millions of images are in this repository URL. Go and grab them all for me”</i>.</p><p>In some scenarios, our customers have buckets with millions of images to upload to Cloudflare Images. Their goal is to migrate all objects to Cloudflare through a one-time process, allowing them to drop the external storage altogether.</p><p>In another common scenario, different departments in larger companies use independent systems configured with varying storage repositories, all of which they feed at specific times with uneven upload volumes. They would benefit from reusing source definitions to get all those new images into Cloudflare, ensuring the portfolio is up-to-date while not paying egregious egress fees by serving the public directly from those multiple storage providers.</p><p>These situations required the upload process to Cloudflare Images to include logistical coordination and scripting knowledge. Until now.</p>
    <div>
      <h3>Announcing the Cloudflare Images Sourcing Kit</h3>
      <a href="#announcing-the-cloudflare-images-sourcing-kit">
        
      </a>
    </div>
    <p>Today, we are happy to share with you our Sourcing Kit, where you can define one or more sources containing the objects you want to migrate to Cloudflare Images.</p><p>But, what exactly is Sourcing? In industries like manufacturing, it implies a number of operations, from selecting suppliers, to vetting raw materials, to delivering reports to the process owners.</p><p>So, we borrowed that definition and translated it into a set of Cloudflare Images capabilities allowing you to:</p><ol><li><p>Define one or multiple repositories of images to bulk import;</p></li><li><p>Reuse those sources and import only new images;</p></li><li><p>Make sure that only actual usable images are imported, and not other objects or file types that exist in that source;</p></li><li><p>Define the target path and filename for imported images;</p></li><li><p>Obtain logs for the bulk operations.</p></li></ol><p>The new kit does it all. So let's go through it.</p>
    <div>
      <h3>How the Cloudflare Images Sourcing Kit works</h3>
      <a href="#how-the-cloudflare-images-sourcing-kit-works">
        
      </a>
    </div>
    <p>In the <a href="https://dash.cloudflare.com/?to=/:account/images">Cloudflare Dashboard</a>, you will soon find the Sourcing Kit under Images.</p><p>In it, you will be able to create a new source definition, view existing ones, and view the status of the last operations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4SZEMoU2nrlZPvDawlEpXZ/f14ac5bdf189995fa2f5c0429811cf6e/image5-12.png" />
            
            </figure><p>Clicking on the create button will launch the wizard that will guide you through the first bulk import from your defined source:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5W4ZRJaDMUAzHpxq3j0Nkc/1401923c512b249b716ddac271ee1ff2/image2-32.png" />
            
            </figure><p>First, you will need to input the name of the source and the URL for accessing it. You’ll be able to save the definition and reuse the source whenever you wish. After running the necessary validations, you’ll be able to define the rules for the import process.</p><p>The first option allows an optional Path Prefix. Defining a prefix adds a unique identifier to the images uploaded from this particular source, differentiating them from images imported from other sources.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5syJ0nK1aQtBS1O65s9ao7/1dad4362abe8b7eba03664816090212e/image4-19.png" />
            
            </figure><p>The naming rule in place already respects the source image name and path, so let's assume there's a puppy image to be retrieved at:</p><p><code>https://my-bucket.s3.us-west-2.amazonaws.com/folderA/puppy.png</code></p><p>When imported without any Path Prefix, you’ll find the image at:</p><p><code>https://imagedelivery.net/&lt;AccountId&gt;/folderA/puppy.png</code></p><p>Now, you might want to add a Path Prefix to identify the source, for example by mentioning that this bucket is from the Technical Writing department. In the puppy case, the result would be:</p><p><code>https://imagedelivery.net/&lt;AccountId&gt;/techwriting/folderA/puppy.png</code></p><p>Custom Path Prefixes also provide a way to prevent name clashes coming from other sources.</p><p>Still, there will be times when customers don't want to use them. And, when reusing a source to import images, identical path and filename destinations might clash.</p><p>By default, we don’t overwrite existing images, but we allow you to select that option and refresh your catalog present in the Cloudflare pipeline.</p>
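    <p>The naming rule described above can be sketched as a small helper. This is an illustration of the mapping, not the Sourcing Kit's actual implementation:</p>

```javascript
// Maps a source object URL to its Cloudflare Images path, preserving the
// original path and filename and prepending an optional Path Prefix.
function importedPath(sourceURL, prefix = "") {
  const { pathname } = new URL(sourceURL); // e.g. "/folderA/puppy.png"
  return prefix ? `/${prefix}${pathname}` : pathname;
}

console.log(
  importedPath("https://my-bucket.s3.us-west-2.amazonaws.com/folderA/puppy.png")
); // → /folderA/puppy.png

console.log(
  importedPath(
    "https://my-bucket.s3.us-west-2.amazonaws.com/folderA/puppy.png",
    "techwriting"
  )
); // → /techwriting/folderA/puppy.png
```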
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xRY7HGX666fWFuaKU8bgm/6f0c97e2c024965bf4129d56c57b224a/image6-12.png" />
            
            </figure><p>Once these inputs are defined, a click on the Create and start migration button at the bottom will trigger the upload process.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/301MOQX804MipTdEAzO3Wz/367288704deb02c6a2d5eb284f214867/image10.png" />
            
            </figure><p>This action will show the final wizard screen, where the migration status is displayed. The progress log will report any errors obtained during the upload and is also available to download.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qz31zhtbNMiIAiRceslTp/7d2e1128ffd82e86cd4752157fdeaaf9/image7-6.png" />
            
            </figure><p>You can reuse, edit or delete source definitions when no operations are running, and at any point, from the home page of the kit, it's possible to access the status and return to the ongoing or last migration report.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13JDKst8pmw43G21YQ1yi2/7a5c8ee33a2e3ab31675a4caf97a8e4c/image3-24.png" />
            
            </figure>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>With the Beta version of the Cloudflare Images Sourcing Kit, we will allow you to define AWS S3 buckets as a source for the imports. In the following versions, we will enable definitions for other common repositories, such as the ones from Azure Storage Accounts or Google Cloud Storage.</p><p>And while we're aiming for this to be a simple UI, we also plan to make everything available through CLI: from defining the repository URL to starting the upload process and retrieving a final report.</p>
    <div>
      <h3>Apply for the Beta version</h3>
      <a href="#apply-for-the-beta-version">
        
      </a>
    </div>
    <p>We will be releasing the Beta version of this kit in the following weeks, allowing you to source your images from third party repositories to Cloudflare.</p><p>If you want to be the first to use Sourcing Kit, request to join the waitlist on the <a href="https://dash.cloudflare.com/?to=/:account/images">Cloudflare Images dashboard</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/75BuXJQtfqBDv8SOx3JKzs/3b765c95ac58e7bb7a8eeade89d323ea/image1-39.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">72mbRGNXN4aGuGsSccsIqq</guid>
            <dc:creator>Paulo Costa</dc:creator>
            <dc:creator>Natalie Yeh</dc:creator>
            <dc:creator>Yevgen Safronov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a full stack application with Cloudflare Pages]]></title>
            <link>https://blog.cloudflare.com/building-full-stack-with-pages/</link>
            <pubDate>Wed, 17 Nov 2021 13:58:53 GMT</pubDate>
            <description><![CDATA[ Full-stack support for Cloudflare Pages is now in open beta, and you can test it today with this example image-sharing project that integrates with KV, Durable Objects, Cloudflare Images and Cloudflare Access.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We were so excited to <a href="/cloudflare-pages-goes-full-stack">announce support for full stack applications in Cloudflare Pages</a> that we knew we had to show it off in a big way. We've built a sample image-sharing platform to demonstrate how you can add serverless functions right from within Pages with help from Cloudflare Workers. With just one new file in your project, you can add dynamic rendering, interact with other APIs, and persist data with KV and Durable Objects. The possibilities for full-stack applications, in combination with Pages' quick development cycles and unlimited preview environments, give you the power to create almost any application.</p><p>Today, we're walking through our example image-sharing platform. We want to be able to share pictures with friends while also keeping some images private. We'll build a JSON API with Functions (storing data on KV and Durable Objects), integrate with Cloudflare Images and Cloudflare Access, and use React for our front end.</p><p>If you want to dive right into the good stuff, <a href="https://images.pages.dev/">our demo instance is published here</a>, and <a href="https://github.com/cloudflare/images.pages.dev">the code is on GitHub</a>, but stick around for a more gentle approach.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ONc6WtXvEqXxJXIVlwX4o/85109aa1b16e6a79fc0b1ed60eccf485/image2-17.png" />
            
            </figure>
    <div>
      <h2>Building serverless functions with Cloudflare Pages</h2>
      <a href="#building-serverless-functions-with-cloudflare-pages">
        
      </a>
    </div>
    
    <div>
      <h3>File-based routing</h3>
      <a href="#file-based-routing">
        
      </a>
    </div>
    <p>If you're not already familiar, Cloudflare Pages <a href="https://developers.cloudflare.com/pages/get-started">connects with your git provider</a> (GitHub and <a href="/cloudflare-pages-partners-with-gitlab">GitLab</a>), and automates the deployment of your static site to Cloudflare's network. Functions lets you enhance these apps by sprinkling in dynamic data. If you haven't already, <a href="https://dash.cloudflare.com/sign-up/pages">you can sign up here</a>.</p><p>In our project, let's create a new function:</p>
            <pre><code>// ./functions/time.js


export const onRequest = () =&gt; {
  return new Response(new Date().toISOString())
}</code></pre>
            <p><code>git commit</code>-ing and pushing this file should trigger a build and deployment of your first Pages function. Any requests for <code>/time</code> will be served by this function, and all other requests will fall back to the static assets of your project. Placing Functions files in directories works as you'd expect: <code>./functions/api/time.js</code> would be available at <code>/api/time</code> and <code>./functions/some_directory/index.js</code> would be available at <code>/some_directory</code>.</p><p>We also support TypeScript (<code>./functions/time.ts</code> would work just the same), as well as parameterized files:</p><ul><li><p><code>./functions/todos/[id].js</code> with single square brackets will match all requests like <code>/todos/123</code>;</p></li><li><p>and <code>./functions/todos/[[path]].js</code> with double square brackets will match requests for any number of path segments (e.g. <code>/todos/123/subtasks</code>).</p></li></ul><p>We declare a <code>PagesFunction</code> type in the <a href="https://github.com/cloudflare/workers-types">@cloudflare/workers-types</a> library which you can use to type-check your Functions.</p>
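            <p>The routing rules above can be sketched as a toy matcher. This is an illustration of how file paths could map onto request paths, not Pages' actual router:</p>

```javascript
// Toy matcher for Pages-style file-based routes: static segments must match
// exactly, [param] matches one segment, and [[param]] matches one or more
// remaining segments. Illustrative only — not Pages' internal implementation.
function matches(routeFile, pathname) {
  const route = routeFile
    .replace(/^\.\/functions/, "")
    .replace(/\.(js|ts)$/, "");
  const routeParts = route.split("/").filter(Boolean);
  const pathParts = pathname.split("/").filter(Boolean);

  for (let i = 0; i < routeParts.length; i++) {
    const part = routeParts[i];
    if (part.startsWith("[[")) return pathParts.length > i; // catch-all
    if (part.startsWith("[")) {
      if (pathParts[i] === undefined) return false; // single param, must exist
      continue;
    }
    if (part !== pathParts[i]) return false; // static segment mismatch
  }
  return routeParts.length === pathParts.length;
}

console.log(matches("./functions/todos/[id].js", "/todos/123")); // true
console.log(matches("./functions/todos/[[path]].js", "/todos/123/subtasks")); // true
console.log(matches("./functions/time.js", "/other")); // false
```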
    <div>
      <h3>Dynamic data</h3>
      <a href="#dynamic-data">
        
      </a>
    </div>
    <p>So, returning to our image-sharing app, let's assume we already have some images uploaded, and we want to display them on the homepage. We'll need an endpoint which will return a list of these images, which the front-end can call:</p>
            <pre><code>// ./functions/api/images.ts

export const jsonResponse = (value: any, init: ResponseInit = {}) =&gt;
  new Response(JSON.stringify(value), {
    headers: { "Content-Type": "application/json", ...init.headers },
    ...init,
  });

const generatePreviewURL = ({
  previewURLBase,
  imagesKey,
  isPrivate,
}: {
  previewURLBase: string;
  imagesKey: string;
  isPrivate: boolean;
}) =&gt; {
  // If isPrivate, generates a signed URL for the 'preview' variant
  // Else, returns the 'blurred' variant URL which never requires signed URLs
  // https://developers.cloudflare.com/images/cloudflare-images/serve-images/serve-private-images-using-signed-url-tokens

  return "SIGNED_URL";
};

export const onRequestGet: PagesFunction&lt;{
  IMAGES: KVNamespace;
}&gt; = async ({ env }) =&gt; {
  const { imagesKey } = (await env.IMAGES.get("setup", "json")) as Setup;

  const kvImagesList = await env.IMAGES.list&lt;ImageMetadata&gt;({
    prefix: `image:uploaded:`,
  });

  const images = kvImagesList.keys
    .map((kvImage) =&gt; {
      try {
        const { id, previewURLBase, name, alt, uploaded, isPrivate } =
          kvImage.metadata as ImageMetadata;

        const previewURL = generatePreviewURL({
          previewURLBase,
          imagesKey,
          isPrivate,
        });

        return {
          id,
          previewURL,
          name,
          alt,
          uploaded,
          isPrivate,
        };
      } catch {
        return undefined;
      }
    })
    .filter((image) =&gt; image !== undefined);

  return jsonResponse({ images });
};</code></pre>
            <p>Eagle-eyed readers will notice we're exporting <code>onRequestGet</code> which lets us only respond to <code>GET</code> requests.</p><p>We're also using a KV namespace (accessed with <code>env.IMAGES</code>) to store information about images that have been uploaded. To create a binding in your Pages project, navigate to the "Settings" tab.</p><p><i>[Screenshot: the "Functions" page on the Pages project "Settings" tab in the Cloudflare dashboard]</i></p>
    <div>
      <h3>Interfacing with other APIs</h3>
      <a href="#interfacing-with-other-apis">
        
      </a>
    </div>
    <p>Cloudflare Images is an inexpensive, high-performance, and featureful service for <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosting</a> and transforming images. You can create multiple variants to render your images in different ways and control access with signed URLs. We'll add a function to interface with this service's API and upload incoming files to Cloudflare Images:</p>
            <pre><code>// ./functions/api/admin/upload.ts

export const onRequestPost: PagesFunction&lt;{
  IMAGES: KVNamespace;
}&gt; = async ({ request, env }) =&gt; {
  const { apiToken, accountId } = (await env.IMAGES.get(
    "setup",
    "json"
  )) as Setup;

  // Prepare the Cloudflare Images API request body
  const formData = await request.formData();
  formData.set("requireSignedURLs", "true");
  const alt = formData.get("alt") as string;
  formData.delete("alt");
  const isPrivate = formData.get("isPrivate") === "on";
  formData.delete("isPrivate");

  // Upload the image to Cloudflare Images
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/images/v1`,
    {
      method: "POST",
      body: formData,
      headers: {
        Authorization: `Bearer ${apiToken}`,
      },
    }
  );

  // Store the image metadata in KV
  const {
    result: {
      id,
      filename: name,
      uploaded,
      variants: [url],
    },
  } = await response.json&lt;{
    result: {
      id: string;
      filename: string;
      uploaded: string;
      requireSignedURLs: boolean;
      variants: string[];
    };
  }&gt;();

  const metadata: ImageMetadata = {
    id,
    previewURLBase: url.split("/").slice(0, -1).join("/"),
    name,
    alt,
    uploaded,
    isPrivate,
  };

  await env.IMAGES.put(
    `image:uploaded:${uploaded}`,
    "Values stored in metadata.",
    { metadata }
  );
  await env.IMAGES.put(`image:${id}`, JSON.stringify(metadata));

  return jsonResponse(true);
};</code></pre>
            
    <div>
      <h3>Persisting data</h3>
      <a href="#persisting-data">
        
      </a>
    </div>
    <p>We're already using KV to store information that is read often but rarely written to. What about features that require a bit more synchronicity?</p><p>Let's add a download counter to each of our images. We can create a <code>highres</code> variant in Cloudflare Images, and increment the counter every time a user requests a link. This requires a bit more setup, but unlocking the power of Durable Objects in your projects is absolutely worth it.</p><p>We'll need to create and publish the Durable Object class capable of maintaining this download count:</p>
            <pre><code>// ./durable_objects/downloadCounter.js

export class DownloadCounter {
  constructor(state) {
    this.state = state;
    // `blockConcurrencyWhile()` ensures no requests are delivered until initialization completes.
    this.state.blockConcurrencyWhile(async () =&gt; {
      let stored = await this.state.storage.get("value");
      this.value = stored || 0;
    });
  }

  async fetch(request) {
    const url = new URL(request.url);
    let currentValue = this.value;

    if (url.pathname === "/increment") {
      currentValue = ++this.value;
      await this.state.storage.put("value", currentValue);
    }

    return jsonResponse(currentValue);
  }
}</code></pre>
            
    <div>
      <h3>Middleware</h3>
      <a href="#middleware">
        
      </a>
    </div>
    <p>If you need to execute some code (such as authentication or logging) before you run your function, Pages offers easy-to-use middleware which can be applied at any level in your file-based routing. By creating a <code>_middleware.ts</code> file in a directory, Pages knows to run this file first, and then execute your function when <code>next()</code> is called.</p><p>In our application, we want to prevent unauthorized users from uploading images (<code>/api/admin/upload</code>) or deleting images (<code>/api/admin/delete</code>). Cloudflare Access lets us apply <a href="https://www.cloudflare.com/learning/access-management/role-based-access-control-rbac/">role-based access control</a> to all or part of our application, and it takes only a single file to integrate it into our serverless functions. We create <code>./functions/api/admin/_middleware.ts</code> which will apply to all incoming requests at <code>/api/admin/*</code>:</p>
            <pre><code>// ./functions/api/admin/_middleware.ts

const validateJWT = async (jwtAssertion: string | null, aud: string) =&gt; {
  // If the JWT is valid, return the JWT payload
  // Else, return false
  // https://developers.cloudflare.com/cloudflare-one/identity/users/validating-json

  return jwtPayload;
};

const cloudflareAccessMiddleware: PagesFunction&lt;{ IMAGES: KVNamespace }&gt; =
  async ({ request, env, next, data }) =&gt; {
    const { aud } = (await env.IMAGES.get("setup", "json")) as Setup;

    const jwtPayload = await validateJWT(
      request.headers.get("CF-Access-JWT-Assertion"),
      aud
    );

    if (jwtPayload === false)
      return new Response("Access denied.", { status: 403 });

    // We could also use the data object to pass information between middlewares
    data.user = jwtPayload.email;

    return await next();
  };

export const onRequest = [cloudflareAccessMiddleware];</code></pre>
            <p>Middleware is a powerful tool at your disposal allowing you to easily protect parts of your application with Cloudflare Access, or quickly integrate with <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> and error logging platforms such as Honeycomb and Sentry.</p>
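    <p>The hand-off through <code>next()</code> and the shared <code>data</code> object can be sketched as a minimal chain runner. The names and shapes here are illustrative, not Pages' internal implementation:</p>

```javascript
// Minimal sketch of a middleware chain in the spirit of Pages Functions:
// handlers run in order, and calling next() hands off to the following one.
async function runChain(handlers, request) {
  let index = 0;
  const data = {}; // shared between middlewares, like the `data` object above
  const next = async () => {
    const handler = handlers[index++];
    return handler({ request, data, next });
  };
  return next();
}

// A hypothetical auth middleware: reject requests without a credential header.
const authMiddleware = async ({ request, data, next }) => {
  const user = request.headers["x-user"];
  if (user === undefined) return "Access denied.";
  data.user = user; // pass information along to later handlers
  return next();
};

// The final handler reads what the middleware stored.
const handler = async ({ data }) => `hello ${data.user}`;

runChain([authMiddleware, handler], { headers: { "x-user": "greg" } })
  .then(console.log); // → hello greg
```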
    <div>
      <h2>Integrating as Jamstack</h2>
      <a href="#integrating-as-jamstack">
        
      </a>
    </div>
    <p>The "Jam" of "Jamstack" stands for JavaScript, API and Markup. Cloudflare Pages previously provided the 'J' and 'M', and with Workers in the middle, you can truly go full-stack Jamstack.</p><p>We've built the front end of this image sharing platform with <a href="https://create-react-app.dev/">Create React App</a> as an approachable example, but <a href="https://developers.cloudflare.com/pages/platform/build-configuration#framework-presets">Cloudflare Pages natively integrates with an ever-growing number of frameworks</a> (currently 23), and you can always <a href="https://developers.cloudflare.com/pages/platform/build-configuration#build-commands-and-directories">configure your own entirely custom build command</a>.</p><p>Your front end simply needs to make a call to the Functions we've already configured, and render out that data. We're using <a href="https://swr.vercel.app/">SWR</a> to simplify things, but you could do this with entirely vanilla JavaScript <code>fetch</code>-es, if that's your preference.</p>
            <pre><code>// ./src/components/ImageGrid.tsx

export const ImageGrid = () =&gt; {
  const { data, error } = useSWR&lt;{ images: Image[] }&gt;("/api/images");

  if (error || data === undefined) {
    return &lt;div&gt;An unexpected error has occurred when fetching the list of images. Please try again.&lt;/div&gt;;
  }


  return (
    &lt;div&gt;
      {data.images.map((image) =&gt; (
        &lt;ImageCard image={image} key={image.id} /&gt;
      ))}
    &lt;/div&gt;
  );

}</code></pre>
            
    <div>
      <h2>Local development</h2>
      <a href="#local-development">
        
      </a>
    </div>
    <p>No matter how fast it is, iterating on a project like this can be painful if you have to push up every change in order to test how it works. We've released a first-class integration with wrangler for local development of Pages projects, including full support for Functions, Workers, secrets, environment variables and KV. Durable Objects support is coming soon.</p><p>Install from npm:</p>
            <pre><code>npm install wrangler@beta</code></pre>
            <p>and either serve a folder of static assets, or proxy your existing tooling:</p>
            <pre><code># Serve a directory
npx wrangler pages dev ./public

# or integrate with your other tools
npx wrangler pages dev -- npx react-scripts start</code></pre>
            
    <div>
      <h2>Go forth, and build!</h2>
      <a href="#go-forth-and-build">
        
      </a>
    </div>
    <p>If you like puppies, <a href="https://images.pages.dev/">we've deployed our image-sharing application here</a>, and if you like code, <a href="https://github.com/cloudflare/images.pages.dev">that's over on GitHub</a>. Feel free to fork and deploy it yourself! There's a five-minute setup wizard, and you'll need Cloudflare Images, Access, Workers, and Durable Objects.</p><p>We are so excited about the future of the Pages platform, and we want to hear what you're building! Show off your full-stack applications in the <a href="https://discord.com/channels/595317990191398933/783765338692386886">#what-i-built channel</a>, or get assistance in the <a href="https://discord.com/channels/595317990191398933/789155108529111069">#pages-help channel</a> on <a href="https://discord.gg/cloudflaredev">our Discord server</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3FMhGC7kxjUEspTnvUGjeQ/50a9a9bd201ed390f1f62c72bc9e2cb4/image1-37.png" />
            
            </figure>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Full Stack]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">48ToA8dVaTtXmVL58V9bJm</guid>
            <dc:creator>Greg Brimble</dc:creator>
            <dc:creator>Obinna Ekwuno</dc:creator>
        </item>
        <item>
            <title><![CDATA[Optimizing images on the web]]></title>
            <link>https://blog.cloudflare.com/optimizing-images/</link>
            <pubDate>Wed, 15 Sep 2021 12:59:31 GMT</pubDate>
            <description><![CDATA[ A detailed breakdown of how best to optimize images for the web, a new tool to test a webpage's image performance, and explanation of how Cloudflare Images can help to improve your website's image experience. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4erH7OasTo8mE8mhF1oEu4/f83683295d2d6dede03fc6b38fb70faa/tXnAVVjW7s4o475UtWZNzRyZXKdKpRWniLRUziVX0ohuQehj0NDgNTW7FaMAE9ZUEdvpU04d4fR7_1XqVseW0mgA0fZT2E8KS_3c3GICC6HxpPWG5nMYmhm1b4zl.png" />
            
            </figure><p>Images are a massive part of the Internet. On the median web page, <a href="https://almanac.httparchive.org/en/2020/page-weight">images account for 51% of the bytes loaded</a>, so any improvement made to their speed or their size has a significant impact on performance.</p><p>Today, we are excited to announce Cloudflare’s Image Optimization Testing Tool <i>(as of July 2023, this tool is no longer available)</i>. Simply enter your website’s URL, and we’ll run a series of automated tests to determine if there are any possible improvements you could make in delivering optimal images to visitors.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CZPi22VztOJrNnWuNCsmc/d115048e7177f11dc3b84b30c225147a/SCREENSHOT-2.png" />
            
            </figure>
    <div>
      <h2>How users experience speed</h2>
      <a href="#how-users-experience-speed">
        
      </a>
    </div>
    <p>Everyone who has ever browsed the web has experienced a website that was slow to load. Often, this is a result of poorly optimized images on that webpage that are either too large for purpose or that were embedded on the page with insufficient information.</p><p>Images on a page might take painfully long to load as pixels agonizingly fill in from top-to-bottom; or worse still, they might cause massive shifts of the page layout as the browser learns about their dimensions. These problems are a serious annoyance to users and as of August 2021, search engines punish pages accordingly.</p><p>Understandably, slow page loads have an adverse effect on a page's “bounce rate” which is the percentage of visitors which quickly move off of the page. On e-commerce sites in particular, the bounce rate typically has a direct monetary impact and pages are usually very image-heavy. It is critically important to optimize all the images on your webpages to reduce load on and egress from your origin, to improve your performance in search engine rankings and, ultimately, to provide a great experience for your users.</p>
    <div>
      <h2>Measuring speed</h2>
      <a href="#measuring-speed">
        
      </a>
    </div>
    <p>Since the end of August 2021, <a href="https://developers.google.com/search/blog/2020/11/timing-for-page-experience">Google has used the Core Web Vitals to quantify page performance when considering search results rankings</a>. These metrics are three numbers: <a href="https://web.dev/lcp/">Largest Contentful Paint (LCP)</a>, <a href="https://web.dev/fid/">First Input Delay (FID)</a>, and <a href="https://web.dev/cls/">Cumulative Layout Shift (CLS)</a>. They approximate the experience of loading, interactivity and visual stability respectively.</p><p>CLS and LCP are the two metrics we can improve by optimizing images. When CLS is high, this indicates that large parts of the page layout are shifting as it loads. LCP measures the time it takes for the single largest image or text block in the viewport to render.</p><p>These can both be measured “in the field” with Real User Monitoring (RUM) analytics such as <a href="/start-measuring-web-vitals-with-browser-insights/">Cloudflare's Web Analytics</a>, or in a “lab environment” using <a href="https://images.cloudflare.com/">Cloudflare’s Image Optimization Testing Tool</a>.</p>
    <div>
      <h2>How to optimize for speed</h2>
      <a href="#how-to-optimize-for-speed">
        
      </a>
    </div>
    
    <div>
      <h3>Dimensions</h3>
      <a href="#dimensions">
        
      </a>
    </div>
    <p>One of the most impactful performance improvements a website author can make is ensuring they deliver images with appropriate dimensions. Images taken on a modern camera can be truly massive, and some recent flagship phones have gigantic sensors. The Samsung Galaxy S21 Ultra, for example, has a 108 MP sensor which captures a 12,000 by 9,000 pixel image. That same phone has a screen width of only 1440 pixels. It is physically impossible to show every pixel of the photo on that device: for a landscape photo, only 12% of pixel columns can be displayed.</p><p>Embedding this image on a webpage presents the same problem, but this time, that image and all of its unused pixels are sent over the Internet. Ultimately, this creates unnecessary load on the server, higher egress costs, and longer loading times for visitors. This is exacerbated even further for visitors on mobile since they are often using a slower connection and have limits on their data usage. On a fast 3G connection, that 108 MP photo might consume 26 MB of both the visitor’s data plan and the website’s egress bandwidth, and take more than two minutes to load!</p><p>It might be tempting to always deliver images with the highest possible resolution to avoid “blocky” or pixelated images, but when resizing is done correctly, this is not a problem. “Blocky” artifacts typically occur when an image is processed multiple times (for example, an image is repeatedly uploaded and downloaded by users on a platform which compresses that image). Pixelated images occur when an image has been shrunk to a size smaller than the screen it is rendered on.</p><p>So, how can website authors avoid these pitfalls and ensure a correctly sized image is delivered to visitors’ devices? 
There are two main approaches:</p><ul><li><p><b>Media conditions with </b><code><b>srcset</b></code><b> and </b><code><b>sizes</b></code></p></li></ul><p>When embedding an image on a webpage, traditionally the author would simply pass a <code>src</code> attribute on an <code>img</code> tag:</p>
            <pre><code>&lt;img src="hello_world_12000.jpg" alt="Hello, world!" /&gt;</code></pre>
            <p><a href="https://caniuse.com/srcset">Since 2017, all modern browsers have supported the more dynamic <code>srcset</code> attribute</a>. This allows authors to set multiple image sources, depending on the matching media condition of the visitor’s browser:</p>
            <pre><code>&lt;img srcset="hello_world_1500.jpg 1500w,
             hello_world_2000.jpg 2000w,
             hello_world_12000.jpg 12000w"
     sizes="(max-width: 1500px) 1500px,
            (max-width: 2000px) 2000px,
            12000px"
     src="hello_world_12000.jpg"
     alt="Hello, world!" /&gt;</code></pre>
            <p>Here, with the <code>srcset</code> attribute, we're informing the browser that there are three variants of the image, each with a different intrinsic width: 1,500 pixels, 2,000 pixels and the original 12,000 pixels. The browser then evaluates the media conditions in the <code>sizes</code> attribute ( <code>(max-width: 1500px)</code> and <code>(max-width: 2000px)</code>) in order to select the appropriate image variant from the <code>srcset</code> attribute. If the browser's viewport width is less than 1500px, the <code>hello_world_1500.jpg</code> image variant will be loaded; if the browser's viewport width is between 1500px and 2000px, the <code>hello_world_2000.jpg</code> image variant will be loaded; and finally, if the browser's viewport width is larger than 2000px, the browser will fallback to loading the <code>hello_world_12000.jpg</code> image variant.</p><p>Similar behavior is also possible with a <code>picture</code> element, using the <code>source</code> child element which supports a variety of other selectors.</p><ul><li><p><b><b><b>Client Hints</b></b></b></p></li></ul><p>Client Hints are a standard that some browsers are choosing to implement, and some not. They are a set of HTTP request headers which tell the server about the client's device. For example, the browser can attach a <code>Viewport-Width</code> header when requesting an image which informs the server of the width of that particular browser's viewport (note this header is currently in the process of <a href="https://wicg.github.io/responsive-image-client-hints/#sec-ch-viewport-width">being renamed</a> to <code>Sec-CH-Viewport-Width</code>).</p><p>This simplifies the markup in the previous example greatly — in fact, no changes are required from the original simple HTML:</p>
            <pre><code>&lt;img src="hello_world_12000.jpg" alt="Hello, world!" /&gt;</code></pre>
            <p>If Client Hints are supported, when the browser makes a request for <code>hello_world_12000.jpg</code>, it might attach the following header:</p>
            <pre><code>Viewport-Width: 1440</code></pre>
            <p>The server could then automatically serve a smaller image variant (e.g. <code>hello_world_1500.jpg</code>), despite the request originally asking for <code>hello_world_12000.jpg</code> image.</p><p>By enabling browsers to request an image with appropriate dimensions, we save bandwidth and time for both your server and for your visitors.</p>
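            <p>As a rough sketch, the server-side selection might look like the following (a hypothetical helper, using the variant widths from the example above; parsing the <code>Viewport-Width</code> header value into a number is assumed to have happened already):</p>

```typescript
// Hypothetical sketch: pick the smallest variant that still covers the
// viewport width reported by the (Sec-CH-)Viewport-Width client hint.
const VARIANT_WIDTHS = [1500, 2000, 12000]; // intrinsic widths from the example

function pickVariant(viewportWidth: number): string {
  // Smallest variant at least as wide as the viewport; fall back to the original.
  const width = VARIANT_WIDTHS.find((w) => w >= viewportWidth) ?? 12000;
  return `hello_world_${width}.jpg`;
}
```

            <p>A 1440px-wide viewport would then be served <code>hello_world_1500.jpg</code> rather than the full 12,000px original.</p>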
    <div>
      <h3>Format</h3>
      <a href="#format">
        
      </a>
    </div>
    <p>JPEG, PNG, GIF, WebP, and now, AVIF. AVIF is the latest image format with widespread industry support, and it often outperforms its preceding formats. AVIF supports transparency with an alpha channel, it supports animations, and it is typically 50% smaller than comparable JPEGs (vs. WebP's reduction of only 30%).</p><p><a href="/generate-avif-images-with-image-resizing/">We added the AVIF format to Cloudflare's Image Resizing product last year</a> as soon as Google Chrome added support. Firefox 93 (scheduled for release on October 5, 2021) will be Firefox's first stable release with AVIF support, and with both Microsoft and Apple as members of AVIF's <a href="https://aomedia.org/">Alliance for Open Media</a>, we hope to see support in Edge and Safari soon.</p><p>Before these modern formats, we also saw innovative approaches to improving how an image loads on a webpage. <a href="https://blurha.sh/">BlurHash</a> is a technique of embedding a very small representation of the image inside the HTML markup which can be immediately rendered and acts as a placeholder until the final image loads. This small representation (hash) produces a blurry mix of colors similar to that of the final image and so eases the loading experience for users.</p><p><a href="/parallel-streaming-of-progressive-images/">Progressive JPEGs</a> are similar in effect, but are a built-in feature of the image format itself. Instead of encoding the image bytes from top-to-bottom, bytes are ordered in increasing levels of image detail. This again produces a more subtle loading experience, where the user first sees a low quality image which progressively “enhances” as more bytes are loaded.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24SqYrCdUCp0ADM7W5sh11/ed8511ca138b4db9c92248b7db2cbbe1/Enhance.png" />
            
            </figure>
    <div>
      <h3>Quality</h3>
      <a href="#quality">
        
      </a>
    </div>
    <p>The newer image formats (WebP and AVIF) support lossless compression, unlike their predecessor, JPEG. For some uses, lossless compression might be appropriate, but for the majority of websites, speed is prioritized, and the minor quality loss of lossy compression is worth the time and bytes saved.</p><p>Optimizing where to set the quality is a balancing act: too aggressive and artifacts become visible on your image; too little and the image is unnecessarily large. <a href="https://opensource.google/projects/butteraugli">Butteraugli</a> and <a href="https://en.wikipedia.org/wiki/Structural_similarity">SSIM</a> are examples of algorithms which approximate our perception of image quality, but this is currently difficult to automate and is therefore best set manually. In general, however, we find that around 85% in most compression libraries is a sensible default.</p>
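            <p>For example, with Cloudflare’s Image Resizing URL format, quality can be set per request in the options segment of the path (a hypothetical path, for illustration):</p>

```html
<!-- Hypothetical example: request ~85% quality (and automatic format
     selection) via Cloudflare Image Resizing's /cdn-cgi/image/ URL scheme -->
<img src="/cdn-cgi/image/quality=85,format=auto/uploads/hero.jpg" alt="Hero image" />
```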
    <div>
      <h3>Markup</h3>
      <a href="#markup">
        
      </a>
    </div>
    <p>All of the previous techniques reduce the number of bytes an image uses. This is great for improving the loading speed of those images and the Largest Contentful Paint (LCP) metric. However, to improve the Cumulative Layout Shift (CLS) metric, we must minimize changes to the page layout. This can be done by informing the browser of the image size ahead of time.</p><p>On a poorly optimized webpage, images will be embedded without their dimensions in the markup. The browser fetches those images, and only once it has received the header bytes of the image can it know about the dimensions of that image. The effect is that the browser first renders the page where the image takes up zero pixels, and then suddenly redraws with the dimensions of that image before actually loading the image pixels themselves. This is jarring to users and has a serious impact on usability.</p><p>It is important to include dimensions of the image inside HTML markup to allow the browser to allocate space for that image before it even begins loading. This prevents unnecessary redraws and reduces layout shift. It is even possible to set dimensions when dynamically loading responsive images: by informing the browser of the height and width of the original image, assuming the aspect ratio is constant, it will automatically calculate the correct height, even when using a width selector.</p>
            <pre><code>&lt;img height="9000"
     width="12000"
     srcset="hello_world_1500.jpg 1500w,
             hello_world_2000.jpg 2000w,
             hello_world_12000.jpg 12000w"
     sizes="(max-width: 1500px) 1500px,
            (max-width: 2000px) 2000px,
            12000px"
     src="hello_world_12000.jpg"
     alt="Hello, world!" /&gt;</code></pre>
            <p>Finally, lazy-loading is a technique which reduces the work that the browser has to perform right at the onset of page loading. By deferring image loads to only just before they're needed, the browser can prioritize more critical assets such as fonts, styles and JavaScript. By setting the <code>loading</code> property on an image to <code>lazy</code>, you instruct the browser to only load the image as it enters the viewport. For example, on an e-commerce site which renders a grid of products, this would mean that the page loads faster for visitors, and seamlessly fetches images below the fold, as a user scrolls down. This is <a href="https://caniuse.com/loading-lazy-attr">supported by all major browsers except Safari</a> which currently has this feature hidden behind an experimental flag.</p>
            <pre><code>&lt;img loading="lazy" … /&gt;</code></pre>
            
    <div>
      <h3>Hosting</h3>
      <a href="#hosting">
        
      </a>
    </div>
    <p>Finally, you can improve image loading by hosting all of a page's images together on the same first-party domain. If each image was hosted on a different domain, the browser would have to perform a DNS lookup, create a TCP connection and perform the TLS handshake for every single image. When they are all co-located on a single domain (especially so if that is the same domain as the page itself), the browser can re-use the connection which improves the speed it can load those images.</p>
    <div>
      <h2>Test your website</h2>
      <a href="#test-your-website">
        
      </a>
    </div>
    <p>Today, we’re excited to announce the launch of <a href="https://images.cloudflare.com/">Cloudflare’s Image Optimization Testing Tool</a>. Simply enter your website URL, and we’ll run a series of automated tests to determine if there are any possible improvements you could make in delivering optimal images to visitors.</p><p>We use WebPageTest and Lighthouse to calculate the Core Web Vitals on two versions of your page: one as the original, and one with Cloudflare's best-effort automatic optimizations. These optimizations are performed using a Cloudflare Worker in combination with our Image Resizing product, and will transform an image's format, quality, and dimensions.</p><p>We report key summary metrics about your webpage's performance, including the aforementioned Cumulative Layout Shift (CLS) and Largest Contentful Paint (LCP), as well as a detailed breakdown of each image on your page and the optimizations that can be made.</p>
    <div>
      <h2>Cloudflare Images</h2>
      <a href="#cloudflare-images">
        
      </a>
    </div>
    <p><a href="/announcing-cloudflare-images">Cloudflare Images</a> can help you to solve a number of the problems outlined in this post. By storing your images with Cloudflare and configuring a set of variants, we can deliver optimized images from our edge to your website or app. We automatically set the optimal image format and allow you to customize the dimensions and fit for your use-cases.</p><p>We're excited to see what you build with Cloudflare Images, and you can expect additional features and integrations in the near future. <a href="https://dash.cloudflare.com/sign-up/images">Get started with Images today from $6/month</a>.</p>
    <div>
      <h2>Watch on Cloudflare TV</h2>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <guid isPermaLink="false">35szDDTJbHZORdWjXxUTzz</guid>
            <dc:creator>Greg Brimble</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Cloudflare Images in Rust and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/building-cloudflare-images-in-rust-and-cloudflare-workers/</link>
            <pubDate>Wed, 15 Sep 2021 12:59:28 GMT</pubDate>
            <description><![CDATA[ Using Rust and Cloudflare Workers helps us quickly iterate and deliver product improvements over the coming weeks and months. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vts7EELfGtvXeEkzKzM3B/0bed2d7c96c8cdb043a810e2c64c96e9/image3-14.png" />
            
            </figure><p>This post explains how we implemented the Cloudflare Images product with reusable Rust libraries and Cloudflare Workers. It covers the technical design of <a href="https://developers.cloudflare.com/image-resizing/">Cloudflare Image Resizing</a> and Cloudflare Images. Using Rust and Cloudflare Workers helps us quickly iterate and deliver product improvements over the coming weeks and months.</p>
    <div>
      <h3>Reuse of code in Rusty image projects</h3>
      <a href="#reuse-of-code-in-rusty-image-projects">
        
      </a>
    </div>
    <p>We developed <a href="https://developers.cloudflare.com/image-resizing/">Image Resizing</a> in Rust. It's a web server that receives HTTP requests for images along with resizing options, fetches the full-size images from the origin, applies resizing and other image processing operations, compresses, and returns the HTTP response with the optimized image.</p><p>Rust makes it easy to split projects into libraries (called crates). The image processing and compression parts of Image Resizing are usable as libraries.</p><p>We also have a product called  <a href="/introducing-polish-automatic-image-optimizati/">Polish</a>, which is a Golang-based service that recompresses images in our cache. Polish was initially designed to run command-line programs like <code>jpegtran</code> and <code>pngcrush</code>. We took the core of Image Resizing and wrapped it in a command-line executable. This way, when Polish needs to apply lossy compression or generate WebP images or animations, it can use Image Resizing via a command-line tool instead of a third-party tool.</p><p>Reusing libraries has allowed us to easily unify processing between Image Resizing and Polish (for example, to ensure that both handle metadata and color profiles in the same way).</p><p>Cloudflare Images is another product we've built in Rust. It added support for a custom storage back-end, variants (size presets), support for signing URLs and more. We made it as a collection of Rust crates, so we can reuse pieces of it in other services running anywhere in our network. Image Resizing provides image processing for Cloudflare Images and shares libraries with Images to understand the new URL scheme, access the storage back-end, and database for variants.</p>
    <div>
      <h3>How Image Resizing works</h3>
      <a href="#how-image-resizing-works">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LlxMguVop5gSnpNNOszTn/b14d2635eabab9622b6b7dcbada2f408/image-resizing-diagram.png" />
            
            </figure><p>The Image Resizing service runs at the edge and is deployed on every server of the Cloudflare global network. Thanks to Cloudflare's global Anycast network, the closest Cloudflare data center will handle eyeball image resizing requests. Image Resizing is tightly integrated with the Cloudflare cache and handles eyeball requests only on a cache miss.</p><p>There are two ways to use Image Resizing. The default <a href="https://developers.cloudflare.com/image-resizing/url-format">URL scheme</a> provides an easy, declarative way of specifying image dimensions and other options. The other way is to use a JavaScript API in a <a href="https://developers.cloudflare.com/image-resizing/resizing-with-workers">Worker</a>. Cloudflare Workers give powerful programmatic control over every image resizing request.</p>
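            <p>As a sketch of the programmatic approach, a Worker might translate query parameters into resizing options and pass them through on the subrequest. The <code>cf.image</code> fetch options shown in the comment are part of the documented Workers API; the parsing helper and parameter names themselves are illustrative:</p>

```typescript
// Illustrative helper: map query parameters onto Image Resizing options.
type ResizeOptions = { width?: number; height?: number; fit?: string };

function parseResizeOptions(url: string): ResizeOptions {
  const params = new URL(url).searchParams;
  const options: ResizeOptions = {};
  const width = Number(params.get("width"));
  if (width > 0) options.width = width; // ignore missing or invalid values
  const height = Number(params.get("height"));
  if (height > 0) options.height = height;
  const fit = params.get("fit");
  if (fit) options.fit = fit;
  return options;
}

// Inside a Worker's fetch handler, the options would be forwarded like:
//   return fetch(originUrl, { cf: { image: parseResizeOptions(request.url) } });
```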
    <div>
      <h3>How Cloudflare Images work</h3>
      <a href="#how-cloudflare-images-work">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uCiQpHMXkjdKzBhvIruxh/1c1930777fee1539e201b30eb9bf530a/DES-3375-2.png" />
            
            </figure><p>Cloudflare Images consists of the following components:</p><ul><li><p>The Images core service that powers the public API to manage image assets.</p></li><li><p>The Image Resizing service responsible for image transformations and caching.</p></li><li><p>The Image delivery Cloudflare Worker responsible for serving images and passing corresponding parameters through to the Image Resizing service.</p></li><li><p>Image storage that provides access and storage for original image assets.</p></li></ul><p>To support Cloudflare Images scenarios for image transformations, we made several changes to the Image Resizing service:</p><ul><li><p>Added access to Cloudflare storage with original image assets.</p></li><li><p>Added access to variant definitions (size presets).</p></li><li><p>Added support for signing URLs.</p></li></ul>
    <div>
      <h3>Image delivery</h3>
      <a href="#image-delivery">
        
      </a>
    </div>
    <p>The primary use case for Cloudflare Images is to provide a simple and easy-to-use way of managing image assets. To cover egress costs, we provide image delivery through the Cloudflare-managed imagedelivery.net domain. It is configured with <a href="/tiered-cache-smart-topology/">Tiered Caching</a> to maximize the cache hit ratio for image assets. imagedelivery.net provides <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">image hosting</a> without the need to configure a custom domain to proxy through Cloudflare.</p><p>A Cloudflare Worker powers image delivery. It parses image URLs and passes the corresponding parameters to the Image Resizing service.</p>
    <div>
      <h3>How we store Cloudflare Images</h3>
      <a href="#how-we-store-cloudflare-images">
        
      </a>
    </div>
    <p>There are several places we store information on Cloudflare Images:</p><ul><li><p>image metadata in Cloudflare's core data centers</p></li><li><p>variant definitions in Cloudflare's edge data centers</p></li><li><p>original images in core data centers</p></li><li><p>optimized images in Cloudflare cache, physically close to eyeballs.</p></li></ul><p>Image variant definitions are stored and delivered to the edge using Cloudflare's distributed key-value store called <a href="/introducing-quicksilver-configuration-distribution-at-internet-scale/">Quicksilver</a>. We use a single source of truth for variants. The Images core service makes calls to Quicksilver to read and update variant definitions.</p><p>The rest of the information about the image is stored in the image URL itself: <a href="https://imagedelivery.net/">https://imagedelivery.net/</a><code>&lt;account id&gt;/&lt;image id&gt;/&lt;variant name&gt;</code></p><p><code>&lt;image id&gt;</code> contains a flag indicating whether it's publicly available or requires access verification. It's not feasible to store any image metadata in Quicksilver as the data volume would increase linearly with the number of images we host. Instead, we only allow a finite number of variants per account, so we responsibly utilize available disk space on the edge. The downside of storing image metadata as part of <code>&lt;image id&gt;</code> is that <code>&lt;image id&gt;</code> will change when access is changed.</p>
    <div>
      <h3>How we keep Cloudflare Images up to date</h3>
      <a href="#how-we-keep-cloudflare-images-up-to-date">
        
      </a>
    </div>
    <p>The only way to access images is through the use of variants. Each variant is a named <a href="https://developers.cloudflare.com/image-resizing/resizing-with-workers#fetch-options">image resizing configuration</a>. Once the image asset is fetched, we cache the transformed image in the Cloudflare cache. The critical question is how we keep processed images up to date. The answer is by purging the Cloudflare cache when necessary. There are two use cases:</p><ul><li><p>access to the image is changed</p></li><li><p>the variant definition is updated</p></li></ul><p>In the first instance, we purge the cache by calling a URL: <a href="https://imagedelivery.net/"><i>https://imagedelivery.net/</i></a><i>&lt;account id&gt;/&lt;image id&gt;</i></p><p>When the customer updates a variant, we issue a cache purge request by tag: <i>account-id/variant-name</i></p><p>To support cache purge by tag, the Image Resizing service adds the necessary tags for all transformed images.</p>
    <div>
      <h3>How we restrict access to Cloudflare Images</h3>
      <a href="#how-we-restrict-access-to-cloudflare-images">
        
      </a>
    </div>
    <p>The Image resizing service supports restricted access to images by using URL signatures with expiration. URLs are signed with an SHA-256 HMAC key. The steps to produce valid signatures are:</p><ol><li><p>Take the path and query string (the path starts with /).</p></li><li><p>Compute the path’s SHA-256 HMAC with the query string, using the Images' URL signing key as the secret. The key is configured in the Dashboard.</p></li><li><p>If the URL is meant to expire, compute the Unix timestamp (number of seconds since 1970) of the expiration time, and append <code>?exp=</code> and the timestamp as an integer to the URL.</p></li><li><p>Append <code>?</code> or <code>&amp;</code> to the URL as appropriate (<code>?</code> if it had no query string; <code>&amp;</code> if it had a query string).</p></li><li><p>Append <code>sig=</code> and the HMAC as hex-encoded 64 characters.</p></li></ol><p>A signed URL looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5M87aEfnYeWRuAJk9MJaI0/c8d289f755d39a2885fc247358f9a32a/signed-1.png" />
            
            </figure><p>A signed URL with an expiration timestamp looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cVOACVdzpkmfB6BHXsxk/23c586690ee6d72a0460f866b8f7bcdf/signed-2.png" />
            
            </figure><p>Signature of /hello/world URL with a secret ‘this is a secret’ is <code>6293f9144b4e9adc83416d1b059abcac750bf05b2c5c99ea72fd47cc9c2ace34</code>.</p><p><code>https://imagedelivery.net/hello/world?sig=6293f9144b4e9adc83416d1b059abcac750bf05b2c5c99ea72fd47cc9c2ace34</code></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5SZF6WjgpV8GjeqEFbT9Yk/516d7148a25154db914bd6f41f685a3b/JS.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nkNNkzLLNwZAYLSzt6NPI/3696a549ae7cee402d4d335419b88c19/Rust.png" />
            
            </figure>
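            <p>For the simple case without expiration, the signing steps above can be sketched in a few lines (assuming Node’s <code>crypto</code> module for the HMAC; treat this as an illustration of the scheme, not the exact service code):</p>

```typescript
import { createHmac } from "node:crypto";

// Sketch of URL signing as described above: HMAC the path (and query
// string, if any) with the signing key, then append the hex-encoded signature.
function signImageUrl(path: string, signingKey: string): string {
  const sig = createHmac("sha256", signingKey).update(path).digest("hex");
  const separator = path.includes("?") ? "&" : "?"; // step 4
  return `https://imagedelivery.net${path}${separator}sig=${sig}`; // step 5
}

const signed = signImageUrl("/hello/world", "this is a secret");
```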
    <div>
      <h3>Direct creator uploads with Cloudflare Worker and KV</h3>
      <a href="#direct-creator-uploads-with-cloudflare-worker-and-kv">
        
      </a>
    </div>
    <p>Similar to <a href="https://developers.cloudflare.com/stream/uploading-videos/direct-creator-uploads">Cloudflare Stream</a>, Images supports direct creator uploads, which allow users to upload images without API tokens. Direct creator uploads are typically used by web apps, client-side applications, or mobile apps where users upload content directly to Cloudflare Images.</p><p>Once again, we used our serverless platform to support direct creator uploads. A successful API call stores the account's information in Workers KV with the specified expiration date. A simple Cloudflare Worker then handles the upload URL: it reads the KV value and grants upload access only on a successful call to KV.</p>
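            <p>A rough sketch of that flow, with a <code>Map</code> standing in for Workers KV and all names purely illustrative:</p>

```typescript
import { randomUUID } from "node:crypto";

// Stand-in for Workers KV (the real store would also apply an expiration TTL).
const kv = new Map<string, { accountId: string }>();

// The Images API call stores a short-lived grant under a random upload id...
function createDirectUpload(accountId: string): string {
  const uploadId = randomUUID();
  kv.set(uploadId, { accountId });
  return `https://upload.example.com/${uploadId}`; // hypothetical upload URL
}

// ...and the upload Worker grants access only while that id is still in the
// store. Deleting on first use makes the upload URL one-time.
function authorizeUpload(uploadId: string): boolean {
  return kv.delete(uploadId);
}
```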
    <div>
      <h3>Future Work</h3>
      <a href="#future-work">
        
      </a>
    </div>
    <p>The Cloudflare Images product has an exciting roadmap. Let’s review what’s possible with the current architecture of Cloudflare Images.</p>
    <div>
      <h4>Resizing hints on upload</h4>
      <a href="#resizing-hints-on-upload">
        
      </a>
    </div>
    <p>At the moment, no image transformations happen on upload. That means we can serve the image globally once it is uploaded to Image storage. We are considering adding resizing hints on image upload. That won't necessarily schedule image processing in all cases but could provide a valuable signal to resize the most critical image variants. An example could be to <a href="/generate-avif-images-with-image-resizing/">generate an AVIF</a> variant for the most vital image assets.</p>
    <div>
      <h4>Serving images from custom domains</h4>
      <a href="#serving-images-from-custom-domains">
        
      </a>
    </div>
    <p>We think serving images from a domain we manage (with Tiered Caching) is a great default option for many customers. The downside is that loading Cloudflare Images requires an additional TLS negotiation on the client side, adding latency and impacting loading performance. On the other hand, serving Cloudflare Images from custom domains will be a viable option for customers who set up a website through Cloudflare. The good news is that we can support such functionality without radical changes to the current architecture.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The Cloudflare Images product runs on top of the Cloudflare global network. We built Cloudflare Images in Rust and Cloudflare Workers. This way, we can reuse Rust libraries across several products, such as Cloudflare Images, Image Resizing, and Polish. Cloudflare’s serverless platform is an indispensable tool for building Cloudflare products internally. If you are interested in building innovative products in Rust and Cloudflare Workers, <a href="https://www.cloudflare.com/careers/jobs/?department=Emerging%20Technology%20and%20Incubation&amp;location=default">we're hiring</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div><p></p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">21W0eo2QBR2GXgCMaVjCuY</guid>
            <dc:creator>Yevgen Safronov</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare Images can make your life easier]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-images-can-make-your-life-easier/</link>
            <pubDate>Wed, 15 Sep 2021 12:59:24 GMT</pubDate>
            <description><![CDATA[ Are you tired of buckets and never-ending egress fees? Cloudflare Images can help you create and maintain your entire image pipeline.  ]]></description>
            <content:encoded><![CDATA[ <p>Imagine how a customer would feel if they got to your website and it took forever to load the images you serve. That negative user experience can lead to lower overall site traffic and higher bounce rates.</p><p>The good news is that regardless of whether you need to store and serve 100,000 or one million images, Cloudflare Images gives you the tools you need to build an entire image pipeline from scratch.</p>
    <div>
      <h3><b>Customer pains</b></h3>
      <a href="#customer-pains">
        
      </a>
    </div>
    <p>After speaking with many Cloudflare customers, we quickly understood that whether you are an e-commerce retailer, a blogger, or run a platform for creators, everyone shares the same problems:</p><ul><li><p><b>Egress fees.</b> Each time an image needs to go from Product A (storage) to Product B (optimization) and then to Product C (delivery), there’s a fee. Multiply this by the millions of images clients serve per day, and it’s easy to understand why their bills are so high.</p></li><li><p><b>Buckets everywhere.</b> Our customers’ current pipelines involve uploading images to and from services like AWS, then using open source products to optimize the images, and finally, to serve the images, storing them in yet another cloud bucket, since CDNs don’t offer long-term storage. This means there is a dependency on buckets at each step of the pipeline.</p></li><li><p><b>Load times.</b> When an image is not correctly optimized, it can be much larger than needed, resulting in an unnecessarily long download time. This can lead to a bad end-user experience and a loss of overall site traffic.</p></li><li><p><b>High maintenance.</b> To maintain an image pipeline, companies need to rely on several AWS and open source products, plus an engineering team to build and maintain that pipeline. This takes engineering focus away from the product itself.</p></li></ul>
    <div>
      <h3>How can Cloudflare Images help?  </h3>
      <a href="#how-can-cloudflare-images-help">
        
      </a>
    </div>
    
    <div>
      <h4>Zero Egress Costs</h4>
      <a href="#zero-egress-costs">
        
      </a>
    </div>
    <p>The majority of cloud providers allow you to store images for a small price, but the bill starts to grow every time you need to retrieve that image to optimize and deliver. The good news is that with Cloudflare Images customers don’t need to worry about egress costs, since all storage, optimization and delivery are part of the same tool.</p>
    <div>
      <h4>The buckets stop with Cloudflare Images</h4>
      <a href="#the-buckets-stop-with-cloudflare-images">
        
      </a>
    </div>
    <p>One small step for humankind, one giant leap for image enthusiasts!</p><p>With Cloudflare Images, the bucket pain stops now, and customers have two options:</p><ol><li><p>One image upload can generate up to 100 variants, which allows developers to stop placing image sizes in URLs. This way, if a site gets redesigned, there is no need to change all the image sizes because you already have all the variants you need stored in Cloudflare Images.</p></li><li><p>Give your users one-time permission to upload a file. This way developers don’t need to write additional code to move files from users into a bucket — they will be automatically uploaded into your Cloudflare storage.</p></li></ol>
    <div>
      <h4>Minimal engineering effort</h4>
      <a href="#minimal-engineering-effort">
        
      </a>
    </div>
    <p>Have you ever dreamed about your team focusing entirely on product development instead of maintaining infrastructure? We understand, and this is why we created a straightforward set of APIs as well as a UI in the Cloudflare Dashboard. This allows your team to serve and optimize images without the need to set up and maintain a pipeline from scratch.</p><p>Once you get access to Cloudflare Images your team can start:</p><ul><li><p>Uploading, deleting and updating images via API.</p></li><li><p>Setting up preferred variants.</p></li><li><p>Editing with Image Resizing both with the UI and API.</p></li><li><p>Serving an image with one click.</p></li></ul>
    <div>
      <h4>Process images on the fly</h4>
      <a href="#process-images-on-the-fly">
        
      </a>
    </div>
    <p>We all know that Google and many other search engines use loading speed as one of their ranking factors; Cloudflare Images helps you get to the top of that list. We automatically detect what browser your clients are using and serve the most optimized version of the image, so that you don’t need to worry about file extensions, configuring origins for your image sets, or even cache hit rates.</p><p>Curious to have a sneak peek at Cloudflare Images? <a href="https://dash.cloudflare.com/sign-up/images">Sign up now!</a></p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1UuNIF9TzTuuPTxR7cu77A</guid>
            <dc:creator>Rita Soares</dc:creator>
        </item>
        <item>
            <title><![CDATA[Vary for Images: Serve the Correct Images to the Correct Browsers]]></title>
            <link>https://blog.cloudflare.com/vary-for-images-serve-the-correct-images-to-the-correct-browsers/</link>
            <pubDate>Mon, 13 Sep 2021 12:58:11 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce support for Vary, an HTTP header that ensures different content types can be served to user-agents with differing capabilities. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5E8mZjU7F1SA7fPkNnwTJS/0181bcb724d8411675ad222075fe5610/Vary-Support.png" />
            
            </figure><p>Today, we’re excited to announce support for <b>Vary</b>, an HTTP header that ensures different content types can be served to user-agents with differing capabilities.</p><p>At Cloudflare, we’re obsessed with performance. Our job is to ensure that content gets from our network to visitors quickly, and also that the <i>correct</i> content is served. Serving incompatible or unoptimized content burdens website visitors with a poor experience while needlessly stressing a website’s infrastructure. Lots of traffic served from our edge consists of image files, and for these requests and responses, serving optimized image formats often results in significant performance gains. However, as browser technology has advanced, so too has the complexity required to serve optimized image content to browsers all with differing capabilities — not all browsers support <i>all</i> image formats! Providing features to ensure that the correct images are served to the correct requesting browser, device, or screen is important!</p>
    <div>
      <h3>Serving images on the modern web</h3>
      <a href="#serving-images-on-the-modern-web">
        
      </a>
    </div>
    <p>In the web’s early days, if you wanted to serve a full color image, JPEGs reigned supreme and were universally supported. Since then, the state of the art in image encoding has advanced by leaps and bounds, and there are now increasingly more advanced and efficient codecs like WebP and AVIF that promise reduced file sizes and improved quality.</p><p>This sort of innovation is exciting, and delivers real improvements to user experience. However, it makes the job of web servers and edge networks more complicated. As an example, until very recently, <a href="https://insanelab.com/blog/web-development/webp-web-design-vs-jpeg-gif-png/">WebP</a> image files were <a href="https://www.keycdn.com/support/webp-browser-support#:~:text=Safari%20will%20support%20WebP%20in,almost%20be%20completely%20globally%20supported.">not universally supported</a> by commonly used browsers. A specific browser not supporting an image file becomes a problem when “intermediate caches”, like Cloudflare, are involved in delivering content.</p><p>Let’s say, for example, that a website wants to provide the best experience to whatever browser requests the site. A desktop browser sends a request to the website and the origin server responds with the website’s content including images. This response is cached by a CDN prior to getting sent back to the requesting browser.</p><p>Now let’s say a mobile browser comes along and requests that same website with those images. In the situation where a cached image is a WebP file, and WebP is not supported by the mobile browser, the website will not load properly because the content returned from cache is not supported by the mobile browser. That’s a problem.</p><p>To help solve this issue, today we’re excited to announce our support of the Vary header for images.</p>
    <div>
      <h3>How Vary works</h3>
      <a href="#how-vary-works">
        
      </a>
    </div>
    <p>Vary is an HTTP response header that allows origins to serve variants of the same content from a single URL, and have intermediate caches serve the correct variant to each user-agent that comes along.</p><p>Smashing Magazine has an excellent deep dive on how Vary negotiation works <a href="https://www.smashingmagazine.com/2017/11/understanding-vary-header/">here</a>.</p><p>When browsers send a request for a website, they include a variety of request headers. A fairly common example might look something like:</p>
            <pre><code>GET /page.html HTTP/1.1
Host: example.com
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36
Accept-Encoding: gzip, deflate, br</code></pre>
            <p>As we can see above, the browser sends a lot of information in these headers along with the GET request for the URL. What’s important for Vary for Images is the <i>Accept</i> header. The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept"><i>Accept</i> header</a> tells the origin what sort of content the browser is capable of handling (file types, etc.) and provides a list of content preferences.</p><p>When the origin gets the request, it sees the <i>Accept</i> header which details the content preference for the browser’s request. In the origin’s response, Vary tells the browser that content returned was different depending on the value of the <i>Accept</i> header in the request. Thus if a different browser comes along and sends a request with different <i>Accept</i> header values, this new browser can get a different response from the origin. An example origin response may look something like:</p>
            <pre><code>HTTP/1.1 200 OK
Content-Length: 123456
Vary: Accept</code></pre>
            
    <div>
      <h3>How Vary works with Cloudflare’s cache</h3>
      <a href="#how-vary-works-with-cloudflares-cache">
        
      </a>
    </div>
    <p>Now, let’s add Cloudflare to the mix. Cloudflare sits in between the browser and the origin in the above example. When Cloudflare receives the origin’s response, we cache the specific image variant so that subsequent requests from browsers with the same image preferences can be served from cache. This also means that serving multiple image variants for the same asset will create distinct cache entries.</p>
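<p>As a rough illustration of the caching behavior described above, here is a toy cache keyed on the URL plus a normalized set of accepted image formats. This is a hypothetical simplification for illustration, not Cloudflare's internal cache implementation.</p>

```javascript
// Toy variant cache: entries are keyed on the URL plus the normalized set of
// image formats the request's Accept header advertises, so each variant of
// one asset gets its own cache entry.
function normalizeAccept(accept) {
  const varied = ["image/avif", "image/webp"];
  return varied.filter((fmt) => (accept || "").includes(fmt)).join(",") || "default";
}

class VariantCache {
  constructor() { this.entries = new Map(); }
  key(url, accept) { return `${url}#${normalizeAccept(accept)}`; }
  get(url, accept) { return this.entries.get(this.key(url, accept)); }
  put(url, accept, body) { this.entries.set(this.key(url, accept), body); }
}

const cache = new VariantCache();
cache.put("/hero.jpg", "image/avif,image/webp,*/*", "<avif bytes>");
cache.put("/hero.jpg", "image/webp,*/*", "<webp bytes>");
cache.put("/hero.jpg", "*/*", "<jpeg bytes>");

// One URL, three distinct cache entries -- one per normalized Accept value.
console.log(cache.entries.size); // 3
```

<p>Normalizing the Accept header before keying keeps the number of entries bounded: two browsers with cosmetically different Accept strings but the same image-format support hit the same entry.</p>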
    <div>
      <h3>Accept header normalization</h3>
      <a href="#accept-header-normalization">
        
      </a>
    </div>
    <p>Caching variants in intermediate caches can be difficult to get right. Naive caching of variants can cause problems by serving incorrect or unsupported image variants to browsers. Solutions that reduce the potential for caching incorrect variants generally provide those safeguards at the expense of performance.</p><p>For example, with content negotiation, the correct variant is delivered to the requesting browser through multiple rounds of requests and responses. The browser could send a request to the origin asking for a list of available resource variants. When the origin responds with the list, the browser can make an additional request for the desired resource from that list, which the server would then respond to. These redundant calls to narrow down which types of content the browser accepts and the server has available can cause performance delays.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71lg6VkOQspb3Sy4fLvCtR/a8b1a42b14a7cb8855312a7b539c21f3/Vary-Images-Diagram.png" />
            
            </figure><p>Vary for Images reduces the need for these redundant negotiations with the origin by parsing the request’s <i>Accept</i> header and sending that on to the origin to ensure that the origin knows exactly what content it needs to deliver to the browser. Additionally, because the expected variant values can be set in Cloudflare’s API (see below), we make an end-run around the negotiation process: we are sure what to ask for and what to expect from the origin. This reduces the needless back-and-forth between browsers and servers.</p>
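<p>A minimal sketch of that selection step might look like the following. The variant configuration mirrors the API examples in this post; the selection logic itself is a simplified assumption, not Cloudflare's exact algorithm.</p>

```javascript
// Pick the variant to request for a URL, given the per-extension variant
// types configured for the zone and the browser's Accept header.
const configuredVariants = {
  jpeg: ["image/webp", "image/avif"],
  jpg: ["image/webp", "image/avif"],
};

function selectVariant(path, acceptHeader) {
  const ext = path.split(".").pop().toLowerCase();
  const candidates = configuredVariants[ext] || [];
  // Serve the first configured type the browser advertises in Accept;
  // otherwise fall back to the original image.
  for (const type of candidates) {
    if (acceptHeader.includes(type)) return type;
  }
  return "original";
}

console.log(selectVariant("/photo.jpg", "image/avif,image/webp,*/*;q=0.8")); // image/webp
console.log(selectVariant("/photo.jpg", "image/webp,*/*;q=0.8"));            // image/webp
console.log(selectVariant("/photo.jpg", "*/*"));                             // original
```

<p>Because the configured variant list is known up front, no extra round trip to the origin is needed to decide which variant a given browser should receive.</p>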
    <div>
      <h3>How to Enable Vary for Images</h3>
      <a href="#how-to-enable-vary-for-images">
        
      </a>
    </div>
    <p>You can enable Vary for Images from Cloudflare’s API for Pro, Business, and Enterprise Customers.</p><p>Things to keep in mind when using Vary:</p><ul><li><p>Vary for Images enables varying on the following file extensions: avif, bmp, gif, jpg, jpeg, jp2, jpg2, png, tif, tiff, webp. These extensions can have multiple variants served so long as the origin server sends the <code>Vary: Accept</code> response header.</p></li><li><p>If the origin server sends <code>Vary: Accept</code> but does not serve the expected variant, the response will not be cached. This will be indicated with the BYPASS cache status in the response headers.</p></li><li><p>The list of variant types the origin serves for each extension must be configured so that Cloudflare can decide which variant to serve without having to contact the origin server.</p></li></ul>
    <div>
      <h3>Enabling Vary in action</h3>
      <a href="#enabling-vary-in-action">
        
      </a>
    </div>
    <p>Enabling Vary functionality currently requires the use of the Cloudflare API. Here’s an example of how to enable variant support for a zone that wants to serve JPEGs in addition to WebP and AVIF variants for jpeg and jpg extensions.</p><p><b>Create a variants rule:</b></p>
            <pre><code>curl -X PATCH \
"https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/cache/variants" \
	-H "X-Auth-Email: user@example.com" \
	-H "X-Auth-Key: 3xamp1ek3y1234" \
	-H "Content-Type: application/json" \
	--data '{"value":{"jpeg":["image/webp","image/avif"],"jpg":["image/webp","image/avif"]}}'</code></pre>
            <p><b>Modify to only allow WebP variants:</b></p>
            <pre><code>curl -X PATCH \
"https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/cache/variants" \
	-H "X-Auth-Email: user@example.com" \
	-H "X-Auth-Key: 3xamp1ek3y1234" \
	-H "Content-Type: application/json" \
	--data '{"value":{"jpeg":["image/webp"],"jpg":["image/webp"]}}'</code></pre>
            <p><b>Delete the rule:</b></p>
            <pre><code>curl -X DELETE \
"https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/cache/variants" \
	-H "X-Auth-Email: user@example.com" \
	-H "X-Auth-Key: 3xamp1ek3y1234"</code></pre>
            <p><b>Get the rule:</b></p>
            <pre><code>curl -X GET \
"https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/cache/variants" \
	-H "X-Auth-Email: user@example.com" \
	-H "X-Auth-Key: 3xamp1ek3y1234"</code></pre>
            
    <div>
      <h3>Purging variants</h3>
      <a href="#purging-variants">
        
      </a>
    </div>
    <p>Any purge of varied images will purge <b>all</b> content variants for that URL. That way, if the image changes, you can easily update the cache with a single purge versus chasing down how many potential out-of-date variants may exist. This behavior is true regardless of purge type (single file, tag, or hostname) used.</p>
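<p>In a toy model of a variant-keyed cache, that purge behavior looks like this (a hypothetical simplification, not Cloudflare's purge implementation):</p>

```javascript
// Purging a URL drops every cached variant of that URL, leaving other
// assets untouched. Keys here are "url#variant", an assumed layout.
const cache = new Map([
  ["/hero.jpg#image/avif", "<avif bytes>"],
  ["/hero.jpg#image/webp", "<webp bytes>"],
  ["/hero.jpg#default", "<jpeg bytes>"],
  ["/logo.png#default", "<png bytes>"],
]);

function purgeUrl(url) {
  for (const key of [...cache.keys()]) {
    if (key.startsWith(url + "#")) cache.delete(key);
  }
}

purgeUrl("/hero.jpg");
console.log(cache.size); // 1 -- only /logo.png#default remains
```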
    <div>
      <h3>Other image optimization tools available at Cloudflare</h3>
      <a href="#other-image-optimization-tools-available-at-cloudflare">
        
      </a>
    </div>
    <p>Providing an additional option for customers to optimize the delivery of images also allows Cloudflare to support more customer configurations. For other ways Cloudflare can help you serve images to visitors quickly and efficiently, you can check out:</p><ul><li><p><a href="https://developers.cloudflare.com/cache/best-practices/activate-polish">Polish</a> — Cloudflare’s automatic product that strips image metadata and applies compression. Polish accelerates image downloads by reducing image size.</p></li><li><p><a href="https://developers.cloudflare.com/image-resizing/">Image Resizing</a> — Cloudflare’s image resizing product works as a proxy on top of the Cloudflare edge cache to apply adjustments to an image’s size and quality.</p></li><li><p><a href="/announcing-cloudflare-images-beta/">Cloudflare Images</a> — Cloudflare’s all-in-one service to host, resize, optimize, and deliver all of your website’s images.</p></li></ul>
    <div>
      <h3>Try Vary for Images Out</h3>
      <a href="#try-vary-for-images-out">
        
      </a>
    </div>
    <p>Vary for Images provides options that ensure the best images are served to the browser based on the browser’s capabilities and preferences. If you’re looking for more control over how your images are delivered to browsers, we encourage you to try this new feature out.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">5rh1VV7hnb9Bg5z33hPVd1</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Cloudflare Images beta to simplify your image pipeline]]></title>
            <link>https://blog.cloudflare.com/announcing-cloudflare-images-beta/</link>
            <pubDate>Tue, 20 Apr 2021 17:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we are announcing the beta of Cloudflare Images: a simple service to store, resize, optimize, and deliver images at scale.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we are announcing the beta of Cloudflare Images: a simple service to store, resize, optimize, and deliver images at scale.</p><p>In 2018, we launched Stream to provide a single product that could be used to store, encode, and deliver videos. With Cloudflare Images, we are doing for images what Stream did for videos. Just like Stream, Cloudflare Images eliminates the need to think about storage buckets, egress costs, and many other common problems that are solved for you out of the box. Whether you are building an ecommerce platform with millions of high-res product pictures and videos or a new app for creators, you can build your entire media pipeline by combining Cloudflare Images and Stream.</p>
    <div>
      <h2>Fundamental questions for storing and serving images</h2>
      <a href="#fundamental-questions-for-storing-and-serving-images">
        
      </a>
    </div>
    <p>Any time you are building infrastructure for image storage and processing, there are four fundamental questions you must answer:</p><ol><li><p>“Where do we store images?”</p></li><li><p>“How do we secure, resize, and optimize the images for different use cases?”</p></li><li><p>“How do we serve the images to our users reliably?”</p></li><li><p>“How do we do all of these things at scale while having predictable and affordable pricing, especially during spikes?”</p></li></ol><p>Cloudflare Images has a straightforward set of APIs and simple pricing structure that answers all of these pesky questions. We built Images so your team can spend less energy maintaining infrastructure and more time focusing on what makes your product truly special.</p>
    <div>
      <h2>Current state of image infrastructure</h2>
      <a href="#current-state-of-image-infrastructure">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3JMlXeae2Kmb12pBqL8UqD/cd736adf75910be35d4d471ce9c32005/image4-12.png" />
            
            </figure><p>We talked to many Cloudflare customers who are using Cloudflare to serve millions of images every day. We heard two recurring themes. First, customers wished there was a simpler way for them to securely store, resize, and serve the images. Their current infrastructure generally involves using product A for storage, product B for resizing, and product C for the actual delivery. Combining these products together (often from multiple vendors) quickly becomes messy with multiple points of failure. Moreover, maintaining this infrastructure requires ongoing monitoring and tweaks.</p><p>Second, we heard that customers often end up with an ever-growing egress bill due to multiple vendors and products. It blew our minds that the egress bill can be a multiple of the storage cost itself. Every time your pictures move from <i>product A</i> (storage provider) to <i>product B</i> (resizing service) to <i>product C</i> (the CDN), you generally pay an egress cost which can quickly add up depending on the number of pictures and their variants. Multiplied by tens of millions of images and variants, this cost can add up to tens of thousands of dollars per month.</p>
    <div>
      <h2>Why Cloudflare Images</h2>
      <a href="#why-cloudflare-images">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kvSceRgjdPwh6wa0SLOiE/0c599be3620370c6a69065551b40d181/image2-27.png" />
            
            </figure>
    <div>
      <h3>Eliminate egress cost and storage bucket hell</h3>
      <a href="#eliminate-egress-cost-and-storage-bucket-hell">
        
      </a>
    </div>
    <p>Each time you upload an image to Cloudflare Images, you receive an <i>:image_id</i>. There are no buckets or folders to manage for the originals and the variants. And because of Images’ built-in support for resizing and delivery, there is no egress cost. If you have internal metadata that you’d like to associate with a picture, you can set the <i>meta</i> field for every upload to any arbitrary JSON value.</p>
    <div>
      <h3>Simple APIs catered to your needs</h3>
      <a href="#simple-apis-catered-to-your-needs">
        
      </a>
    </div>
    <p>When talking to customers, we saw two main patterns for how they would like to deliver images:</p><ol><li><p>Upload an image and get an <code>:image_uid</code> back that allows future operations on the image. In this case, the image URL would be <a href="https://imagedelivery.net/small-thumbnail/:image_uid">https://imagedelivery.net/small-thumbnail/:image_uid</a></p></li><li><p>Upload images with the filename as their main identifier, e.g., the filename reflects a SKU. In this case, it is up to you to make sure there are no duplicate filenames, as they would be rejected. Here the image URL would be <a href="https://imagedelivery.net/small-thumbnail/:account_hash/:filename">https://imagedelivery.net/small-thumbnail/:account_hash/:filename</a></p></li></ol>
    <div>
      <h3>Resize and secure your pictures with Variants</h3>
      <a href="#resize-and-secure-your-pictures-with-variants">
        
      </a>
    </div>
    <p>Cloudflare Images supports Variants. When you create a variant, you can define properties including variant name, width, height, and whether the variant should be publicly accessible. You can then associate an image with one or more variants using the UI or the API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SdAe27QQBBqkyogHVRKP2/875638cbcfbf4ba7d56fc150505ef9a2/image1-33.png" />
            
            </figure><p>Let’s say you are storing user profile pictures. You could define a variant called “<i>profile-thumbnail</i>” and configure that variant to serve images of a fixed width and height. Once you have a variant, you can associate the profile pictures with the <i>profile-thumbnail</i> variant. Whenever you need to display a profile picture in your app, you simply call <a href="https://imagedelivery.net/profile-thumbnail/:image_uid">https://imagedelivery.net/profile-thumbnail/:image_uid</a> or <a href="https://imagedelivery.net/profile-thumbnail/:account_hash/:filename">https://imagedelivery.net/profile-thumbnail/:account_hash/:filename</a>.</p><p>Variants also offer access control. You might only want logged-in users to view the larger version of the profile pictures. You could create another variant called <i>large-profile-picture</i> and make it require a signed URL token. When a user tries to access the large profile picture with a URL such as <a href="https://imagedelivery.net/large-profile-picture/:image_uid">https://imagedelivery.net/large-profile-picture/:image_uid</a>, the request will fail because there is no valid token provided.</p><p>An indirect upside of using variants is that your organization has a single source of truth for the different ways you are using images across your apps. Different teams can create variants for their use cases, enabling you to audit the security and optimization settings for different types of pictures from one central place. We learned that as our customers' products grow in complexity, separate teams may be responsible for handling various aspects of the product. For example, one team may be responsible for user profile pictures and the different variants associated with it. Another team may be responsible for creator uploads and maintaining different variations that are available to the public and to paid members. Over time, organizations can lose track of this logic with no single source of truth. 
With variants, this logic is clearly laid out and can serve as the source of truth for all the ways you are securing and optimizing different types of image uploads in your product.</p><p>There is no additional cost for using variants. Every picture uploaded to Images can be associated with up to five variants. You will be able to associate an image with a variant using the UI or the API.</p>
    <div>
      <h3>Intelligent delivery for every device and use case</h3>
      <a href="#intelligent-delivery-for-every-device-and-use-case">
        
      </a>
    </div>
    <p>Cloudflare Images automatically serves the most optimized version of the image. You no longer need to worry about things like file extensions. When a client requests a picture hosted on Cloudflare Images, we automatically identify the ideal supported format at the Cloudflare edge and serve it to the client from the edge. For example, 93.55% of all users use a web browser that supports WebP. For those users, Images would automatically serve WebP images. To the remaining users, Images would serve PNGs (and in very rare cases where neither WebP nor PNG is supported, it would serve JPGs). In the future, we plan to automatically support AVIF for highly requested images.</p><p>When you use Images, you no longer need to worry about cache hit rates, image file types, or configuring origins for your image assets.</p>
    <div>
      <h2>Simple pricing</h2>
      <a href="#simple-pricing">
        
      </a>
    </div>
    <p>To use Cloudflare Images, you will pay a fixed monthly fee for every 100,000 images stored in Cloudflare Images (up to 10MB per image). And at the end of each month, you pay for the number of images served. There are no additional costs for resizing, egress, or optimized routing.</p>
    <div>
      <h2>Request an Invite</h2>
      <a href="#request-an-invite">
        
      </a>
    </div>
    <p>If you want to be part of Cloudflare Images beta, <a href="https://docs.google.com/forms/d/1x1caSSYQn10dRjxNLJlG-MdHgLnUa2mnR6iUpa2ahxI">request an invite</a>. We will start inviting a limited set of users to try out Cloudflare Images in the coming weeks. Pricing and developer docs for Images will be posted at the time we start sending invites.</p><p>We can’t wait to see what you build using Cloudflare Images!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Images]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1FAshpEwIB7Rsm85VyRF89</guid>
            <dc:creator>Zaid Farooqui</dc:creator>
            <dc:creator>Marc Lamik</dc:creator>
        </item>
    </channel>
</rss>