
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 15:04:12 GMT</lastBuildDate>
        <item>
            <title><![CDATA[State-of-the-art image generation Leonardo models and text-to-speech Deepgram models now available in Workers AI]]></title>
            <link>https://blog.cloudflare.com/workers-ai-partner-models/</link>
            <pubDate>Wed, 27 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're expanding Workers AI with new partner models from Leonardo.Ai and Deepgram. Start using state-of-the-art image generation models from Leonardo and real-time TTS and STT models from Deepgram.  ]]></description>
<content:encoded><![CDATA[ <p>When we first launched <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a>, we made a bet that AI models would get faster and smaller. We built our infrastructure around this hypothesis, adding specialized GPUs to our datacenters around the world that can serve inference to users as fast as possible. We created our platform to be as general as possible, but we also identified niche use cases that fit our infrastructure well, such as low-latency image generation or real-time audio voice agents. To lean into those use cases, we’re bringing on new models that will make it easier to develop these applications.</p><p>Today, we’re excited to announce that we are expanding our model catalog to include closed-source partner models that fit these use cases. We’ve partnered with <a href="http://leonardo.ai"><u>Leonardo.Ai</u></a> and <a href="https://deepgram.com/"><u>Deepgram</u></a> to bring their latest and greatest models to Workers AI, hosted on Cloudflare’s infrastructure. Leonardo and Deepgram both have models with a great speed-to-performance ratio that suit the infrastructure of Workers AI. We’re starting off with these great partners, but expect our catalog to expand to other partner models as well.</p><p>The benefit of using these models on Workers AI is that we offer more than a standalone inference service: an entire suite of developer products allows you to build whole applications around AI. If you’re building an image generation platform, you could use Workers to <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">host the application logic</a>, Workers AI to generate the images, R2 for storage, and Images for serving and transforming media. 
If you’re building real-time voice agents, we offer WebRTC and WebSocket support via Workers; speech-to-text, text-to-speech, and turn detection models via Workers AI; and an orchestration layer via Cloudflare Realtime. All in all, we want to lean into use cases where we think Cloudflare has a unique advantage, with developer tools to back it up, so that you can build the best AI applications on top of our holistic Developer Platform.</p>
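<p>As a minimal sketch of that composition (the binding names <code>env.AI</code> and <code>env.IMAGE_BUCKET</code> are illustrative assumptions, not prescribed configuration), a Worker helper might generate an image and persist it to R2 in one pass:</p>

```javascript
// Hypothetical helper for a Worker with a Workers AI binding and an R2 bucket
// binding. Binding names (env.AI, env.IMAGE_BUCKET) are assumptions for this sketch.
async function generateAndStore(env, prompt, key) {
  // Run the Leonardo image model on Workers AI; the result is binary image data.
  const image = await env.AI.run("@cf/leonardo/phoenix-1.0", { prompt });

  // Persist the generated image in R2 so it can be served (or transformed) later.
  await env.IMAGE_BUCKET.put(key, image);
  return key;
}
```

<p>From there, Images (or a plain R2 read in the same Worker) can serve the stored object back to users.</p>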
    <div>
      <h2>Leonardo Models</h2>
      <a href="#leonardo-models">
        
      </a>
    </div>
    <p><a href="https://www.leonardo.ai"><u>Leonardo.Ai</u></a> is a generative AI media lab that trains its own models and hosts a platform for customers to create generative media. The Workers AI team has been working with Leonardo for a while now and has experienced the magic of their image generation models firsthand. We’re excited to bring on two image generation models from Leonardo: @cf/leonardo/phoenix-1.0 and @cf/leonardo/lucid-origin.</p><blockquote><p><i>“We’re excited to enable Cloudflare customers a new avenue to extend and use our image generation technology in creative ways such as creating character images for gaming, generating personalized images for websites, and a host of other uses... all through the Workers AI and the Cloudflare Developer Platform.” - </i><b><i>Peter Runham</i></b><i>, CTO, </i><a href="http://leonardo.ai"><i><u>Leonardo.Ai </u></i></a></p></blockquote><p>The Phoenix model is trained from the ground up by Leonardo, excelling at things like text rendering and prompt coherence. The full image generation request took 4.89s end-to-end for a 25-step, 1024x1024 image.</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/leonardo/phoenix-1.0 \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: application/json' \
  --data '{
    "prompt": "A 1950s-style neon diner sign glowing at night that reads '\''OPEN 24 HOURS'\'' with chrome details and vintage typography.",
    "width":1024,
    "height":1024,
    "steps": 25,
    "seed":1,
    "guidance": 4,
    "negative_prompt": "bad image, low quality, signature, overexposed, jpeg artifacts, undefined, unclear, Noisy, grainy, oversaturated, overcontrasted"
}'
</code></pre>
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1q7ndHYrwLQqqAdX6kGEkl/96ece588cf82691fa8e8d11ece382672/BLOG-2903_2.png" />
          </figure><p>The Lucid Origin model is a recent addition to Leonardo’s family of models and is great at generating photorealistic images. The image took 4.38s to generate end-to-end at 25 steps and a 1024x1024 image size.</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/leonardo/lucid-origin \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: application/json' \
  --data '{
    "prompt": "A 1950s-style neon diner sign glowing at night that reads '\''OPEN 24 HOURS'\'' with chrome details and vintage typography.",
    "width":1024,
    "height":1024,
    "steps": 25,
    "seed":1,
    "guidance": 4,
    "negative_prompt": "bad image, low quality, signature, overexposed, jpeg artifacts, undefined, unclear, Noisy, grainy, oversaturated, overcontrasted"
}'
</code></pre>
            
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26VKWD8ua6Pe2awQWRnF7n/bb42c9612b08269af4ef38df39a2ed30/BLOG-2903_3.png" />
          </figure>
    <div>
      <h2>Deepgram Models</h2>
      <a href="#deepgram-models">
        
      </a>
    </div>
    <p>Deepgram is a voice AI company that develops its own audio models, allowing users to interact with AI through a natural interface for humans: voice. Voice is an exciting interface because it carries higher bandwidth than text: it includes additional speech signals like pacing, intonation, and more. The Deepgram models that we’re bringing onto our platform are audio models that perform extremely fast speech-to-text and text-to-speech inference. Running on Workers AI, these models showcase our unique infrastructure, so customers can build low-latency voice agents and more.</p><blockquote><p><i>"By hosting our voice models on Cloudflare's Workers AI, we're enabling developers to create real-time, expressive voice agents with ultra-low latency. Cloudflare's global network brings AI compute closer to users everywhere, so customers can now deliver lightning-fast conversational AI experiences without worrying about complex infrastructure." - </i><i><b>Adam Sypniewski</b></i><i>, CTO, Deepgram</i></p></blockquote><p><a href="https://developers.cloudflare.com/workers-ai/models/nova-3"><u>@cf/deepgram/nova-3</u></a> is a speech-to-text model that can quickly transcribe audio with high accuracy. <a href="https://developers.cloudflare.com/workers-ai/models/aura-1"><u>@cf/deepgram/aura-1</u></a> is a text-to-speech model that is context aware and can apply natural pacing and expressiveness based on the input text. The newer Aura 2 model will be available on Workers AI soon. We’ve also improved the experience of sending binary mp3 files to Workers AI, so you don’t have to convert them into a Uint8Array as you had to previously. Along with our Realtime announcements (coming soon!), these audio models are the key to enabling customers to build voice agents directly on Cloudflare.</p><p>With the AI binding, a call to the Nova 3 speech-to-text model would look like this:</p>
            <pre><code>const URL = "https://www.some-website.com/audio.mp3";
const mp3 = await fetch(URL);
 
const res = await env.AI.run("@cf/deepgram/nova-3", {
    "audio": {
      body: mp3.body,
      contentType: "audio/mpeg"
    },
    "detect_language": true
  });
</code></pre>
            <p>With the REST API, it would look like this:</p>
            <pre><code>curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/deepgram/nova-3?detect_language=true' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: audio/mpeg' \
  --data-binary @/path/to/audio.mp3</code></pre>
            <p>We’ve also added WebSocket support to the Deepgram models, which you can use to keep a connection to the inference server alive and use it for bidirectional input and output. To use the Nova model with WebSocket support, check out our <a href="https://developers.cloudflare.com/workers-ai/models/nova-3"><u>Developer Docs</u></a>.</p><p>All the pieces work together so that you can:</p><ol><li><p><b>Capture audio</b> with Cloudflare Realtime from any WebRTC source</p></li><li><p><b>Pipe it</b> via WebSocket to your processing pipeline</p></li><li><p><b>Transcribe</b> with Deepgram audio models running on Workers AI</p></li><li><p><b>Process</b> with your LLM of choice through a model hosted on Workers AI or proxied via <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a></p></li><li><p><b>Orchestrate</b> everything with Realtime Agents</p></li></ol>
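<p>Inside a Worker, steps 3 and 4 above can be sketched as a single function. This is a hedged sketch: the transcript field path is an assumption based on Deepgram’s usual response shape, so check the model docs for the exact schema.</p>

```javascript
// Hypothetical sketch: transcribe audio with Nova 3, then answer with an LLM.
// env.AI is the Workers AI binding; the STT response field names are assumptions.
async function transcribeAndRespond(env, audioBody) {
  // Step 3: speech-to-text with Deepgram Nova 3.
  const stt = await env.AI.run("@cf/deepgram/nova-3", {
    audio: { body: audioBody, contentType: "audio/mpeg" },
  });
  const transcript =
    stt.results?.channels?.[0]?.alternatives?.[0]?.transcript ?? "";

  // Step 4: process the transcript with an LLM hosted on Workers AI.
  const reply = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
    messages: [{ role: "user", content: transcript }],
  });
  return { transcript, reply };
}
```

<p>A text-to-speech call to Aura would then turn the LLM reply back into audio, closing the loop for a voice agent.</p>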
    <div>
      <h2>Try these models out today</h2>
      <a href="#try-these-models-out-today">
        
      </a>
    </div>
    <p>Check out our <a href="https://developers.cloudflare.com/workers-ai/"><u>developer docs</u></a> for more details, pricing, and how to get started with the newest partner models available on Workers AI.</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">35N861jwJHF4GEiRCDxWP</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
        <item>
            <title><![CDATA[First-party tags in seconds: Cloudflare integrates Google tag gateway for advertisers ]]></title>
            <link>https://blog.cloudflare.com/google-tag-gateway-for-advertisers/</link>
            <pubDate>Thu, 08 May 2025 18:15:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare introduces a one-click integration with Google tag gateway for advertisers. ]]></description>
            <content:encoded><![CDATA[ <p>If you’re a marketer, advertiser, or a business owner who runs your own website, there’s a good chance you’ve used Google tags to collect analytics or measure conversions. A <a href="https://support.google.com/analytics/answer/11994839?hl=en"><u>Google tag</u></a> is a single piece of code you can use across your entire website to send events to multiple destinations like Google Analytics and Google Ads.</p><p>Historically, the common way to deploy a Google tag was to serve the JavaScript payload directly from Google’s domain. This can work quite well, but it can sometimes impact performance and measurement accuracy. That’s why Google developed a way to deploy a Google tag from your own first-party infrastructure via <a href="https://developers.google.com/tag-platform/tag-manager/server-side"><u>server-side tagging</u></a>. However, this approach required deploying and maintaining a separate server, which adds cost and operational overhead.</p><p>Now, we’re excited to be Google’s launch partner and announce our direct integration of Google tag gateway for advertisers, providing many of the same performance and accuracy benefits of server-side tagging without the overhead of maintaining a separate server.</p><p>Any <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain</a> proxied through Cloudflare can now serve your Google tags directly from that domain. This gives you better measurement signals for your website and can enhance your campaign performance, with early testers seeing an average 11% uplift in data signals. The setup only requires a few clicks: if you already have a Google tag snippet on the page, no changes to that tag are required.</p><p>Oh, did we mention it’s free? 
We’ve heard great feedback from customers who participated in a closed beta, and we are excited to open it up to all customers on any <a href="https://www.cloudflare.com/plans/">Cloudflare plan</a> today.      </p>
    <div>
      <h3>Combining Cloudflare’s security and performance infrastructure with Google tag’s ease of use </h3>
      <a href="#combining-cloudflares-security-and-performance-infrastructure-with-google-tags-ease-of-use">
        
      </a>
    </div>
    <p>Google Tag Manager is <a href="https://radar.cloudflare.com/year-in-review/2024#website-technologies"><u>the most used tag management solution</u></a>: it makes a complex tagging ecosystem easy to use and requires less effort from web developers. That’s why we’re collaborating with the Ads measurement and analytics teams at Google to make the integration with Google tag gateway for advertisers as seamless and accessible as possible.</p><p>Site owners have two options for where to enable this feature: in the Google tag console, or via the Cloudflare dashboard. When logging into the Google tag console, you’ll see an option to enable Google tag gateway for advertisers in the Admin settings tab.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1QUzHjBrer762UOvypV2Fh/4695fb3996591f001bb02b1be88e41ad/image1.png" />
          </figure><p>Alternatively, if you already know your tag ID and have admin access to your site’s Cloudflare account, you can enable the feature and edit the measurement ID and path directly from the Cloudflare dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/amyXiwUzZ0X2V3BGzuOja/b4480e0fe1b420cf7942b0d0957fd6f5/image2.png" />
          </figure>
    <div>
      <h3>Improved performance and measurement accuracy  </h3>
      <a href="#improved-performance-and-measurement-accuracy">
        
      </a>
    </div>
    <p>Previously, if site owners wanted to serve first-party tags from their own domain, they had to set up a complex configuration: create a <a href="https://www.cloudflare.com/learning/dns/dns-records/dns-cname-record/">CNAME</a> entry for a new subdomain, create an Origin Rule to forward requests, and create a Transform Rule to include geolocation information.</p><p>This new integration dramatically simplifies the setup into a one-click integration by leveraging Cloudflare's position as a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxy</u></a> for your domain.</p><p>In Google Tag Manager’s Admin settings, you can now connect your Cloudflare account and configure your measurement ID directly in Google, and Google will push your configuration to Cloudflare.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ToLjUY5vNVWV5AGjxeONF/b79b4b32c24080b2461860cea58232a3/image3.png" />
          </figure><p>When you enable the Google tag gateway for advertisers, specific calls to Google’s measurement servers from your website are intercepted and re-routed through your domain. The result: instead of the browser directly requesting the tag script from a Google domain (e.g., <code>www.googletagmanager.com</code>), the request is routed seamlessly through your own domain (e.g., <code>www.example.com/metrics</code>).</p><p>Cloudflare acts as an intermediary for these requests. It first securely fetches the necessary Google tag JavaScript files from Google's servers in the background, then serves these scripts back to the end user's browser from your domain. This makes the request appear as a first-party request.</p><p>A bit more on how this works: When a browser requests <code>https://example.com/gtag/js?id=G-XXXX</code>, Cloudflare intercepts and rewrites the path into the original Google endpoint, preserving all query-string parameters and normalizing the <b>Origin</b> and <b>Referer</b> headers to match Google’s expectations. It then fetches the script on your behalf, and routes all subsequent measurement payloads through the same first-party proxy to the appropriate Google collection endpoints.</p><p>This setup also impacts how cookies are stored from your domain. A <a href="https://www.cloudflare.com/learning/privacy/what-are-cookies/"><u>cookie</u></a> is a small text file that a website asks your browser to store on your computer. When you visit other pages on that same website, or return later, your browser sends that cookie back to the website's server. This allows the site to remember information about you or your preferences, like whether a user is logged in, items in a shopping cart, or, in the case of analytics and advertising, an identifier to recognize your browser across visits.</p><p>With Cloudflare’s integration with Google tag gateway for advertisers, the tag script itself is delivered <i>from your own domain</i>. 
When this script instructs the browser to set a cookie, the cookie is created and stored under your website's domain. </p>
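<p>Conceptually, the first-party rewrite is a host swap that keeps the path and query string intact. A simplified sketch follows; the upstream endpoint here is illustrative, and the real gateway also normalizes headers and routes measurement payloads, with the exact mapping handled internally by Cloudflare:</p>

```javascript
// Simplified illustration of the first-party rewrite: swap the first-party
// host for Google's tag server while preserving the path and query string.
// The upstream endpoint is an example, not the actual internal mapping.
function rewriteToGoogle(requestUrl) {
  const url = new URL(requestUrl);
  // Reuse the original pathname and search so parameters like ?id=G-XXXX survive.
  return new URL(url.pathname + url.search, "https://www.googletagmanager.com").toString();
}
```

<p>Because the browser only ever talks to your own domain, the script and any cookies it sets are treated as first-party.</p>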
    <div>
      <h3>How can I get started? </h3>
      <a href="#how-can-i-get-started">
        
      </a>
    </div>
    <p>Detailed instructions to get started can be found <a href="https://developers.cloudflare.com/google-tag-gateway/"><u>here</u></a>. You can also log in to your Cloudflare Dashboard, navigate to the Engagement Tab, and select Google tag gateway in the navigation to set it up directly in the Cloudflare dashboard.</p> ]]></content:encoded>
            <category><![CDATA[Advertising]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Google Analytics]]></category>
            <category><![CDATA[Google]]></category>
            <guid isPermaLink="false">3wpdZp6NrwT8NcND208zZT</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Meta’s Llama 4 is now available on Workers AI]]></title>
            <link>https://blog.cloudflare.com/meta-llama-4-is-now-available-on-workers-ai/</link>
            <pubDate>Sun, 06 Apr 2025 03:22:00 GMT</pubDate>
            <description><![CDATA[ Llama 4 Scout 17B Instruct is now available on Workers AI: use this multimodal, Mixture of Experts AI model on Cloudflare's serverless AI platform to build next-gen AI applications. ]]></description>
            <content:encoded><![CDATA[ <p>As one of Meta’s launch partners, we are excited to make Meta’s latest and most powerful model, Llama 4, available on the Cloudflare <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> platform starting today. Check out the <a href="https://developers.cloudflare.com/workers-ai/models/llama-4-scout-17b-16e-instruct"><u>Workers AI Developer Docs</u></a> to begin using Llama 4 now.</p>
    <div>
      <h3>What’s new in Llama 4?</h3>
      <a href="#whats-new-in-llama-4">
        
      </a>
    </div>
    <p>Llama 4 is an industry-leading release that pushes forward the frontiers of open-source generative Artificial Intelligence (AI) models. Llama 4 relies on a novel design that combines a <a href="#what-is-a-mixture-of-experts-model"><u>Mixture of Experts</u></a> architecture with an early-fusion backbone that allows it to be natively multimodal.</p><p>The Llama 4 “herd” is made up of two models: Llama 4 Scout (109B total parameters, 17B active parameters) with 16 experts, and Llama 4 Maverick (400B total parameters, 17B active parameters) with 128 experts. The Llama 4 Scout model is available on Workers AI today.</p><p>Llama 4 Scout has a context window of up to 10 million (10,000,000) tokens, which makes it one of the first open-source models to support a window of that size. A larger context window makes it possible to hold longer conversations, deliver more personalized responses, and support better <a href="https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/"><u>Retrieval Augmented Generation</u></a> (RAG). For example, users can take advantage of that increase to summarize multiple documents or reason over large codebases. At launch, Workers AI supports a context window of 131,000 tokens, and we’ll be working to increase this in the future.</p><p>Llama 4 does not compromise parameter depth for speed. Despite having 109 billion total parameters, the Mixture of Experts (MoE) architecture intelligently activates only a fraction of those parameters during inference. This delivers faster responses while retaining the quality of the full 109B-parameter model.</p>
    <div>
      <h3>What is a Mixture of Experts model?</h3>
      <a href="#what-is-a-mixture-of-experts-model">
        
      </a>
    </div>
    <p>A Mixture of Experts (MoE) model is a type of <a href="https://arxiv.org/abs/2209.01667"><u>Sparse Transformer</u></a> model composed of individual specialized neural networks called “experts”. MoE models also have a “router” component that decides which experts each input token is sent to. These specialized experts work together to provide deeper results and faster inference times, increasing both model quality and performance.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nQnnpYyTW5pLVPofbW6YD/3f9e79c13a419220cda20e7cae43c578/image2.png" />
          </figure><p>For an illustrative example, let’s say there’s an expert that’s really good at generating code while another expert is really good at creative writing. When a request comes in to write a <a href="https://en.wikipedia.org/wiki/Fibonacci_sequence"><u>Fibonacci</u></a> algorithm in Haskell, the router sends the input tokens to the coding expert. The other experts may stay inactive, so the model only needs to use the smaller, specialized neural network to solve the problem.</p><p>In the case of Llama 4 Scout, this means the model is only using one expert (17B parameters) instead of the full 109B total parameters of the model. In reality, the model probably needs to use multiple experts to handle a request, but the point still stands: an MoE architecture is incredibly efficient for the breadth of problems it can handle and the speed at which it can handle them.</p><p>MoE also makes it more efficient to train models. We recommend reading <a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/"><u>Meta’s blog post</u></a> on how they trained the Llama 4 models. While more efficient to train, hosting an MoE model for inference can be more challenging: you need to load the full model weights (over 200 GB) into GPU memory, and supporting a larger context window also requires keeping more memory available in a key-value (KV) cache.</p><p>Thankfully, Workers AI solves this by offering Llama 4 Scout as a serverless model, meaning that you don’t have to worry about infrastructure, hardware, memory, or anything else: we do all of that for you, so you are only one API request away from interacting with Llama 4.</p>
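<p>The routing idea can be illustrated with a toy sketch. This is only a caricature: a real MoE router is a learned gating network that scores experts per token from embeddings, not a keyword rule.</p>

```javascript
// Toy illustration of Mixture-of-Experts routing: a router picks one
// specialized "expert" per request, so only a fraction of the model's
// total capacity does any work for a given input.
const experts = {
  coding: (prompt) => `// code for: ${prompt}`,
  writing: (prompt) => `A story about: ${prompt}`,
};

// Stand-in router: keyword matching instead of a learned per-token gate.
function route(prompt) {
  return /\b(code|function|algorithm)\b/i.test(prompt) ? "coding" : "writing";
}

function moeRun(prompt) {
  const expert = route(prompt); // only this expert is activated
  return { expert, output: experts[expert](prompt) };
}
```

<p>In a real MoE layer the gate typically activates the top-k experts per token and mixes their outputs, but the efficiency argument is the same: most experts stay idle for any given input.</p>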
    <div>
      <h3>What is early-fusion?</h3>
      <a href="#what-is-early-fusion">
        
      </a>
    </div>
    <p>One challenge in building AI-powered applications is the need to stitch together multiple models, like a Large Language Model (LLM) and a vision model, to deliver a complete experience for the user. Llama 4 solves that problem by being natively multimodal, meaning the model can understand both text and images.</p><p>You might recall that <a href="https://developers.cloudflare.com/workers-ai/models/llama-3.2-11b-vision-instruct/"><u>Llama 3.2 11b</u></a> was also a vision model, but Llama 3.2 used separate parameters for vision and text. This means that when you sent an image request to the model, it only used the vision parameters to understand the image.</p><p>With Llama 4, all the parameters natively understand both text and images. This allowed Meta to train the model with large amounts of unlabeled text, image, and video data together. For the user, this means you don’t have to chain together multiple models, like a vision model and an LLM, for a multimodal experience: you can do it all with Llama 4.</p>
    <div>
      <h3>Try it out now!</h3>
      <a href="#try-it-out-now">
        
      </a>
    </div>
    <p>We are excited to be a launch partner with Meta, making it effortless for developers to use Llama 4 in Cloudflare Workers AI. The release brings an efficient, multimodal, highly capable, open-source model to anyone who wants to build AI-powered applications.</p><p>Cloudflare’s Developer Platform makes it possible to build complete applications that run alongside our Llama 4 inference. You can rely on our compute, storage, and agent layer running seamlessly with inference from models like Llama 4. Head over to our <a href="https://developers.cloudflare.com/workers-ai/models/llama-4-scout-17b-16e-instruct"><u>developer docs model page</u></a> for more information on using Llama 4 on Workers AI, including pricing, additional terms, and acceptable use policies.</p><p>Want to try it out without an account? Visit our <a href="https://playground.ai.cloudflare.com/"><u>AI playground</u></a> or get started building your AI experiences with Llama 4 and Workers AI.</p>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">3G2O7IP6rSTIhSEUVmIDkt</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Jesse Kipp</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Meta Llama 3.1 now available on Workers AI]]></title>
            <link>https://blog.cloudflare.com/meta-llama-3-1-available-on-workers-ai/</link>
            <pubDate>Tue, 23 Jul 2024 15:15:55 GMT</pubDate>
            <description><![CDATA[ Cloudflare is excited to be a launch partner with Meta to introduce Workers AI support for Llama 3.1 ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, we’re big supporters of the open-source community – and that extends to our approach for <a href="https://developers.cloudflare.com/workers-ai/">Workers AI</a> models as well. Our strategy for Cloudflare AI products is to provide a top-notch developer experience and toolkit that can help people build applications with open-source models.</p><p>We’re excited to be one of Meta’s launch partners to make their newest <a href="https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md">Llama 3.1 8B model</a> available to all Workers AI users on Day 1. You can run their latest model by simply swapping out your model ID to <code>@cf/meta/llama-3.1-8b-instruct</code> or test out the model on our <a href="https://playground.ai.cloudflare.com">Workers AI Playground</a>. Llama 3.1 8B is free to use on Workers AI until the model graduates out of beta.</p><p>Meta’s Llama collection of models has consistently shown high-quality performance in areas like general knowledge, steerability, math, tool use, and multilingual translation. Workers AI is excited to continue to distribute and serve the Llama collection of models on our serverless inference platform, powered by our globally distributed GPUs.</p><p>The Llama 3.1 model is particularly exciting, as it is released in a higher precision (bfloat16), incorporates function calling, and adds support for eight languages. Having multilingual support built in means that you can use Llama 3.1 to write prompts and receive responses directly in languages like English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Expanding model understanding to more languages means that your applications have a bigger reach across the world, and it’s all possible with just one model.</p>
            <pre><code>const answer = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
    stream: true,
    messages: [{
        "role": "user",
        "content": "Qu'est-ce que ç'est verlan en français?"
    }],
});</code></pre>
            <p>Llama 3.1 also introduces native function calling (also known as tool calls), which allows LLMs to generate structured JSON outputs that can then be fed into different APIs. This means that function calling is supported out-of-the-box, without the need for a fine-tuned variant of Llama that specializes in tool use. Having this capability built in means that you can use one model across various tasks.</p><p>Workers AI recently announced <a href="/embedded-function-calling">embedded function calling</a>, which is now usable with Meta Llama 3.1 as well. Our embedded function calling gives developers a way to run their inference tasks far more efficiently than traditional architectures, leveraging Cloudflare Workers to reduce the number of requests that need to be made manually. It also makes use of our open-source <a href="https://www.npmjs.com/package/@cloudflare/ai-utils">ai-utils</a> package, which helps you orchestrate the back-and-forth requests for function calling, along with other helper methods that can automatically generate tool schemas. Below is an example function call to Llama 3.1 with embedded function calling that then stores key-value pairs in Workers KV.</p>
            <pre><code>import { runWithTools } from "@cloudflare/ai-utils";

const response = await runWithTools(env.AI, "@cf/meta/llama-3.1-8b-instruct", {
  messages: [{ role: "user", content: "Greet the user and ask them a question" }],
  tools: [
    {
      name: "Store in memory",
      description: "Store everything that the user talks about in memory as a key-value pair.",
      parameters: {
        type: "object",
        properties: {
          key: {
            type: "string",
            description: "The key to store the value under.",
          },
          value: {
            type: "string",
            description: "The value to store.",
          },
        },
        required: ["key", "value"],
      },
      function: async ({ key, value }) =&gt; {
        await env.KV.put(key, value);

        return JSON.stringify({ success: true });
      },
    },
  ],
});</code></pre>
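Under the hood, embedded function calling automates the dispatch loop you would otherwise write by hand: read the model's tool calls, run the matching local function, and feed the results back. A toy sketch of just the dispatch step follows; the `ToolCall` shape and `dispatch` helper are our own illustration, not the ai-utils API:

```typescript
// Illustrative only: the rough shape of a model tool call, and a
// dispatcher that routes each call to a locally registered function.
type ToolCall = { name: string; arguments: Record<string, unknown> };

async function dispatch(
  calls: ToolCall[],
  impls: Record<string, (args: Record<string, unknown>) => Promise<string>>,
): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    const fn = impls[call.name];
    // Unknown tool names are reported back rather than thrown,
    // so the model can recover on the next turn.
    results.push(fn ? await fn(call.arguments) : `unknown tool: ${call.name}`);
  }
  return results;
}
```

runWithTools wraps this loop (plus schema generation and the follow-up inference request) so your Worker only declares the tools.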
            <p>We’re excited to see what you build with these new capabilities. As always, any use of the new model should comply with Meta’s <a href="https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/USE_POLICY.md">Acceptable Use Policy</a> and <a href="https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE">License</a>. Take a look at our <a href="https://developers.cloudflare.com/workers-ai/models/llama-3.1-8b-instruct/">developer documentation</a> to get started!</p> ]]></content:encoded>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Open Source]]></category>
            <guid isPermaLink="false">Mmf9yB6m0SRgCJfyxvYK8</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Meta Llama 3 available on Cloudflare Workers AI]]></title>
            <link>https://blog.cloudflare.com/meta-llama-3-available-on-cloudflare-workers-ai/</link>
            <pubDate>Thu, 18 Apr 2024 20:58:33 GMT</pubDate>
            <description><![CDATA[ We are thrilled to give developers around the world the ability to build AI applications with Meta Llama 3 using Workers AI. We are proud to be a launch partner with Meta for their newest 8B Llama 3 model ]]></description>
            <content:encoded><![CDATA[ <p>We are thrilled to give developers around the world the ability to build AI applications with Meta Llama 3 using Workers AI. We are proud to be a launch partner with Meta for their newest 8B Llama 3 model, and excited to continue our partnership to bring the best of open-source models to our inference platform.</p>
    <div>
      <h2>Workers AI</h2>
      <a href="#workers-ai">
        
      </a>
    </div>
    <p><a href="/workers-ai">Workers AI’s initial launch</a> in beta included support for Llama 2, as it was one of the most requested open source models from the developer community. Since that initial launch, we’ve seen developers build all kinds of innovative applications including knowledge sharing <a href="https://workers.cloudflare.com/built-with/projects/ai.moda/">chatbots</a>, creative <a href="https://workers.cloudflare.com/built-with/projects/Audioflare/">content generation</a>, and automation for <a href="https://workers.cloudflare.com/built-with/projects/Azule/">various workflows</a>.</p><p>At Cloudflare, we know developers want simplicity and flexibility, with the ability to build with multiple AI models while optimizing for accuracy, performance, and cost, among other factors. Our goal is to make it as easy as possible for developers to use their models of choice without having to worry about the complexities of hosting or deploying models.</p><p>As soon as we learned about the development of Llama 3 from our partners at Meta, we knew developers would want to start building with it as quickly as possible. Workers AI’s serverless inference platform makes it extremely easy and cost-effective to start using the latest large language models (LLMs). Meta’s commitment to developing and growing an open AI ecosystem makes it possible for customers of all sizes to use AI at scale in production. All it takes is a few lines of code to run inference with Llama 3:</p>
            <pre><code>export interface Env {
  // If you set another name in wrangler.toml as the value for 'binding',
  // replace "AI" with the variable name you defined.
  AI: any;
}

export default {
  async fetch(request: Request, env: Env) {
    const response = await env.AI.run('@cf/meta/llama-3-8b-instruct', {
      messages: [
        { role: "user", content: "What is the origin of the phrase Hello, World?" },
      ],
    });

    return new Response(JSON.stringify(response));
  },
};</code></pre>
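Workers AI models can also be reached from outside a Worker via the REST API. Below is a minimal sketch in TypeScript; the `runUrl` and `ask` helpers are our own illustration (not part of any SDK), and you must supply your own account ID and API token:

```typescript
// Sketch: calling the Workers AI REST endpoint with fetch.
const MODEL = "@cf/meta/llama-3-8b-instruct";

// Build the account-scoped run endpoint for a given model.
function runUrl(accountId: string, model: string): string {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/run/${model}`;
}

// Hypothetical helper: POST a single user message and return the parsed JSON.
async function ask(accountId: string, apiToken: string, prompt: string): Promise<unknown> {
  const res = await fetch(runUrl(accountId, MODEL), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  return res.json();
}
```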
            
    <div>
      <h2>Built with Meta Llama 3</h2>
      <a href="#built-with-meta-llama-3">
        
      </a>
    </div>
    <p>Llama 3 offers leading performance on a wide range of industry benchmarks. You can learn more about the architecture and improvements on Meta’s <a href="https://ai.meta.com/blog/meta-llama-3/">blog post</a>. Cloudflare Workers AI supports <a href="https://developers.cloudflare.com/workers-ai/models/llama-3-8b-instruct/">Llama 3 8B</a>, including the instruction fine-tuned model.</p><p>Meta’s testing shows that Llama 3 is the most advanced open LLM today on <a href="https://github.com/meta-llama/llama3/blob/main/eval_details.md?cf_target_id=1F7E4663A460CE17F25CF8ADDF6AB9F1">evaluation benchmarks</a> such as MMLU, GPQA, HumanEval, GSM-8K, and MATH. Llama 3 was trained on an increased number of training tokens (15T), allowing the model to have a better grasp on language intricacies. Its context window is double that of Llama 2, allowing the model to better understand lengthy passages with rich contextual data. Although the model supports an 8k context window, we currently support 2.8k, and are looking to support the full 8k through quantized models soon. The new model also introduces an efficient <a href="https://github.com/openai/tiktoken">tiktoken</a>-based tokenizer with a vocabulary of 128k tokens, which encodes more characters per token and achieves better performance on English and multilingual benchmarks. Because the vocabulary is four times larger, the embedding and output layers have four times as many parameters, making the model larger than the previous Llama 2 generation of models.</p><p>Under the hood, Llama 3 uses <a href="https://arxiv.org/abs/2305.13245">grouped-query attention</a> (GQA), which improves inference efficiency for longer sequences and also renders the 8B model architecturally equivalent to <a href="https://developers.cloudflare.com/workers-ai/models/mistral-7b-instruct-v0.1/">Mistral-7B</a>.
For tokenization, it uses byte-level <a href="https://huggingface.co/learn/nlp-course/en/chapter6/5">byte-pair encoding (BPE)</a>, similar to OpenAI’s GPT tokenizers. This allows tokens to represent any arbitrary byte sequence, even those that are not valid UTF-8. That makes the end-to-end model much more flexible in its representation of language, and leads to improved performance.</p><p>Along with the base Llama 3 models, Meta has released a suite of offerings with tools such as <a href="https://ai.meta.com/blog/meta-llama-3/">Llama Guard 2, Code Shield, and CyberSec Eval 2</a>, which we are hoping to release on our Workers AI platform shortly.</p>
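The byte-level idea can be seen in a toy sketch: the base vocabulary is the 256 possible byte values, so every input already has a representation, and learned merges then fuse frequent adjacent pairs into single tokens. The merge table below is invented purely for illustration; real tokenizers learn tens of thousands of merges from data:

```typescript
// Toy byte-level BPE: map text to raw bytes, then apply one merge pass.
function bytesOf(text: string): number[] {
  return Array.from(new TextEncoder().encode(text));
}

// Replace each adjacent pair found in `merges` with its merged token id.
function applyMerges(tokens: number[], merges: Map<string, number>): number[] {
  const out: number[] = [];
  let i = 0;
  while (i < tokens.length) {
    const key = `${tokens[i]},${tokens[i + 1]}`;
    if (i + 1 < tokens.length && merges.has(key)) {
      out.push(merges.get(key)!); // pair collapses to one token
      i += 2;
    } else {
      out.push(tokens[i]);
      i += 1;
    }
  }
  return out;
}
```

Because the base tokens are bytes rather than characters, no input can ever fall outside the vocabulary; a larger learned vocabulary simply means more text is covered by single merged tokens.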
    <div>
      <h2>Try it out now</h2>
      <a href="#try-it-out-now">
        
      </a>
    </div>
    <p>Meta Llama 3 8B is available in the <a href="https://developers.cloudflare.com/workers-ai/models/">Workers AI Model Catalog</a> today! Check out the <a href="https://developers.cloudflare.com/workers-ai/models/llama-3-8b-instruct/">documentation here</a> and as always if you want to share your experiences or learn more, join us in the <a href="https://discord.cloudflare.com">Developer Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Llama]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">3FHdMMB8JzNt8hkAYDcqVL</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Davina Zamanzadeh</dc:creator>
            <dc:creator>Isaac Rehg</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
    </channel>
</rss>