
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 23:03:17 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Developer Week 2025 wrap-up]]></title>
            <link>https://blog.cloudflare.com/developer-week-2025-wrap-up/</link>
            <pubDate>Mon, 14 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve closed out Developer Week 2025. Here’s a quick recap of the announcements and in-depth technical explorations that went out during the week. ]]></description>
            <content:encoded><![CDATA[ <p>As we conclude Developer Week 2025, we’re proud to reflect upon the capabilities we’ve added to our developer platform. It’s so rewarding to deliver products, features and tools that help developers build smarter and ship faster, and even more so hearing your responses throughout the week!</p><p>Our VP of Product, Rita Kozlov, <a href="https://blog.cloudflare.com/welcome-to-developer-week-2025/"><u>kicked off Developer Week 2025</u></a> discussing the ever-evolving landscape of development, particularly in the age of AI. AI is no longer just a buzzword or a trope for a science-fiction future — in the realm of modern development, it’s a core tenet (and utility) of how we build, innovate, and solve problems. It’s influencing how and how frequently we ship code, as well as enabling <i>anyone</i> to write it.</p><p>It’s exciting to not only witness this technical revolution, but also to be building a platform that enables developers to be part of it. We want to hear your feedback and see what you build with the new capabilities — reach out to us on <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a> or <a href="https://x.com/cloudflaredev"><u>X</u></a>.</p><p>Here’s a recap of our Developer Week 2025 announcements:</p>
    <div>
      <h3>Monday, April 7</h3>
      <a href="#monday-april-7">
        
      </a>
    </div>
    <table><tr><td><p><b>Announcement</b></p></td><td><p><b>Summary</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/building-ai-agents-with-mcp-authn-authz-and-durable-objects"><u>Piecing together the Agent puzzle: MCP, authentication &amp; authorization, and Durable Objects free tier </u></a></p></td><td><p>Toolkit for AI agents includes new Agents SDK support for MCP (Model Context Protocol) clients, authentication/authorization/hibernation for MCP servers, and Durable Objects free tier.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-autorag-on-cloudflare"><u>Introducing AutoRAG: Fully-Managed Retrieval-Augmented Generation on Cloudflare</u></a></p></td><td><p>Fully managed Retrieval-Augmented Generation (RAG) pipelines, powered by Cloudflare's global network and developer platform, simplify how you build and scale RAG pipelines to power your context-aware AI and search applications.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workflows-ga-production-ready-durable-execution"><u>Cloudflare Workflows is now GA: production-ready durable execution</u></a></p></td><td><p>Workflows — a durable execution engine built directly on top of Workers — is Generally Available and production-ready with new human-in-the-loop capabilities, more scale, and more metrics.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflare-acquires-outerbase-database-dx"><u>Cloudflare acquires Outerbase to expand database and agent developer experience capabilities</u></a></p></td><td><p>Cloudflare acquired Outerbase, expanding our database and agent developer experience capabilities.</p></td></tr></table>
    <div>
      <h3>Tuesday, April 8</h3>
      <a href="#tuesday-april-8">
        
      </a>
    </div>
    <table><tr><td><p><b>Announcement</b></p></td><td><p><b>Summary</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/building-global-mysql-apps-with-cloudflare-workers-and-hyperdrive"><u>Build global MySQL apps using Cloudflare Workers and Hyperdrive</u></a></p></td><td><p>Workers connect to your MySQL databases with Hyperdrive to deliver optimal performance for regional databases, with support for your favorite drivers and ORMs.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access"><u>Pools across the sea: how Hyperdrive speeds up access to databases and why we’re making it free</u></a></p></td><td><p>Hyperdrive, now available on the free tier, leverages key innovations to make global database connections fast. </p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-the-cloudflare-vite-plugin"><u>“Just use Vite”… with the Workers runtime</u></a></p></td><td><p>The Cloudflare Vite plugin integrates Vite, one of the most popular build tools for web development, with the Workers runtime. We announced the 1.0 release and official support for React Router v7.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/deploying-nextjs-apps-to-cloudflare-workers-with-the-opennext-adapter"><u>Deploy your Next.js app to Cloudflare Workers with the Cloudflare adapter for OpenNext</u></a></p></td><td><p>With the 1.0-beta release of the Cloudflare adapter for OpenNext, you can host your Next.js 14 and 15 applications on Cloudflare Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/full-stack-development-on-cloudflare-workers"><u>Your frontend, backend, and database — now in one Cloudflare Worker</u></a></p></td><td><p>You can now deploy static sites, full-stack, and stateful applications on Cloudflare Workers — the primitives are all here. Framework support for React Router v7, Astro, Vue, and more is generally available today, as is the Cloudflare Vite plugin.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/deploy-workers-applications-in-seconds"><u>Skip the setup: deploy a Workers application in seconds</u></a></p></td><td><p>Developers can set up and deploy a Workers application in seconds with a Deploy to Cloudflare button.</p></td></tr></table>
    <div>
      <h3>Wednesday, April 9</h3>
      <a href="#wednesday-april-9">
        
      </a>
    </div>
    <table><tr><td><p><b>Announcement</b></p></td><td><p><b>Summary</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-cloudflare-realtime-and-realtimekit"><u>Make your apps truly interactive with Cloudflare Realtime and RealtimeKit</u></a></p></td><td><p>We announced Cloudflare Realtime and RealtimeKit, a complete toolkit for shipping real-time audio and video apps in days with SDKs for Kotlin, React Native, Swift, JavaScript, and Flutter.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/secrets-store-beta"><u>Introducing Cloudflare Secrets Store (Beta): secure your secrets, simplify your workflow</u></a></p></td><td><p>Securely store, manage, and deploy account level secrets to Cloudflare Workers through Cloudflare Secrets Store, available in beta — with role-based access control, audit logging, and Wrangler support.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/snippets"><u>Cloudflare Snippets are now Generally Available</u></a></p></td><td><p>Cloudflare Snippets are generally available, enabling fast, cost-free JavaScript-based HTTP traffic modifications across all paid plans. </p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-workers-observability-logs-metrics-and-queries-all-in-one-place/"><u>Introducing Workers Observability: logs, metrics, and queries – all in one place</u></a></p></td><td><p>Workers Observability powers up with General Availability of Workers Logs and new Query Builder to help you investigate log events across all of your Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/network-performance-update-developer-week-2025"><u>Network performance update: Developer Week 2025</u></a></p></td><td><p>Cloudflare has been tracking and comparing our speed with other top networks since 2021. We take a look at how things have changed since our last update.</p></td></tr></table>
    <div>
      <h3>Thursday, April 10</h3>
      <a href="#thursday-april-10">
        
      </a>
    </div>
    <table><tr><td><p><b>Announcement</b></p></td><td><p><b>Summary</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/r2-data-catalog-public-beta"><u>R2 Data Catalog: Managed Apache Iceberg tables with zero egress fees</u></a></p></td><td><p>R2 Data Catalog is now in public beta: a managed Apache Iceberg data catalog built directly into your R2 bucket.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/d1-read-replication-beta"><u>Sequential consistency without borders: how D1 implements global read replication</u></a></p></td><td><p>D1, Cloudflare’s managed SQL database, announces global read replication beta.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflare-acquires-arroyo-pipelines-streaming-ingestion-beta"><u>Just landed: streaming ingestion on Cloudflare with Arroyo and Pipelines</u></a></p></td><td><p>We’ve just shipped our new streaming ingestion service, Pipelines. And, we’ve acquired Arroyo, enabling us to bring new SQL-based, stateful transformations to Pipelines and R2.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/making-super-slurper-five-times-faster"><u>Making Super Slurper 5x faster with Workers, Durable Objects, and Queues</u></a></p></td><td><p>We re-architected Super Slurper from the ground up using our Developer Platform — leveraging Cloudflare Workers, Durable Objects, and Queues — and improved transfer speeds to R2 by up to 5x.</p></td></tr></table>
    <div>
      <h3>Friday, April 11</h3>
      <a href="#friday-april-11">
        
      </a>
    </div>
    <table><tr><td><p><b>Announcement</b></p></td><td><p><b>Summary</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workers-virtual-private-cloud"><u>A global virtual private cloud to build secure cross-cloud apps on Workers</u></a></p></td><td><p>We’re announcing Workers VPC: a global private network that allows applications deployed on Cloudflare Workers to connect to your legacy cloud infrastructure. Now, you can unlock access to your existing APIs and data in external clouds and build global, modern, cross-cloud apps on Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-agents-and-innovation-with-launchpad-cohort5"><u>Startup spotlight: building AI agents and accelerating innovation with Cohort #5</u></a></p></td><td><p>Explore how developers in Workers Launchpad are using Cloudflare to scale AI workloads and streamline automation.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/expanding-cloudflares-startup-program"><u>Startup Program update: empowering every stage of the startup journey</u></a></p></td><td><p>Cloudflare’s Startup Program offers up to $250,000 in credits for companies building on our Developer Platform across 4 tiers: $5,000, $25,000, $100,000, and $250,000. </p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflare-containers-coming-2025/"><u>Simple, scalable, and global: Containers are coming to Cloudflare Workers in June 2025</u></a></p></td><td><p>Cloudflare Containers are coming this June. Run new types of workloads on our network with an experience that is simple, scalable, global and deeply integrated with Workers.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workers-ai-improvements"><u>Workers AI gets a speed boost, batch workload support, more LoRAs, new models, and a refreshed dashboard</u></a></p></td><td><p>Workers AI inference is faster with speculative decoding &amp; prefix caching. Use our new batch inference for handling large request volumes seamlessly. Build tailored AI apps with more LoRA options. Lastly, new models and a refreshed dashboard round out this Developer Week update for Workers AI.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/simplifying-ncmec-reporting-with-cloudflare-workflows"><u>How we simplified NCMEC reporting with Cloudflare Workflows</u></a></p></td><td><p>Cloudflare replaced a queues-based architecture in our <a href="https://www.missingkids.org/home"><u>National Center for Missing &amp; Exploited Children</u></a> (NCMEC) reporting system with Cloudflare Workflows for a structured, retryable workflow that’s easier to debug and maintain.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/azul-certificate-transparency-log"><u>A next-generation Certificate Transparency log built on Cloudflare Workers</u></a></p></td><td><p>With recent developments in Certificate Transparency (CT), Cloudflare built a next-generation CT log on top of Cloudflare’s Developer Platform.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/kJLEDuWxj3rUFlNNpbL5M/bb81a64ba8d301d8ba707af4362ec806/image1.png" />
          </figure><p>Even though 2025 Developer Week has come to a close, we can’t wait to hear what you’re building and hope you’ll share it with us on <a href="https://x.com/cloudflaredev"><u>X</u></a> or <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a>. If you’re looking to get started, check out our <a href="https://developers.cloudflare.com/products/?product-group=Developer+platform"><u>developer documentation</u></a>. </p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">XKeWT3uNotBgN0sHVxCmb</guid>
            <dc:creator>Vy Ton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Piecing together the Agent puzzle: MCP, authentication & authorization, and Durable Objects free tier]]></title>
            <link>https://blog.cloudflare.com/building-ai-agents-with-mcp-authn-authz-and-durable-objects/</link>
            <pubDate>Mon, 07 Apr 2025 13:10:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare delivers toolkit for AI agents with new Agents SDK support for MCP (Model Context Protocol) clients, authentication/authorization/hibernation for MCP servers and Durable Objects free tier.  ]]></description>
            <content:encoded><![CDATA[ <p>It’s not a secret that at Cloudflare <a href="https://blog.cloudflare.com/build-ai-agents-on-cloudflare/"><u>we are bullish</u></a> on the future of <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agents</a>. We’re excited about a future where AI can not only co-pilot alongside us, but where we can actually start to delegate entire tasks to AI. </p><p>While it hasn’t been too long since we <a href="https://blog.cloudflare.com/build-ai-agents-on-cloudflare/"><u>first announced</u></a> our Agents SDK to make it easier for developers to build agents, building towards an agentic future requires continuous delivery towards this goal. Today, we’re making several announcements to help accelerate agentic development, including:</p><ul><li><p><b>New Agents SDK capabilities:</b> Build remote MCP clients, with transport and authentication built-in, to allow AI agents to connect to external services. </p></li><li><p><a href="https://developers.cloudflare.com/agents/model-context-protocol/authorization/#3-bring-your-own-oauth-provider"><b><u>BYO Auth provider for MCP</u></b></a><b>:</b> Integrations with <a href="https://stytch.com/"><u>Stytch</u></a>, <a href="https://auth0.com/"><u>Auth0</u></a>, and <a href="https://workos.com/"><u>WorkOS</u></a> to add authentication and authorization to your remote MCP server. </p></li><li><p><a href="https://developers.cloudflare.com/agents/model-context-protocol/mcp-agent-api/#hibernation-support"><b><u>Hibernation for McpAgent</u></b></a><b>:</b> Automatically sleep stateful, remote MCP servers when inactive and wake them when needed. This allows you to maintain connections for long-running sessions while ensuring you’re not paying for idle time. </p></li><li><p><a href="https://developers.cloudflare.com/changelog/2025-04-07-durable-objects-free-tier"><b><u>Durable Objects free tier</u></b></a><b>:</b> We view <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/">Durable Objects</a> as a key component for building agents, and if you’re using our Agents SDK, you need access to it. Until today, Durable Objects was only accessible as part of our paid plans, and today we’re excited to include it in our free tier.</p></li><li><p><a href="https://blog.cloudflare.com/workflows-ga-production-ready-durable-execution"><b><u>Workflows GA</u></b></a><b>:</b> Enables you to ship production-ready, long-running, multi-step actions in agents.</p></li><li><p><a href="https://blog.cloudflare.com/introducing-autorag-on-cloudflare"><b><u>AutoRAG</u></b></a><b>:</b> Helps you <a href="https://www.cloudflare.com/learning/ai/how-to-build-rag-pipelines/">integrate context-aware AI</a> into your applications, in just a few clicks</p></li><li><p><a href="https://agents.cloudflare.com"><b><u>agents.cloudflare.com</u></b></a><b>:</b> our new landing page for all things agents.</p></li></ul>
    <div>
      <h2>New MCP capabilities in Agents SDK</h2>
      <a href="#new-mcp-capabilities-in-agents-sdk">
        
      </a>
    </div>
    <p>AI agents can now connect to and interact with external services through MCP (<a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>Model Context Protocol</u></a>). We’ve updated the Agents SDK to allow you to build a remote MCP client into your AI agent, with all the components — authentication flows, tool discovery, and connection management — built-in for you.</p><p>This allows you to build agents that can:</p><ol><li><p>Prompt the end user to grant access to a 3rd party service (MCP server).</p></li><li><p>Use tools from these external services, acting on behalf of the end user.</p></li><li><p>Call MCP servers from Workflows, scheduled tasks, or any part of your agent.</p></li><li><p>Connect to multiple MCP servers and automatically discover new tools or capabilities presented by the 3rd party service.</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/X3RvQHewsVwJhq3TVOD0w/bbc5690d2d687f7a390f91474b3eb385/1.png" />
          </figure><p>MCP (Model Context Protocol) — <a href="https://www.anthropic.com/news/model-context-protocol"><u>first introduced by Anthropic</u></a> — is quickly becoming the standard way for AI agents to interact with external services, with providers like OpenAI, Cursor, and Copilot adopting the protocol.</p><p>We <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>recently announced</u></a> support for <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>building remote MCP servers</u></a> on Cloudflare, and added an <code>McpAgent</code> class to our Agents SDK that automatically handles the remote aspects of MCP: transport and authentication/authorization. Now, we’re excited to extend the same capabilities to agents acting as MCP clients.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3nxl3bIRTbfRzpdLhHF720/41bea06c9e48b7d356d11a6f254b76ef/2.png" />
          </figure><p>Want to see it in action? Use the button below to deploy a fully remote MCP client that can be used to connect to remote MCP servers.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/mcp-client"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>AI Agents can now act as remote MCP clients, with transport and auth included</h2>
      <a href="#ai-agents-can-now-act-as-remote-mcp-clients-with-transport-and-auth-included">
        
      </a>
    </div>
    <p>AI agents need to connect to external services to access tools, data, and capabilities beyond their built-in knowledge. That means AI agents need to be able to act as remote MCP clients, so they can connect to remote MCP servers that are hosting these tools and capabilities. </p><p>We’ve added a new class, <code>MCPClientManager</code>, into the Agents SDK to give you all the tooling you need to allow your AI agent to make calls to external services via MCP. The <code>MCPClientManager</code> class automatically handles: </p><ul><li><p><b>Transport: </b>Connect to remote MCP servers over SSE and HTTP, with support for <a href="https://spec.modelcontextprotocol.io/specification/2025-03-26/basic/transports/#streamable-http"><u>Streamable HTTP</u></a> coming soon. </p></li><li><p><b>Connection management: </b>The client tracks the state of all connections and automatically reconnects if a connection is lost.</p></li><li><p><b>Capability discovery: </b>Automatically discovers all capabilities, tools, resources, and prompts presented by the MCP server.</p></li><li><p><b>Real-time updates</b>: When a server's tools, resources, or prompts change, the client automatically receives notifications and updates its internal state.</p></li><li><p><b>Namespacing: </b>When connecting to multiple MCP servers, all tools and resources are automatically namespaced to avoid conflicts.</p></li></ul>
    <div>
      <h3>Granting agents access to tools with built-in auth check for MCP Clients</h3>
      <a href="#granting-agents-access-to-tools-with-built-in-auth-check-for-mcp-clients">
        
      </a>
    </div>
    <p>We've integrated the complete OAuth authentication flow directly into the Agents SDK, so your AI agents can securely connect and authenticate to any remote MCP server without you having to build the authentication flow from scratch.</p><p>This allows you to give users a secure way to log in and explicitly grant access to allow the agent to act on their behalf by automatically: </p><ul><li><p>Supporting the OAuth 2.1 protocol.</p></li><li><p>Redirecting users to the service’s login page.</p></li><li><p>Generating the code challenge and exchanging an authorization code for an access token.</p></li><li><p>Using the access token to make authenticated requests to the MCP server.</p></li></ul><p>Here is an example of an agent that can securely connect to MCP servers by initializing the client manager, adding the server, and handling the authentication callbacks:</p>
            <pre><code>async onStart(): Promise&lt;void&gt; {
  // initialize MCPClientManager which manages multiple MCP clients with optional auth
  this.mcp = new MCPClientManager("my-agent", "1.0.0", {
    baseCallbackUri: `${serverHost}/agents/${agentNamespace}/${this.name}/callback`,
    storage: this.ctx.storage,
  });
}

async addMcpServer(url: string): Promise&lt;string&gt; {
  // Add one MCP client to our MCPClientManager
  const { id, authUrl } = await this.mcp.connect(url);
  // Return the authUrl so the user can be redirected there if they still need to authorize
  return authUrl;
}

async onRequest(req: Request): Promise&lt;Response&gt; {
  // handle the auth callback after finishing the MCP server auth flow
  if (this.mcp.isCallbackRequest(req)) {
    await this.mcp.handleCallbackRequest(req);
    return new Response("Authorized");
  }
  
  // ...
}</code></pre>
            <h3>Connecting to multiple MCP servers and discovering what capabilities they offer</h3><p>You can use the Agents SDK to connect an MCP client to multiple MCP servers simultaneously. This is particularly useful when you want your agent to access and interact with tools and resources served by different service providers. </p><p>The <code>MCPClientManager</code> class maintains connections to multiple MCP servers through the <code>mcpConnections</code> object, a dictionary that maps unique server names to their respective <code>MCPClientConnection</code> instances. </p><p>When you register a new server connection using <code>connect()</code>, the manager: </p><ol><li><p>Creates a new connection instance with server-specific authentication.</p></li><li><p>Initializes the connection and registers for server capability notifications.</p></li></ol>
            <pre><code>async onStart(): Promise&lt;void&gt; {
  // Connect to an image generation MCP server
  await this.mcp.connect("https://image-gen.example.com/mcp/sse");
  
  // Connect to a code analysis MCP server
  await this.mcp.connect("https://code-analysis.example.org/sse");
  
  // Now we can access tools with proper namespacing
  const allTools = this.mcp.listTools();
  console.log(`Total tools available: ${allTools.length}`);
}</code></pre>
            <p>Each connection manages its own authentication context, allowing one AI agent to authenticate to multiple servers simultaneously. In addition, <code>MCPClientManager</code> automatically handles namespacing to prevent collisions between tools with identical names from different servers. </p><p>For example, if both an “Image MCP Server” and “Code MCP Server” have a tool named “analyze”, they will both be independently callable without any naming conflicts.</p>
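            <p>To make the namespacing concrete, here is a minimal sketch (not from the SDK documentation) that lists every tool across connections and groups them by the server they came from. The <code>serverId</code> field is an assumption for illustration; check the return shape of <code>listTools()</code> for the actual property names.</p>
            <pre><code>// Minimal sketch: group namespaced tools by their originating server.
// Assumption: each entry returned by listTools() identifies the server it came
// from (shown here as a hypothetical serverId field) alongside the tool name.
const toolsByServer = new Map();

for (const tool of this.mcp.listTools()) {
  const existing = toolsByServer.get(tool.serverId) ?? [];
  existing.push(tool.name);
  toolsByServer.set(tool.serverId, existing);
}

// Two servers can each expose an "analyze" tool without colliding, e.g.
// { "image-server": ["analyze", ...], "code-server": ["analyze", ...] }
console.log(Object.fromEntries(toolsByServer));</code></pre>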
    <div>
      <h2>Use Stytch, Auth0, and WorkOS to bring authentication &amp; authorization to your MCP server </h2>
      <a href="#use-stytch-auth0-and-workos-to-bring-authentication-authorization-to-your-mcp-server">
        
      </a>
    </div>
    <p>With MCP, users will have a new way of interacting with your application, no longer relying on the dashboard or API as the entrypoint. Instead, the service will now be accessed by AI agents that are acting on a user’s behalf. To ensure users and agents can connect to your service securely, you’ll need to extend your existing authentication and authorization system to support these agentic interactions, implementing login flows, permissions scopes, consent forms, and access enforcement for your MCP server. </p><p>We’re adding integrations with <a href="https://stytch.com/"><u>Stytch</u></a>, <a href="https://auth0.com/"><u>Auth0</u></a>, and <a href="https://workos.com/"><u>WorkOS</u></a> to make it easier for anyone building an MCP server to configure authentication &amp; authorization. </p><p>You can leverage our MCP server integration with Stytch, Auth0, and WorkOS to: </p><ul><li><p>Allow users to authenticate to your MCP server through email, social logins, SSO (single sign-on), and MFA (multi-factor authentication).</p></li><li><p>Define scopes and permissions that directly map to your MCP tools.</p></li><li><p>Present users with a consent page corresponding to the requested permissions.</p></li><li><p>Enforce the permissions so that agents can only invoke permitted tools.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6oYchjMwoMxwYxsqq4PObk/381937e89c249b87c1930295b407faf6/3.png" />
          </figure><p>Get started with the examples below by using the “Deploy to Cloudflare” button to deploy the demo MCP servers in your Cloudflare account. These demos include pre-configured authentication endpoints, consent flows, and permission models that you can tailor to fit your needs. Once you deploy the demo MCP servers, you can use the <a href="https://playground.ai.cloudflare.com/"><u>Workers AI playground</u></a>, a browser-based remote MCP client, to test out the end-to-end user flow. </p>
    <div>
      <h3>Stytch</h3>
      <a href="#stytch">
        
      </a>
    </div>
    <p><a href="https://stytch.com/docs/guides/connected-apps/mcp-servers"><u>Get started</u></a> with a remote MCP server that uses Stytch to allow users to sign in with email, Google login or enterprise SSO and authorize their AI agent to view and manage their company’s OKRs on their behalf. Stytch will handle restricting the scopes granted to the AI agent based on the user’s role and permissions within their organization. When authorizing the MCP Client, each user will see a consent page that outlines the permissions that the agent is requesting that they are able to grant based on their role.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/mcp-stytch-b2b-okr-manager"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>For more consumer use cases, deploy a remote MCP server for a To Do app that uses Stytch for authentication and MCP client authorization. Users can sign in with email and immediately access the To Do lists associated with their account, and grant access to any AI assistant to help them manage their tasks.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/mcp-stytch-consumer-todo-list"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Regardless of use case, Stytch allows you to easily turn your application into an OAuth 2.0 identity provider and make your remote MCP server into a Relying Party so that it can easily inherit identity and permissions from your app. To learn more about how Stytch is enabling secure authentication to remote MCP servers, read their <a href="http://stytch.com/blog/remote-mcp-stytch-cloudflare"><u>blog post</u></a>.</p><blockquote><p><i>“One of the challenges of realizing the promise of AI agents is enabling those agents to securely and reliably access data from other platforms. Stytch Connected Apps is purpose-built for these agentic use cases, making it simple to turn your app into an OAuth 2.0 identity provider to enable secure access to remote MCP servers. By combining Cloudflare Workers with Stytch Connected Apps, we're removing the barriers for developers, enabling them to rapidly transition from AI proofs-of-concept to secure, deployed implementations.” — Julianna Lamb, Co-Founder &amp; CTO, Stytch.</i></p></blockquote>
    <div>
      <h3>Auth0</h3>
      <a href="#auth0">
        
      </a>
    </div>
    <p>Get started with a remote MCP server that uses Auth0 to authenticate users through email, social logins, or enterprise SSO to interact with their todos and personal data through AI agents. The MCP server securely connects to API endpoints on behalf of users, showing exactly which resources the agent will be able to access once it gets consent from the user. In this implementation, access tokens are automatically refreshed during long running interactions.</p><p>To set it up, first deploy the protected API endpoint: </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-auth0/todos-api"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Then, deploy the MCP server that handles authentication through Auth0 and securely connects AI agents to your API endpoint. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-auth0/mcp-auth0-oidc"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><blockquote><p><i>"Cloudflare continues to empower developers building AI products with tools like AI Gateway, Vectorize, and Workers AI. The recent addition of Remote MCP servers further demonstrates that Cloudflare Workers and Durable Objects are a leading platform for deploying serverless AI. We’re very proud that Auth0 can help solve the authentication and authorization needs for these cutting-edge workloads." — Sandrino Di Mattia, Auth0 Sr. Director, Product Architecture.</i></p></blockquote>
    <div>
      <h3>WorkOS</h3>
      <a href="#workos">
        
      </a>
    </div>
    <p>Get started with a remote MCP server that uses WorkOS's AuthKit to authenticate users and manage the permissions granted to AI agents. In this example, the MCP server dynamically exposes tools based on the user's role and access rights. All authenticated users get access to the <code>add</code> tool, but only users who have been assigned the <code>image_generation</code> permission in WorkOS can grant the AI agent access to the image generation tool. This showcases how MCP servers can conditionally expose capabilities to AI agents based on the authenticated user's role and permission.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-authkit"><img src="https://deploy.workers.cloudflare.com/button" /></a>
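    <p>To illustrate the pattern in code, here is a hedged sketch of an <code>McpAgent</code> that registers tools conditionally based on permissions passed in from the authorization layer. The <code>permissions</code> array on <code>this.props</code> is an assumption for illustration; the exact shape depends on how your OAuth provider populates the props.</p>
            <pre><code>import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

export class PermissionedMCP extends McpAgent {
  server = new McpServer({ name: "Demo", version: "1.0.0" });

  async init() {
    // Every authenticated user can call the basic "add" tool.
    this.server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) =&gt; ({
      content: [{ type: "text", text: String(a + b) }],
    }));

    // Assumption: the auth layer places the user's granted permissions on props.
    if (this.props?.permissions?.includes("image_generation")) {
      this.server.tool("generateImage", { prompt: z.string() }, async ({ prompt }) =&gt; ({
        // An image model call would go here; omitted for brevity.
        content: [{ type: "text", text: `Would generate an image for: ${prompt}` }],
      }));
    }
  }
}</code></pre>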
<p></p><blockquote><p><i>“MCP is becoming the standard for AI agent integration, but authentication and authorization are still major gaps for enterprise adoption. WorkOS Connect enables any application to become an OAuth 2.0 authorization server, allowing agents and MCP clients to securely obtain tokens for fine-grained permission authorization and resource access. With Cloudflare Workers, developers can rapidly deploy remote MCP servers with built-in OAuth and enterprise-grade access control. Together, WorkOS and Cloudflare make it easy to ship secure, enterprise-ready agent infrastructure.” — Michael Grinich, CEO of WorkOS.</i></p></blockquote>
    <div>
      <h2>Hibernate-able WebSockets: put AI agents to sleep when they’re not in use</h2>
      <a href="#hibernate-able-websockets-put-ai-agents-to-sleep-when-theyre-not-in-use">
        
      </a>
    </div>
    <p>Starting today, a new improvement is landing in the McpAgent class: support for the <a href="https://developers.cloudflare.com/durable-objects/best-practices/websockets/#websocket-hibernation-api"><u>WebSockets Hibernation API</u></a> that allows your MCP server to go to sleep when it’s not receiving requests and instantly wake up when it’s needed. That means that you now only pay for compute when your agent is actually working.</p><p>We <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>recently introduced</u></a> the <a href="https://developers.cloudflare.com/agents/model-context-protocol/tools/?cf_history_state=%7B%22guid%22%3A%22C255D9FF78CD46CDA4F76812EA68C350%22%2C%22historyId%22%3A11%2C%22targetId%22%3A%22DF3E523E0077ACCB6730439891CDD7D4%22%7D"><u>McpAgent class</u></a>, which allows developers to build remote MCP servers on Cloudflare by using Durable Objects to maintain stateful connections for every client session. We decided to build McpAgent to be stateful from the start, allowing developers to build servers that can remember context, user preferences, and conversation history. But maintaining client connections means that the session can remain active for a long time, even when it’s not being used. </p>
    <div>
      <h3>MCP Agents are hibernate-able by default</h3>
      <a href="#mcp-agents-are-hibernate-able-by-default">
        
      </a>
    </div>
    <p>You don’t need to change your code to take advantage of hibernation. With our latest SDK update, all McpAgent instances automatically include hibernation support, allowing your stateful MCP servers to sleep during inactive periods and wake up with their state preserved when needed. </p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>When a request comes in on the Server-Sent Events endpoint, /sse, the Worker initializes a WebSocket connection to the appropriate Durable Object for the session and returns an SSE stream back to the client. All responses flow over this stream.</p><p>The implementation leverages the WebSocket Hibernation API within Durable Objects. When periods of inactivity occur, the Durable Object can be evicted from memory while keeping the WebSocket connection open. If the WebSocket later receives a message, the runtime recreates the Durable Object and delivers the message to the appropriate handler.</p>
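    <p>The <code>McpAgent</code> class wires all of this up for you, but as a rough sketch of the underlying Hibernation API (independent of the SDK), a Durable Object opts in by accepting the WebSocket through <code>ctx.acceptWebSocket()</code> and implementing handler methods that the runtime calls after recreating the object:</p>
            <pre><code>import { DurableObject } from "cloudflare:workers";

export class SessionObject extends DurableObject {
  async fetch(request) {
    // Hand one end of the pair to the hibernation-aware runtime instead of
    // calling server.accept(), so the object can be evicted while idle.
    const { 0: client, 1: server } = new WebSocketPair();
    this.ctx.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }

  // Called even if the object was hibernated: the runtime recreates the
  // Durable Object and delivers the message to this handler.
  async webSocketMessage(ws, message) {
    ws.send(`echo: ${message}`);
  }

  async webSocketClose(ws, code, reason, wasClean) {
    ws.close(code, "session closed");
  }
}</code></pre>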
    <div>
      <h2>Durable Objects on free tier</h2>
      <a href="#durable-objects-on-free-tier">
        
      </a>
    </div>
    <p>To help you build AI agents on Cloudflare, we’re making <a href="http://developers.cloudflare.com/durable-objects/what-are-durable-objects/"><u>Durable Objects</u></a> available on the free tier, so you can start with zero commitment. With the Agents SDK, your AI agents deploy to Cloudflare running on Durable Objects.</p><p>Durable Objects offer compute alongside durable storage which, when combined with <a href="https://www.cloudflare.com/developer-platform/products/workers/">Workers</a>, unlocks stateful, serverless applications. Each Durable Object is a stateful coordinator for handling client real-time interactions, making requests to external services like LLMs, and creating agentic “memory” through state persistence in <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>zero-latency SQLite storage</u></a> — all tasks required in an AI agent. Durable Objects scale out to millions of agents effortlessly, with each agent created near the user who interacts with it for fast performance, all managed by Cloudflare. </p><p>Zero-latency SQLite storage in Durable Objects was <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/"><u>introduced in public beta</u></a> in September 2024 for Birthday Week. Since then, we’ve focused on closing feature gaps and matching the robustness of the pre-existing key-value storage in Durable Objects. We are excited to make SQLite storage generally available, with a 10 GB SQLite database per Durable Object, and recommend SQLite storage for all new Durable Object classes. The Durable Objects free tier can only access SQLite storage.</p><p><a href="https://www.cloudflare.com/plans/free/">Cloudflare’s free tier</a> allows you to build real-world applications. On the free plan, every Worker request can call a Durable Object. For <a href="https://developers.cloudflare.com/durable-objects/platform/pricing/"><u>usage-based pricing</u></a>, Durable Objects incur compute and storage usage with the following free tier limits.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Workers Free</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Workers Paid</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Compute: Requests</span></span></p>
                    </td>
                    <td>
                        <p><span><span>100,000 / day</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1 million / month included</span></span></p>
                        <p><span><span>+ $0.15 / million</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Compute: Duration</span></span></p>
                    </td>
                    <td>
                        <p><span><span>13,000 GB-s / day</span></span></p>
                    </td>
                    <td>
                        <p><span><span>400,000 GB-s / month  included </span></span></p>
                        <p><span><span>+ $12.50 / million GB-s</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Storage: Rows read</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5 million / day</span></span></p>
                    </td>
                    <td>
                        <p><span><span>25 billion / month included</span></span></p>
                        <p><span><span>+ $0.001 / million </span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Storage: Rows written</span></span></p>
                    </td>
                    <td>
                        <p><span><span>100,000 / day</span></span></p>
                    </td>
                    <td>
                        <p><span><span>50 million / month included</span></span></p>
                        <p><span><span>+ $1.00 / million</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Storage: SQL stored data</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5 GB (total)</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5 GB-month included</span></span></p>
                        <p><span><span>+ $0.20 / GB-month</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
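    <p>As a brief, illustrative sketch of the zero-latency SQLite storage described above (the table schema and method names here are made up for the example; <code>ctx.storage.sql</code> is the SQL storage API), a Durable Object backing an agent might persist its "memory" like this:</p>
            <pre><code>import { DurableObject } from "cloudflare:workers";

export class AgentMemory extends DurableObject {
  constructor(ctx, env) {
    super(ctx, env);
    // ctx.storage.sql is the SQLite database local to this Durable Object.
    this.sql = ctx.storage.sql;
    this.sql.exec(
      "CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, note TEXT, created_at TEXT)"
    );
  }

  remember(note) {
    this.sql.exec(
      "INSERT INTO memories (note, created_at) VALUES (?, ?)",
      note,
      new Date().toISOString()
    );
  }

  recall(limit = 10) {
    // Returns the most recent notes as plain objects.
    return this.sql
      .exec("SELECT note, created_at FROM memories ORDER BY id DESC LIMIT ?", limit)
      .toArray();
  }
}</code></pre>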
    <div>
      <h3>Find us at agents.cloudflare.com</h3>
      <a href="#find-us-at-agents-cloudflare-com">
        
      </a>
    </div>
    <p>We realize this is a lot of information to take in, but don’t worry. Whether you’re new to agents as a whole, or looking to learn more about how Cloudflare can help you build agents, today we launched a new site to help get you started — <a href="https://agents.cloudflare.com"><u>agents.cloudflare.com</u></a>. </p><p>Let us know what you build!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Model Context Protocol]]></category>
            <category><![CDATA[MCP]]></category>
            <guid isPermaLink="false">6lQQWDqELUkL4c1y13VL0V</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Vy Ton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Leveling up Workers AI: general availability and more new capabilities]]></title>
            <link>https://blog.cloudflare.com/workers-ai-ga-huggingface-loras-python-support/</link>
            <pubDate>Tue, 02 Apr 2024 13:01:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to make a series of announcements, including Workers AI, Cloudflare’s inference platform becoming GA and support for fine-tuned models with LoRAs and one-click deploys from HuggingFace. Cloudflare Workers now supports the Python programming language, and more ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YNXJ4s4e47U7MvddTlpz8/3a53be280a5e373b589eba37bc4740d0/Cities-with-GPUs-momentum-update.png" />
            
            </figure><p>Welcome to Tuesday – our AI day of Developer Week 2024! In this blog post, we’re excited to share an overview of our new AI announcements and vision, including news about Workers AI officially going GA with improved pricing, a GPU hardware momentum update, an expansion of our Hugging Face partnership, Bring Your Own LoRA fine-tuned inference, Python support in Workers, more providers in AI Gateway, and Vectorize metadata filtering.</p>
    <div>
      <h3>Workers AI GA</h3>
      <a href="#workers-ai-ga">
        
      </a>
    </div>
    <p>Today, we’re excited to announce that our Workers AI inference platform is now Generally Available. After months of being in open beta, we’ve improved our service with greater reliability and performance, unveiled pricing, and added many more models to our catalog.</p>
    <div>
      <h4>Improved performance &amp; reliability</h4>
      <a href="#improved-performance-reliability">
        
      </a>
    </div>
    <p>With Workers AI, our goal is to make AI inference as reliable and easy to use as the rest of Cloudflare’s network. Under the hood, we’ve upgraded the load balancing that is built into Workers AI. Requests can now be routed to more GPUs in more cities, and each city is aware of the total available capacity for AI inference. If the request would have to wait in a queue in the current city, it can instead be routed to another location, getting results back to you faster when traffic is high. With this, we’ve increased rate limits across all our models – most LLMs now have a limit of 300 requests per minute, up from 50 requests per minute during our beta phase. Smaller models have a limit of 1500-3000 requests per minute. Check out our <a href="https://developers.cloudflare.com/workers-ai/platform/limits/">Developer Docs for the rate limits</a> of individual models.</p>
    <div>
      <h4>Lowering costs on popular models</h4>
      <a href="#lowering-costs-on-popular-models">
        
      </a>
    </div>
    <p>Alongside our GA of Workers AI, we published a <a href="https://ai.cloudflare.com/#pricing-calculator">pricing calculator</a> for our 10 non-beta models earlier this month. We want Workers AI to be one of the most affordable and accessible solutions to run <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/">inference</a>, so we added a few optimizations to our models to make them more affordable. Now, Llama 2 is over 7x cheaper and Mistral 7B is over 14x cheaper to run than we had initially <a href="https://developers.cloudflare.com/workers-ai/platform/pricing/">published</a> on March 1. We want to continue to be the best platform for AI inference and will continue to roll out optimizations to our customers when we can.</p><p>As a reminder, our billing for Workers AI started on April 1st for our non-beta models, while beta models remain free and unlimited. We offer 10,000 <a href="/workers-ai#:~:text=may%20be%20wondering%20%E2%80%94-,what%E2%80%99s%20a%20neuron">neurons</a> per day for free to all customers. Workers Free customers will encounter a hard rate limit after 10,000 neurons in 24 hours while Workers Paid customers will incur usage at $0.011 per 1000 additional neurons.  Read our <a href="https://developers.cloudflare.com/workers-ai/platform/pricing/">Workers AI Pricing Developer Docs</a> for the most up-to-date information on pricing.</p>
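    <p>As a worked example using the numbers above: a Workers Paid account that consumes 100,000 neurons in a day is billed only for the 90,000 neurons beyond the free daily 10,000, i.e. 90 × $0.011 = $0.99 for that day.</p>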
    <div>
      <h4>New dashboard and playground</h4>
      <a href="#new-dashboard-and-playground">
        
      </a>
    </div>
    <p>Lastly, we’ve revamped our <a href="https://dash.cloudflare.com/?to=/:account/ai/workers-ai">Workers AI dashboard</a> and <a href="https://playground.ai.cloudflare.com/">AI playground</a>. The Workers AI page in the Cloudflare dashboard now shows analytics for usage across models, including neuron calculations to help you better predict pricing. The AI playground lets you quickly test and compare different models and configure prompts and parameters. We hope these new tools help developers start building on Workers AI seamlessly – go try them out!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3uSiHo9pV21DreFiLpUfPX/5aa5c8a2448da881a0872e3f550c39a2/image3-3.png" />
            
            </figure>
    <div>
      <h3>Run inference on GPUs in over 150 cities around the world</h3>
      <a href="#run-inference-on-gpus-in-over-150-cities-around-the-world">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qyZ48xVu70u80swQKPUau/1e85ea2b833139d277d5769d5a8c8c66/image5-2.png" />
            
            </figure><p>When we announced Workers AI back in September 2023, we set out to deploy GPUs to our data centers around the world. We plan to deliver on that promise and deploy inference-tuned GPUs almost everywhere by the end of 2024, making us the most widely distributed cloud-AI inference platform. We have over 150 cities with GPUs today and will continue to roll out more throughout the year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/451IgQCfz2j10xM8kXhZy0/67632b1b5f52e3387aa387e43c804324/image7-1.png" />
            
            </figure><p>We also have our next generation of compute servers with GPUs launching in Q2 2024, which means better performance, power efficiency, and improved reliability over previous generations. We provided a preview of our Gen 12 Compute server design in a <a href="/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor">December 2023 blog post</a>, with more details to come. With Gen 12 and future planned hardware launches, the next step is to support larger machine learning models and offer fine-tuning on our platform. This will allow us to achieve higher inference throughput, lower latency, and greater availability for production workloads, as well as expand support to new categories of workloads such as fine-tuning.</p>
    <div>
      <h3>Hugging Face Partnership</h3>
      <a href="#hugging-face-partnership">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ATIA1mxaeqHYE31cztdjg/33717b0fd20244f3089cf5d4c8a9c13f/image2-2.png" />
            
            </figure><p>We’re also excited to continue our partnership with Hugging Face in the spirit of bringing the best of open-source to our customers. Now, you can visit some of the most popular models on Hugging Face and easily click to run the model on Workers AI if it is available on our platform.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3geQJnlHZhMktG1eoEWYKE/36c76ee43eca9443b4e2e9c6d3e1df7e/image6-1.png" />
            
            </figure><p>We’re happy to announce that we’ve added 4 more models to our platform in conjunction with Hugging Face. You can now access the new <a href="https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2">Mistral 7B v0.2</a> model with improved context windows, <a href="https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B">Nous Research’s Hermes 2 Pro</a> fine-tuned version of Mistral 7B, <a href="https://huggingface.co/google/gemma-7b-it">Google’s Gemma 7B</a>, and <a href="https://huggingface.co/Nexusflow/Starling-LM-7B-beta">Starling-LM-7B-beta</a> fine-tuned from OpenChat. There are currently 14 models that we’ve curated with Hugging Face to be available for serverless GPU inference powered by Cloudflare’s Workers AI platform, with more coming soon. These models are all served using Hugging Face’s technology with a <a href="https://github.com/huggingface/text-generation-inference/">TGI</a> backend, and we work closely with the Hugging Face team to curate, optimize, and deploy these models.</p><blockquote><p><i>“We are excited to work with Cloudflare to make AI more accessible to developers. Offering the most popular open models with a serverless API, powered by a global fleet of GPUs is an amazing proposition for the Hugging Face community, and I can’t wait to see what they build with it.”</i>- <b>Julien Chaumond</b>, Co-founder and CTO, Hugging Face</p></blockquote><p>You can find all of the open models supported in Workers AI in this <a href="https://huggingface.co/collections/Cloudflare/hf-curated-models-available-on-workers-ai-66036e7ad5064318b3e45db6">Hugging Face Collection</a>, and the “Deploy to Cloudflare Workers AI” button is at the top of each model card. To learn more, read Hugging Face’s <a href="http://huggingface.co/blog/cloudflare-workers-ai">blog post</a> and take a look at our <a href="https://developers.cloudflare.com/workers-ai/models/">Developer Docs</a> to get started. Have a model you want to see on Workers AI? Send us a message on <a href="https://discord.cloudflare.com">Discord</a> with your request.</p>
    <div>
      <h3>Supporting fine-tuned inference - BYO LoRAs</h3>
      <a href="#supporting-fine-tuned-inference-byo-loras">
        
      </a>
    </div>
    <p>Fine-tuned inference is one of our most requested features for Workers AI, and we’re one step closer now with Bring Your Own (BYO) LoRAs. Using the popular <a href="https://www.cloudflare.com/learning/ai/what-is-lora/">Low-Rank Adaptation</a> method, researchers have figured out how to take a model and adapt <i>some</i> model parameters to the task at hand, rather than rewriting <i>all</i> model parameters like you would for a fully fine-tuned model. This means that you can get fine-tuned model outputs without the computational expense of fully fine-tuning a model.</p><p>We now support bringing trained LoRAs to Workers AI, where we apply the LoRA adapter to a base model at runtime to give you fine-tuned inference, at a fraction of the cost, size, and speed of a fully fine-tuned model. In the future, we want to be able to support fine-tuning jobs and fully fine-tuned models directly on our platform, but we’re excited to be one step closer today with LoRAs.</p>
            <pre><code>const response = await ai.run(
  "@cf/mistralai/mistral-7b-instruct-v0.2-lora", //the model supporting LoRAs
  {
      messages: [{"role": "user", "content": "Hello world"],
      raw: true, //skip applying the default chat template
      lora: "00000000-0000-0000-0000-000000000", //the finetune id OR name 
  }
);</code></pre>
            <p>BYO LoRAs are in open beta as of today for Gemma 2B and 7B, Llama 2 7B, and Mistral 7B models, with LoRA adapters up to 100MB in size, a max rank of 8, and up to 30 total LoRAs per account. As always, we expect you to use Workers AI and our new BYO LoRA feature with our <a href="https://www.cloudflare.com/service-specific-terms-developer-platform/#developer-platform-terms">Terms of Service</a> in mind, including any model-specific restrictions on use contained in the models’ license terms.</p><p>Read the technical deep dive blog post on <a href="/fine-tuned-inference-with-loras">fine-tuning with LoRA</a> and the <a href="https://developers.cloudflare.com/workers-ai/fine-tunes">developer docs</a> to get started.</p>
    <div>
      <h3>Write Workers in Python</h3>
      <a href="#write-workers-in-python">
        
      </a>
    </div>
    <p>Python is the second most popular programming language in the world (after JavaScript) and the language of choice for building AI applications. And starting today, in open beta, you can now <a href="https://ggu-python.cloudflare-docs-7ou.pages.dev/workers/languages/python/">write Cloudflare Workers in Python</a>. Python Workers support all <a href="https://developers.cloudflare.com/workers/configuration/bindings/">bindings</a> to resources on Cloudflare, including <a href="https://developers.cloudflare.com/vectorize/">Vectorize</a>, <a href="https://developers.cloudflare.com/d1/">D1</a>, <a href="https://developers.cloudflare.com/kv/">KV</a>, <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> and more.</p><p><a href="https://ggu-python.cloudflare-docs-7ou.pages.dev/workers/languages/python/packages/langchain/">LangChain</a> is the most popular framework for building LLM‑powered applications, and like how <a href="/langchain-and-cloudflare">Workers AI works with langchain-js</a>, the <a href="https://python.langchain.com/docs/get_started/introduction">Python LangChain library</a> works on Python Workers, as do <a href="https://ggu-python.cloudflare-docs-7ou.pages.dev/workers/languages/python/packages/">other Python packages</a> like FastAPI.</p><p>Workers written in Python are just as simple as Workers written in JavaScript:</p>
            <pre><code>from js import Response

async def on_fetch(request, env):
    return Response.new("Hello world!")</code></pre>
            <p>…and are configured by simply pointing at a .py file in your <code>wrangler.toml</code>:</p>
            <pre><code>name = "hello-world-python-worker"
main = "src/entry.py"
compatibility_date = "2024-03-18"
compatibility_flags = ["python_workers"]</code></pre>
            <p>There are no extra toolchain or precompilation steps needed. The <a href="https://pyodide.org/en/stable/">Pyodide</a> Python execution environment is provided for you, directly by the Workers runtime, mirroring how Workers written in JavaScript already work.</p><p>There’s lots more to dive into — take a look at the <a href="https://ggu-python.cloudflare-docs-7ou.pages.dev/workers/languages/python/">docs</a>, and check out our <a href="/python-workers">companion blog post</a> for details about how Python Workers work behind the scenes.</p>
    <div>
      <h2>AI Gateway now supports Anthropic, Azure, AWS Bedrock, Google Vertex, and Perplexity</h2>
      <a href="#ai-gateway-now-supports-anthropic-azure-aws-bedrock-google-vertex-and-perplexity">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YaNMed9Aw4YjhbZk75SGI/a53b499b743e36ad2635357118e6623f/image4-2.png" />
            
            </figure><p>Our <a href="/announcing-ai-gateway">AI Gateway</a> product helps developers better control and observe their AI applications, with analytics, caching, rate limiting, and more. We are continuing to add more providers to the product, including Anthropic, Google Vertex, and Perplexity, which we’re excited to announce today. We quietly rolled out Azure and Amazon Bedrock support in December 2023, which means that the most popular providers are now supported via AI Gateway, including Workers AI itself.</p><p>Take a look at our <a href="https://developers.cloudflare.com/ai-gateway/">Developer Docs</a> to get started with AI Gateway.</p>
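            <p>Using a provider through AI Gateway generally just means swapping the provider’s base URL for your gateway’s URL, so existing SDKs and HTTP clients keep working. As a rough sketch (the account ID, gateway name, and model below are placeholders, and the exact per-provider paths are documented in the Developer Docs), a Worker could call Anthropic through a gateway like this:</p>
            <pre><code>export interface Env {
  ANTHROPIC_API_KEY: string; // stored as a Worker secret
}

export default {
  async fetch(request: Request, env: Env) {
    // Same request body and headers as calling Anthropic directly; only the
    // base URL changes so the request is routed through AI Gateway.
    return fetch(
      "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_NAME/anthropic/v1/messages",
      {
        method: "POST",
        headers: {
          "x-api-key": env.ANTHROPIC_API_KEY,
          "anthropic-version": "2023-06-01",
          "content-type": "application/json",
        },
        body: JSON.stringify({
          model: "claude-3-haiku-20240307",
          max_tokens: 256,
          messages: [{ role: "user", content: "Hello world" }],
        }),
      }
    );
  },
};</code></pre>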
    <div>
      <h4>Coming soon: Persistent Logs</h4>
      <a href="#coming-soon-persistent-logs">
        
      </a>
    </div>
    <p>In Q2 of 2024, we will be adding persistent logs, so that you can push your logs (including prompts and responses) to <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>; custom metadata, so that you can tag requests with user IDs or other identifiers; and secrets management, so that you can securely manage your application’s API keys.</p><p>We want AI Gateway to be the control plane for your AI applications, allowing developers to dynamically evaluate and route requests to different models and providers. With our persistent logs feature, we want to enable developers to use their logged data to fine-tune models in one click, eventually running the fine-tune job and the fine-tuned model directly on our Workers AI platform. AI Gateway is just one product in our AI toolkit, but we’re excited about the workflows and use cases it can unlock for developers building on our platform, and we hope you’re excited about it too.</p>
    <div>
      <h3>Vectorize metadata filtering and future GA of million vector indexes</h3>
      <a href="#vectorize-metadata-filtering-and-future-ga-of-million-vector-indexes">
        
      </a>
    </div>
    <p>Vectorize is another component of our toolkit for AI applications. In open beta since September 2023, Vectorize allows developers to persist embeddings (vectors), like those generated from Workers AI <a href="https://developers.cloudflare.com/workers-ai/models/#text-embeddings">text embedding</a> models, and query for the closest match to support use cases like similarity search or recommendations. Without a vector database, model output is forgotten and can’t be recalled without extra costs to re-run a model.</p><p>Since Vectorize’s open beta, we’ve added <a href="https://developers.cloudflare.com/vectorize/reference/metadata-filtering/">metadata filtering</a>. Metadata filtering lets developers combine vector search with filtering for arbitrary metadata, supporting the query complexity needed in AI applications. We’re laser-focused on getting Vectorize ready for general availability, with a target launch date of June 2024, which will include support for multi-million vector indexes.</p>
            <pre><code>// Insert vectors with metadata
const vectors: Array&lt;VectorizeVector&gt; = [
  {
    id: "1",
    values: [32.4, 74.1, 3.2],
    metadata: { url: "/products/sku/13913913", streaming_platform: "netflix" }
  },
  {
    id: "2",
    values: [15.1, 19.2, 15.8],
    metadata: { url: "/products/sku/10148191", streaming_platform: "hbo" }
  },
...
];
let upserted = await env.YOUR_INDEX.upsert(vectors);

// Query with metadata filtering
let metadataMatches = await env.YOUR_INDEX.query(&lt;queryVector&gt;, { filter: { streaming_platform: "netflix" }} )</code></pre>
            
    <div>
      <h3>The most comprehensive Developer Platform to build AI applications</h3>
      <a href="#the-most-comprehensive-developer-platform-to-build-ai-applications">
        
      </a>
    </div>
    <p>On Cloudflare’s Developer Platform, we believe that all developers should be able to quickly build and ship full-stack applications  – and that includes AI experiences as well. With our GA of Workers AI, announcements for Python support in Workers, AI Gateway, and Vectorize, and our partnership with Hugging Face, we’ve expanded the world of possibilities for what you can build with AI on our platform. We hope you are as excited as we are – take a look at all our <a href="https://developers.cloudflare.com">Developer Docs</a> to get started, and <a href="https://discord.cloudflare.com/">let us know</a> what you build.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">6ItPe1u2j71C4DTSxJdccB</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Jesse Kipp</dc:creator>
            <dc:creator>Syona Sarma</dc:creator>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
            <dc:creator>Vy Ton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building D1: a Global Database]]></title>
            <link>https://blog.cloudflare.com/building-d1-a-global-database/</link>
            <pubDate>Mon, 01 Apr 2024 13:00:41 GMT</pubDate>
            <description><![CDATA[ D1, Cloudflare’s SQL database, is now generally available.  ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76hMKeBHewbCLm4XlVS4zL/92271c25576185cad1ab5e70e29ede58/image2-33.png" />
            
            </figure><p>Developers who build Worker applications focus on what they're creating, not the infrastructure required, and benefit from the global reach of <a href="https://www.cloudflare.com/network/">Cloudflare's network</a>. Many applications require persistent data, from personal projects to business-critical workloads. Workers offer various <a href="https://developers.cloudflare.com/workers/platform/storage-options/">database and storage options</a> tailored to developer needs, such as key-value and <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>.</p><p>Relational databases are the backbone of many applications today. <a href="https://developers.cloudflare.com/d1/">D1</a>, Cloudflare's relational database complement, is now generally available. Our journey from alpha in late 2022 to GA in April 2024 focused on enabling developers to build production workloads with the familiarity of relational data and SQL.</p>
    <div>
      <h3>What’s D1?</h3>
      <a href="#whats-d1">
        
      </a>
    </div>
    <p>D1 is Cloudflare's built-in, serverless relational database. For Worker applications, D1 offers SQL's expressiveness, leveraging SQLite's SQL dialect, and developer tooling integrations, including object-relational mappers (ORMs) like <a href="https://orm.drizzle.team/docs/connect-cloudflare-d1">Drizzle ORM</a>. D1 is accessible via <a href="https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/">Workers</a> or an <a href="https://developers.cloudflare.com/api/operations/cloudflare-d1-create-database">HTTP API</a>.</p><p>Serverless means no provisioning, default disaster recovery with <a href="https://developers.cloudflare.com/d1/reference/time-travel/">Time Travel</a>, and <a href="https://developers.cloudflare.com/d1/platform/pricing/">usage-based pricing</a>. D1 includes a generous free tier that allows developers to experiment with D1 and then graduate those trials to production.</p>
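            <p>As a quick sketch of what that looks like from a Worker (assuming a binding named <code>DB</code> in <code>wrangler.toml</code> and a hypothetical <code>Customers</code> table), querying D1 uses prepared statements:</p>
            <pre><code>export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env) {
    // Prepared statements with bound parameters guard against SQL injection.
    const { results } = await env.DB
      .prepare("SELECT CompanyName, ContactName FROM Customers WHERE CompanyName = ?")
      .bind("Bs Beverages")
      .all();
    return Response.json(results);
  },
};</code></pre>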
    <div>
      <h3>How to make data global?</h3>
      <a href="#how-to-make-data-global">
        
      </a>
    </div>
    <p>D1 GA has focused on reliability and developer experience. Now, we plan on extending D1 to better support globally-distributed applications.</p><p>In the Workers model, an incoming request invokes serverless execution in the closest data center. A Worker application can scale globally with user requests. Application data, however, remains stored in centralized databases, and global user traffic must account for access round trips to data locations. For example, a D1 database today resides in a single location.</p><p>Workers support <a href="https://developers.cloudflare.com/workers/configuration/smart-placement">Smart Placement</a> to account for frequently accessed data locality. Smart Placement invokes a Worker closer to centralized backend services like databases to lower latency and improve application performance. We’ve addressed Workers placement in global applications, but need to solve data placement.</p><p>The question, then, is how can D1, as Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/d1/">built-in database solution</a>, better support data placement for global applications? The answer is asynchronous read replication.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1I58tQFeSOcIyqrGfv9UnB/c1bc267b8cd2eb09332ae909429aeb5b/image4-30.png" />
            
            </figure>
    <div>
      <h3>What is asynchronous read replication?</h3>
      <a href="#what-is-asynchronous-read-replication">
        
      </a>
    </div>
    <p>In a server-based database management system, like Postgres, MySQL, SQL Server, or Oracle, a <b><i>read replica</i></b> is a separate database server that serves as a read-only, almost up-to-date copy of the primary database server. An administrator creates a read replica by starting a new server from a snapshot of the primary server and configuring the primary server to send updates asynchronously to the replica server. Since the updates are asynchronous, the read replica may be behind the current state of the primary server. The difference between the primary server and a replica is called <b><i>replica lag</i></b>. It's possible to have more than one read replica.</p><p>Asynchronous read replication is a time-proven solution for improving the performance of databases:</p><ul><li><p>It's possible to increase throughput by distributing load across multiple replicas.</p></li><li><p>It's possible to lower query latency when the replicas are close to the users making queries.</p></li></ul><p>Note that some database systems also offer synchronous replication. In a synchronous replicated system, writes must wait until all replicas have confirmed the write. Synchronous replicated systems can run only as fast as the slowest replica and come to a halt when a replica fails. If we’re trying to improve performance on a global scale, we want to avoid synchronous replication as much as possible!</p>
    <div>
      <h3>Consistency models &amp; read replicas</h3>
      <a href="#consistency-models-read-replicas">
        
      </a>
    </div>
    <p>Most database systems provide <a href="https://jepsen.io/consistency/models/read-committed">read committed</a>, <a href="https://jepsen.io/consistency/models/snapshot-isolation">snapshot isolation</a>, or <a href="https://jepsen.io/consistency/models/serializable">serializable</a> consistency models, depending on their configuration. For example, Postgres <a href="https://jepsen.io/consistency/models/read-committed">defaults to read committed</a> but can be configured to use stronger modes. SQLite provides <a href="https://www.sqlite.org/draft/isolation.html">snapshot isolation in WAL mode</a>. Stronger modes like snapshot isolation or serializable are easier to program against because they limit the permitted system concurrency scenarios and the kind of concurrency race conditions the programmer has to worry about.</p><p>Read replicas are updated independently, so each replica's contents may differ at any moment. If all of your queries go to the same server, whether the primary or a read replica, your results should be consistent according to whatever <a href="https://jepsen.io/consistency">consistency model</a> your underlying database provides. If you're using a read replica, the results may just be a little old.</p><p>In a server-based database with read replicas, it's important to stick with the same server for all of the queries in a session. If you switch among different read replicas in the same session, you compromise the consistency model provided to your application, which may violate your assumptions about how the database acts and cause your application to return incorrect results!</p><p><b>Example</b></p><p>Suppose there are two replicas, A and B. Replica A lags the primary database by 100ms, and replica B lags the primary database by 2s. Suppose a user wishes to:</p><ol><li><p>Execute query 1</p><p>1a. Do some computation based on query 1 results</p></li><li><p>Execute query 2 based on the results of the computation in (1a)</p></li></ol><p>At time t=10s, query 1 goes to replica A and returns. Query 1 sees what the primary database looked like at t=9.9s. Suppose it takes 500ms to do the computation, so at t=10.5s, query 2 goes to replica B. Remember, replica B lags the primary database by 2s, so at t=10.5s, query 2 sees what the database looks like at t=8.5s. As far as the application is concerned, the results of query 2 look like the database has gone backwards in time!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2R1p29j20c7szuRmY2Sjlp/52e4982c6c45e18c4d0c18835931b016/image3-34.png" />
            
            </figure><p>Formally, this is <a href="https://jepsen.io/consistency/models/read-committed">read committed</a> consistency since your queries will only see committed data, but there’s no other guarantee - not even that you can read your own writes. While read committed is a valid consistency model, it’s hard to reason about all of the possible race conditions the read committed model allows, making it difficult to write applications correctly.</p>
    <div>
      <h3>D1’s consistency model &amp; read replicas</h3>
      <a href="#d1s-consistency-model-read-replicas">
        
      </a>
    </div>
    <p>By default, D1 provides the <a href="https://jepsen.io/consistency/models/snapshot-isolation">snapshot isolation</a> that SQLite provides.</p><p>Snapshot isolation is a familiar consistency model that most developers find easy to use. We implement this consistency model in D1 by ensuring at most one active copy of the D1 database and routing all HTTP requests to that single database. While ensuring that there's at most one active copy of the D1 database is a gnarly distributed systems problem, it's one that we’ve solved by building D1 using <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a>. Durable Objects guarantee global uniqueness, so once we depend on Durable Objects, routing HTTP requests is easy: just send them to the D1 Durable Object.</p><p>This trick doesn't work if you have multiple active copies of the database since there's no 100% reliable way to look at a generic incoming HTTP request and route it to the same replica 100% of the time. Unfortunately, as we saw in the previous section's example, if we don't route related requests to the same replica 100% of the time, the best consistency model we can provide is read committed.</p><p>Given that it's impossible to route to a particular replica consistently, another approach is to route requests to any replica and ensure that the chosen replica responds to requests according to a consistency model that "makes sense" to the programmer. If we're willing to include a <a href="https://en.wikipedia.org/wiki/Lamport_timestamp">Lamport timestamp</a> in our requests, we can implement <a href="https://jepsen.io/consistency/models/sequential">sequential consistency</a> using any replica. The sequential consistency model has important properties like "<a href="https://jepsen.io/consistency/models/read-your-writes">read my own writes</a>" and "<a href="https://jepsen.io/consistency/models/writes-follow-reads">writes follow reads</a>," as well as a total ordering of writes. The total ordering of writes means that every replica will see transactions commit in the same order, which is exactly the behavior we want in a transactional system. Sequential consistency comes with the caveat that any individual entity in the system may be arbitrarily out of date, but that caveat is a feature for us because it allows us to consider replica lag when designing our APIs.</p><p>The idea is that if D1 gives applications a Lamport timestamp for every database query and those applications tell D1 the last Lamport timestamp they've seen, we can have each replica determine how to make queries work according to the sequential consistency model.</p><p>A robust, yet simple, way to implement sequential consistency with replicas is to:</p><ul><li><p>Associate a Lamport timestamp with every single request to the database. A monotonically increasing commit token works well for this.</p></li><li><p>Send all write queries to the primary database to ensure the total ordering of writes.</p></li><li><p>Send read queries to any replica, but have the replica delay servicing the query until the replica receives updates from the primary database that are later than the Lamport timestamp in the query.</p></li></ul><p>What's nice about this implementation is that it's fast in the common case where a read-heavy workload always goes to the same replica and will work even if requests get routed to different replicas.</p>
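            <p>To make the three rules above concrete, here’s a minimal sketch (illustrative only, not D1’s actual implementation) of the read path on a replica: writes always go to the primary, and a read waits until the replica has caught up to the last commit token the client has seen.</p>
            <pre><code>// Illustrative sketch only. The commit token plays the role of the
// Lamport timestamp described above.
class Replica {
  // Highest commit token this replica has applied from the primary.
  appliedToken = 0;

  // Called as asynchronous replication updates arrive from the primary.
  applyUpdate(token: number) {
    this.appliedToken = Math.max(this.appliedToken, token);
  }

  // Serve a read only once this replica has caught up to the last commit
  // token the client observed, preserving "read my own writes".
  async read(lastSeenToken: number, runQuery: () =&gt; unknown) {
    while (this.appliedToken &lt; lastSeenToken) {
      // The replica is behind the client's view of the database; wait for
      // replication from the primary to catch up before answering.
      await new Promise((resolve) =&gt; setTimeout(resolve, 10));
    }
    return runQuery();
  }
}</code></pre>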
    <div>
      <h3><b><i>Sneak Preview:</i></b> bringing read replication to D1 with Sessions</h3>
      <a href="#sneak-preview-bringing-read-replication-to-d1-with-sessions">
        
      </a>
    </div>
    <p>To bring read replication to D1, we will expand the D1 API with a new concept: <b>Sessions</b>. A Session encapsulates all the queries representing one logical session for your application. For example, a Session might represent all requests coming from a particular web browser or all requests coming from a mobile app. If you use Sessions, your queries will use whatever copy of the D1 database makes the most sense for your request, be that the primary database or a nearby replica. D1's Sessions implementation will ensure sequential consistency for all queries in the Session.</p><p>Since the Sessions API changes D1's consistency model, developers must opt-in to the new API. Existing D1 API methods are unchanged and will still have the same snapshot isolation consistency model as before. However, only queries made using the new Sessions API will use replicas.</p><p>Here’s an example of the D1 Sessions API:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    // When we create a D1 Session, we can continue where we left off
    // from a previous Session if we have that Session's last commit
    // token.  This Worker will return the commit token back to the
    // browser, so that it can send it back on the next request to
    // continue the Session.
    //
    // If we don't have a commit token, make the first query in this
    // session an "unconditional" query that will use the state of the
    // database at whatever replica we land on.
    const token = request.headers.get('x-d1-token') ?? 'first-unconditional'
    const session = env.DB.withSession(token)


    // Use this Session for all our Workers' routes.
    const response = await handleRequest(request, session)


    if (response.status === 200) {
      // Set the token so we can continue the Session in another request.
      response.headers.set('x-d1-token', session.latestCommitToken)
    }
    return response
  }
}


async function handleRequest(request: Request, session: D1DatabaseSession) {
  const { pathname } = new URL(request.url)


  if (pathname === '/api/orders/list') {
    // This statement is a read query, so it will execute on any
    // replica that has a commit equal or later than `token` we used
    // to create the Session.
    const { results } = await session.prepare('SELECT * FROM Orders').all()


    return Response.json(results)
  } else if (pathname === '/api/orders/add') {
    const order = await request.json&lt;Order&gt;()


    // This statement is a write query, so D1 will send the query to
    // the primary, which always has the latest commit token.
    await session
      .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
      .bind(order.orderName, order.customer, order.value)
      .run()


    // In order for the application to be correct, this SELECT
    // statement must see the results of the INSERT statement above.
    // The Session API keeps track of commit tokens for queries
    // within the session and will ensure that we won't execute this
    // query until whatever replica we're using has seen the results
    // of the INSERT.
    const { results } = await session
      .prepare('SELECT COUNT(*) FROM Orders')
      .all()


    return Response.json(results)
  }


  return new Response('Not found', { status: 404 })
}</code></pre>
            <p>D1’s implementation of Sessions makes use of commit tokens.  Commit tokens identify a particular committed query to the database.  Within a session, D1 will use commit tokens to ensure that queries are sequentially ordered.  In the example above, the D1 session ensures that the “SELECT COUNT(*)” query happens <i>after</i> the “INSERT” of the new order, <i>even if</i> we switch replicas between the awaits.  </p><p>There are several options for how you want to start a session in a Workers fetch handler.  <code>db.withSession(&lt;condition&gt;)</code> accepts these arguments:</p><table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span><b><code>condition</code> argument</b></span></p></td><td><p><span><b>Behavior</b></span></p></td></tr><tr><td><p><span><code>&lt;commit_token&gt;</code></span></p></td><td><p><span>(1) starts Session as of given commit token</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>first-unconditional</code></span></p></td><td><p><span>(1) if the first query is a read, read whatever the current replica has and use the commit token of that read as the basis for subsequent queries.  If the first query is a write, forward the query to the primary and use the commit token of the write as the basis for subsequent queries.</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>first-primary</code></span></p></td><td><p><span>(1) runs first query, read or write, against the primary</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>null</code> or missing argument</span></p></td><td><p><span>treated like <code>first-unconditional</code></span></p></td></tr></tbody></table><p>It’s possible to have a session span multiple requests by “round-tripping” the commit token from the last query of the session and using it to start a new session.  This enables individual user agents, like a web app or a mobile app, to make sure that all of the queries the user sees are sequentially consistent.</p><p>D1’s read replication will be built-in, will not incur extra usage or storage costs, and will require no replica configuration. Cloudflare will <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitor</a> an application’s D1 traffic and automatically create database replicas to spread user traffic across multiple servers in locations closer to users. Aligned with our serverless model, D1 developers shouldn’t worry about replica provisioning and management. Instead, developers should focus on designing applications for replication and data consistency tradeoffs.</p><p>We’re actively working on global read replication and realizing the above proposal (share feedback in the <a href="https://discord.cloudflare.com/">#d1 channel</a> on our Developer Discord). Until then, D1 GA includes several exciting new additions.</p>
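            <p>For illustration, the browser side of that round trip could look something like the sketch below, reusing the <code>x-d1-token</code> header from the Worker example above (the header name is just a convention chosen by that example, not part of the API):</p>
            <pre><code>// Sketch of a client that threads the commit token through every request
// so its queries stay sequentially consistent across requests.
let lastCommitToken: string | null = null;

async function callApi(path: string, init: RequestInit = {}) {
  const headers = new Headers(init.headers);
  if (lastCommitToken) {
    // Continue the Session from wherever the previous request left off.
    headers.set("x-d1-token", lastCommitToken);
  }
  const response = await fetch(path, { ...init, headers });
  // Remember the latest commit token for the next request.
  lastCommitToken = response.headers.get("x-d1-token") ?? lastCommitToken;
  return response;
}</code></pre>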
    <div>
      <h3>Check out D1 GA</h3>
      <a href="#check-out-d1-ga">
        
      </a>
    </div>
    <p>Since D1’s open beta in October 2023, we’ve focused on giving D1 the reliability, scalability, and developer experience demanded of critical services. We’ve invested in several new features that allow developers to build and debug applications faster with D1.</p><p><b>Build bigger with larger databases</b></p><p>We’ve listened to developers who requested larger databases. D1 now supports up to 10 GB databases, with 50K databases on the Workers Paid plan. With D1’s horizontal scaleout, applications can model database-per-business-entity use cases. Since beta, new D1 databases process 40x more requests than D1 alpha databases in a given period.</p><p><b>Import &amp; export bulk data</b></p><p>Developers import and export data for multiple reasons:</p><ul><li><p>Database migration testing to/from different database systems</p></li><li><p>Data copies for local development or testing</p></li><li><p>Manual backups for custom requirements like compliance</p></li></ul><p>While you could execute SQL files against D1 before, we’re improving <code>wrangler d1 execute --file=&lt;filename&gt;</code> to ensure large imports are atomic operations, never leaving your database in a halfway state. <code>wrangler d1 execute</code> also now defaults to local-first to protect your remote production database.</p><p>To import our <a href="https://github.com/cloudflare/d1-northwind/tree/main">Northwind Traders</a> demo database, you can download the <a href="https://github.com/cloudflare/d1-northwind/blob/main/db/schema.sql">schema</a> &amp; <a href="https://github.com/cloudflare/d1-northwind/blob/main/db/data.sql">data</a> and execute the SQL files.</p>
            <pre><code>npx wrangler d1 create northwind-traders

# omit --remote to run on a local database for development
npx wrangler d1 execute northwind-traders --remote --file=./schema.sql

npx wrangler d1 execute northwind-traders --remote --file=./data.sql</code></pre>
            <p>D1 database data &amp; schema, schema-only, or data-only can be exported to a SQL file using:</p>
            <pre><code># database schema &amp; data
npx wrangler d1 export northwind-traders --remote --output=./database.sql

# single table schema &amp; data
npx wrangler d1 export northwind-traders --remote --table='Employee' --output=./table.sql

# database schema only
npx wrangler d1 export &lt;database_name&gt; --remote --output=./database-schema.sql --no-data=true</code></pre>
            <p><b>Debug query performance</b></p><p>Understanding SQL query performance and debugging slow queries is a crucial step for production workloads. We’ve added the experimental <a href="https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights"><code>wrangler d1 insights</code></a> command to help developers analyze query performance metrics, which are also available via the <a href="https://developers.cloudflare.com/d1/observability/metrics-analytics/">GraphQL API</a>.</p>
            <pre><code># To find top 10 queries by average execution time:
npx wrangler d1 insights &lt;database_name&gt; --sort-type=avg --sort-by=time --count=10</code></pre>
            <p><b>Developer tooling</b></p><p>Various <a href="https://developers.cloudflare.com/d1/reference/community-projects">community developer projects</a> support D1. New additions include <a href="https://developers.cloudflare.com/d1/tutorials/d1-and-prisma-orm">Prisma ORM</a>, which, as of version 5.12.0, supports Workers and D1.</p>
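            <p>As a rough sketch of what that integration looks like (assuming a generated Prisma client with a hypothetical <code>users</code> model, the <code>driverAdapters</code> preview feature enabled, and a D1 binding named <code>DB</code>):</p>
            <pre><code>import { PrismaClient } from "@prisma/client";
import { PrismaD1 } from "@prisma/adapter-d1";

export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env) {
    // Wrap the D1 binding in Prisma's D1 driver adapter.
    const adapter = new PrismaD1(env.DB);
    const prisma = new PrismaClient({ adapter });

    const users = await prisma.users.findMany();
    return Response.json(users);
  },
};</code></pre>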
    <div>
      <h3>Next steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>The features available now with GA and our global read replication design are just the start of meeting the SQL database needs of developer applications. If you haven’t yet used D1, you can <a href="https://developers.cloudflare.com/d1/get-started/">get started</a> right now, visit D1’s <a href="https://developers.cloudflare.com/d1/">developer documentation</a> to spark some ideas, or <a href="https://discord.cloudflare.com/">join the #d1 channel</a> on our Developer Discord to talk to other D1 developers and our product engineering team.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dTCMeWMaQjhBd1SM8hM6O/2cbe9ec1a7a4fb0c061afe0e1c0bf666/image1-35.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Database]]></category>
            <guid isPermaLink="false">6y8LbpExPriYEVMgzCDp4B</guid>
            <dc:creator>Vy Ton</dc:creator>
            <dc:creator>Justin Mazzola Paluska</dc:creator>
        </item>
    </channel>
</rss>