
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 20:17:43 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Connect any React application to an MCP server in three lines of code]]></title>
            <link>https://blog.cloudflare.com/connect-any-react-application-to-an-mcp-server-in-three-lines-of-code/</link>
            <pubDate>Wed, 18 Jun 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We're open-sourcing use-mcp, a React library that connects to any MCP server in just 3 lines of code, as well as our AI Playground, a complete chat interface that can connect to remote MCP servers.  ]]></description>
            <content:encoded><![CDATA[ <p>You can <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>deploy</u></a> a <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>remote Model Context Protocol (MCP) server</u></a> on Cloudflare in just one click. Don’t believe us? Click the button below. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-authless"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>This will get you started with a remote MCP server that supports the latest MCP standards and is the reason why thousands of remote MCP servers have been deployed on Cloudflare, including ones from companies like <a href="https://blog.cloudflare.com/mcp-demo-day/"><u>Atlassian, Linear, PayPal, and more</u></a>. </p><p>But deploying servers is only half of the equation — we also wanted to make it just as easy to build and deploy remote MCP clients that can connect to these servers to enable new AI-powered service integrations. That's why we built <code>use-mcp</code>, a React library for connecting to remote MCP servers.</p><p>Today, we're open-sourcing two tools that make it easy to build and deploy MCP clients:</p><ol><li><p><a href="https://github.com/modelcontextprotocol/use-mcp"><u>use-mcp</u></a> — A React library that connects to any remote MCP server in just 3 lines of code, with transport, authentication, and session management automatically handled. We're excited to contribute this library to the <a href="https://github.com/modelcontextprotocol"><u>MCP ecosystem</u></a> to enable more developers to build remote MCP clients. </p></li><li><p><a href="https://github.com/cloudflare/ai/tree/main/playground/ai"><u>The AI Playground</u></a> — Cloudflare’s <a href="https://playground.ai.cloudflare.com/"><u>AI chat interface</u></a> that uses a number of LLMs to interact with remote MCP servers, with support for the latest MCP standard, and which you can now deploy yourself. </p></li></ol><p>Whether you're building an AI-powered chatbot, a support agent, or an internal company interface, you can leverage these tools to connect your AI agents and applications to external services via MCP. </p><p>Ready to get started? Click the button below to deploy your own instance of Cloudflare’s AI Playground to see it in action. 
</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/playground/ai"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>use-mcp: a React library for building remote MCP clients</h2>
      <a href="#use-mcp-a-react-library-for-building-remote-mcp-clients">
        
      </a>
    </div>
    <p><a href="https://github.com/modelcontextprotocol/use-mcp"><u>use-mcp</u></a> is a <a href="https://www.npmjs.com/package/use-mcp"><u>React library</u></a> that abstracts away all the complexity of building MCP clients. Add the <code>useMcp()</code> hook to any React application to connect to remote MCP servers that users can interact with. </p><p>Here’s all the code you need to connect to a remote MCP server: </p>
            <pre><code>import { useMcp } from 'use-mcp/react'

function MyComponent() {
  const { state, tools, callTool } = useMcp({
    url: 'https://mcp-server.example.com'
  })
  return &lt;div&gt;Your actual UI code&lt;/div&gt;
}</code></pre>
            <p>Just specify the URL, and you're instantly connected. </p><p>Behind the scenes, <code>use-mcp</code> handles the transport protocols (both Streamable HTTP and Server-Sent Events), authentication flows, and session management. It also includes a number of features to help you build reliable, scalable, and production-ready MCP clients. </p>
    <div>
      <h3>Connection management </h3>
      <a href="#connection-management">
        
      </a>
    </div>
    <p>Network reliability shouldn’t impact user experience. <code>use-mcp</code> manages connection retries and reconnections with a backoff schedule, so your client can recover from a network issue and continue where it left off. The hook exposes real-time <a href="https://github.com/modelcontextprotocol/use-mcp/tree/main?tab=readme-ov-file#return-value"><u>connection states</u></a> ("connecting", "ready", "failed"), allowing you to build responsive UIs that keep users informed, without writing any custom connection handling logic. </p>
            <pre><code>const { state } = useMcp({ url: 'https://mcp-server.example.com' })

if (state === 'connecting') {
  return &lt;div&gt;Establishing connection...&lt;/div&gt;
}
if (state === 'ready') {
  return &lt;div&gt;Connected and ready!&lt;/div&gt;
}
if (state === 'failed') {
  return &lt;div&gt;Connection failed&lt;/div&gt;
}</code></pre>
            
    <div>
      <h3>Authentication &amp; authorization</h3>
      <a href="#authentication-authorization">
        
      </a>
    </div>
    <p>Many MCP servers require some form of authentication in order to make tool calls. <code>use-mcp</code> supports <a href="https://oauth.net/2.1/"><u>OAuth 2.1</u></a> and handles the entire OAuth flow. It redirects users to the login page, allows them to grant access, securely stores the access token returned by the OAuth provider, and uses it for all subsequent requests to the server. The library also provides <a href="https://github.com/modelcontextprotocol/use-mcp/tree/main?tab=readme-ov-file#api-reference"><u>methods</u></a> for users to revoke access and clear stored credentials. This gives you a complete authentication system that allows you to securely connect to remote MCP servers, without writing any of the logic. </p>
            <pre><code>const { clearStorage } = useMcp({ url: 'https://mcp-server.example.com' })

// Revoke access and clear stored credentials
const handleLogout = () =&gt; {
  clearStorage() // Removes all stored tokens, client info, and auth state
}</code></pre>
            
    <div>
      <h3>Dynamic tool discovery</h3>
      <a href="#dynamic-tool-discovery">
        
      </a>
    </div>
    <p>When you connect to an MCP server, <code>use-mcp</code> fetches the tools that the server exposes. If the server adds new capabilities, your app will see them without any code changes. Each tool provides type-safe metadata about its required inputs and functionality, so your client can automatically validate user input and make the right tool calls.</p>
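    <p>To make that concrete, here’s a rough sketch of what that validation step can look like in plain TypeScript. The <code>McpTool</code> shape below is simplified for illustration, and <code>validateArgs</code> is a hypothetical helper, not part of <code>use-mcp</code>: </p>
            <pre><code>// Simplified shape of a discovered MCP tool (hypothetical types for illustration)
interface ToolProperty { type: string }
interface ToolSchema {
  type: string
  properties?: { [key: string]: ToolProperty }
  required?: string[]
}
interface McpTool {
  name: string
  description?: string
  inputSchema: ToolSchema
}

// Hypothetical helper, not part of use-mcp: collect validation errors
// for a set of arguments before making a tool call.
function validateArgs(tool: McpTool, args: { [key: string]: unknown }): string[] {
  const errors: string[] = []
  const required = tool.inputSchema.required || []
  for (const key of required) {
    if (!(key in args)) {
      errors.push('missing required argument: ' + key)
    }
  }
  const properties = tool.inputSchema.properties || {}
  for (const key of Object.keys(args)) {
    const expected = properties[key]
    if (expected) {
      if (typeof args[key] !== expected.type) {
        errors.push('argument ' + key + ' should be of type ' + expected.type)
      }
    }
  }
  return errors
}</code></pre>
            <p>A client could run a check like this against a tool’s input schema before calling <code>callTool</code>, surfacing problems to the user instead of sending a request that will fail. </p>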
    <div>
      <h3>Debugging &amp; monitoring capabilities</h3>
      <a href="#debugging-monitoring-capabilities">
        
      </a>
    </div>
    <p>To help you troubleshoot MCP integrations, <code>use-mcp</code> exposes a <code>log</code> array containing structured messages at debug, info, warn, and error levels, each with a timestamp. You can enable detailed logging with the <code>debug</code> option to track tool calls, authentication flows, connection state changes, and errors. This real-time visibility makes it easier to diagnose issues in both development and production. </p>
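    <p>As a rough illustration of how you might consume that log, here’s a small sketch in plain TypeScript. The <code>LogEntry</code> shape is simplified (the exact fields are documented in the use-mcp README), and both helpers are hypothetical, not part of the library: </p>
            <pre><code>// Simplified shape of a use-mcp log entry, as described above
interface LogEntry {
  level: 'debug' | 'info' | 'warn' | 'error'
  message: string
  timestamp: number
}

// Hypothetical helper: format one entry for a troubleshooting panel
function formatLogEntry(entry: LogEntry): string {
  const time = new Date(entry.timestamp).toISOString()
  return '[' + time + '] ' + entry.level.toUpperCase() + ': ' + entry.message
}

// Hypothetical helper: keep only warnings and errors
function problemsOnly(log: LogEntry[]): LogEntry[] {
  return log.filter(function (entry) {
    return entry.level === 'warn' || entry.level === 'error'
  })
}</code></pre>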
    <div>
      <h3>Future-proofed &amp; backwards compatible</h3>
      <a href="#future-proofed-backwards-compatible">
        
      </a>
    </div>
    <p>MCP is evolving rapidly, with recent updates to transport mechanisms and upcoming changes to authorization. <code>use-mcp</code> supports both Server-Sent Events (SSE) and the newer Streamable HTTP transport, automatically detecting and upgrading to newer protocols when supported by the MCP server. </p><p>As the MCP specification continues to evolve, we'll keep the library updated with the latest standards, while maintaining backwards compatibility. We are also excited to contribute <code>use-mcp</code> to the <a href="https://github.com/modelcontextprotocol/"><u>MCP project</u></a>, so it can grow with help from the wider community.</p>
    <div>
      <h3>MCP Inspector, built with use-mcp</h3>
      <a href="#mcp-inspector-built-with-use-mcp">
        
      </a>
    </div>
    <p>In use-mcp’s <a href="https://github.com/modelcontextprotocol/use-mcp/tree/main/examples"><u>examples directory</u></a>, you’ll find a minimal <a href="https://inspector.use-mcp.dev/"><u>MCP Inspector</u></a> built with the <code>use-mcp</code> hook. Enter any MCP server URL to test connections, see available tools, and monitor interactions through the debug logs. It's a great starting point for building your own MCP clients, or a tool you can use to debug connections to your MCP server. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PmPcZicaO39x9SuRqzqSX/b6caa6c7af1d6b03f17c41771598d1b5/image1.png" />
          </figure><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/modelcontextprotocol/use-mcp/tree/main/examples/inspector"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>Open-sourcing the AI Playground </h2>
      <a href="#open-sourcing-the-ai-playground">
        
      </a>
    </div>
    <p>We initially built the <a href="https://playground.ai.cloudflare.com/"><u>AI Playground</u></a> to give users a chat interface for testing different AI models supported by Workers AI. We then added MCP support, so it could be used as a remote MCP client to connect to and test MCP servers. Today, we're open-sourcing the playground, giving you the complete chat interface with the MCP client built in, so you can deploy it yourself and customize it to fit your needs. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/playground/ai"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>The playground comes with built-in support for the latest MCP standards, including both Streamable HTTP and Server-Sent Events transport methods, OAuth authentication flows that allow users to sign in and grant permissions, as well as support for bearer token authentication for direct MCP server connections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iaUzuxBZafrH1q0VYHTJf/a7585da38f75818111b3521c9a5ef4e3/image2.png" />
          </figure>
    <div>
      <h3>How the AI Playground works</h3>
      <a href="#how-the-ai-playground-works">
        
      </a>
    </div>
    <p>The AI Playground is built on Workers AI, giving you access to a full catalog of large language models (LLMs) running on Cloudflare's network, combined with the Agents SDK and <code>use-mcp</code> library for MCP server connections.</p><p>The AI Playground uses the <code>use-mcp</code> library to manage connections to remote MCP servers. When the playground starts up, it initializes the MCP connection system with <code>const { tools: mcpTools } = useMcp()</code>, which provides access to all tools from connected servers. At first, this list is empty because it’s not connected to any MCP servers, but once a connection to a remote MCP server is established, the tools are automatically discovered and populated into the list. </p><p>Once <a href="https://github.com/cloudflare/ai/blob/af1ce8be87d6a4e6bc10bb83f7959e63b28c1c8e/playground/ai/src/McpServers.tsx#L550"><u>connected</u></a>, the playground immediately has access to any tools that the MCP server exposes. The <code>use-mcp</code> library handles all the protocol communication and tool discovery, and maintains the connection state. If the MCP server requires authentication, the playground handles OAuth flows through a dedicated callback page that uses <code>onMcpAuthorization</code> from <code>use-mcp</code> to complete the authentication process.</p><p>When a user sends a chat message, the playground takes the <code>mcpTools</code> from the <code>use-mcp</code> hook and passes them directly to Workers AI, enabling the model to understand what capabilities are available and invoke them as needed. </p>
            <pre><code>const stream = useChat({
  api: "/api/inference",
  body: {
    model: params.model,
    tools: mcpTools, // Tools from connected MCP servers
    max_tokens: params.max_tokens,
    system_message: params.system_message,
  },
})</code></pre>
            
    <div>
      <h3>Debugging and monitoring</h3>
      <a href="#debugging-and-monitoring">
        
      </a>
    </div>
    <p>To monitor and debug connections to MCP servers, we’ve added a Debug Log interface to the playground. This displays real-time information about MCP server connections, including connection status, authentication state, and any connection errors. </p><p>During chat interactions, the debug interface shows the raw messages exchanged between the playground and the MCP server, including each tool invocation and its result. This allows you to inspect the JSON payload sent to the MCP server and the raw response returned, and to track whether a tool call succeeded or failed. This is especially helpful for anyone building remote MCP servers, as it lets you see how your tools behave when integrated with different language models. </p>
    <div>
      <h2>Contributing to the MCP ecosystem</h2>
      <a href="#contributing-to-the-mcp-ecosystem">
        
      </a>
    </div>
    <p>One of the reasons why MCP has evolved so quickly is that it's an open source project, powered by the community. We're excited to contribute the <code>use-mcp</code> library to the <a href="https://github.com/modelcontextprotocol"><u>MCP ecosystem</u></a> to enable more developers to build remote MCP clients. </p><p>If you're looking for examples of MCP clients or MCP servers to get started with, check out the <a href="https://github.com/cloudflare/ai"><u>Cloudflare AI GitHub repository</u></a> for working examples you can deploy and modify. This includes the complete AI Playground <a href="https://github.com/cloudflare/ai/tree/main/playground/ai"><u>source code</u></a>, a number of remote MCP servers that use different authentication &amp; authorization providers, and the <a href="https://github.com/cloudflare/ai/tree/main/demos/use-mcp-inspector"><u>MCP Inspector</u></a>. </p><p>We’re also building the <a href="https://github.com/cloudflare/mcp-server-cloudflare"><u>Cloudflare MCP servers</u></a> in public and welcome contributions to help make them better. </p><p>Whether you're building your first MCP server, integrating MCP into an existing application, or contributing to the broader ecosystem, we'd love to hear from you. If you have any questions, feedback, or ideas for collaboration, you can reach us via email at <a href="#"><u>1800-mcp@cloudflare.com</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7sYYS9c45orRX6SaUw5qTx/b975c5221ab538cc8f1167b706da375f/image3.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">4gk3k2ZiTN6DZoHu3e090r</guid>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
            <dc:creator>Sunil Pai</dc:creator>
        </item>
        <item>
            <title><![CDATA[Build and deploy Remote Model Context Protocol (MCP) servers to Cloudflare]]></title>
            <link>https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/</link>
            <pubDate>Tue, 25 Mar 2025 13:59:00 GMT</pubDate>
            <description><![CDATA[ You can now build and deploy remote MCP servers to Cloudflare, and we handle the hard parts of building remote MCP servers for you. ]]></description>
            <content:encoded><![CDATA[ <p>It feels like almost everyone building AI applications and <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agents</a> is talking about the <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/">Model Context Protocol</a> (MCP), as well as building MCP servers that you install and run locally on your own computer.</p><p>You can now <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>build and deploy remote MCP servers</u></a> to Cloudflare. We’ve added four things to Cloudflare that handle the hard parts of building remote MCP servers for you:</p><ol><li><p><a href="https://developers.cloudflare.com/agents/model-context-protocol/authorization"><u>workers-oauth-provider</u></a> — an <a href="https://www.cloudflare.com/learning/access-management/what-is-oauth/"><u>OAuth</u></a> Provider that makes authorization easy</p></li><li><p><a href="https://developers.cloudflare.com/agents/model-context-protocol/tools/"><u>McpAgent</u></a> — a class built into the <a href="https://developers.cloudflare.com/agents/"><u>Cloudflare Agents SDK</u></a> that handles remote transport</p></li><li><p><a href="https://developers.cloudflare.com/agents/guides/test-remote-mcp-server/"><u>mcp-remote</u></a> — an adapter that lets MCP clients that otherwise only support local connections work with remote MCP servers</p></li><li><p><a href="https://playground.ai.cloudflare.com/"><u>AI playground as a remote MCP client</u></a> — a chat interface that allows you to connect to remote MCP servers, with the authentication check included</p></li></ol><p>The button below, or the <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>developer docs</u></a>, will get you up and running in production with <a href="https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-server"><u>this example MCP server</u></a> in less than two minutes:</p><a 
href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/ai/tree/main/demos/remote-mcp-server"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Unlike the local MCP servers you may have previously used, remote MCP servers are accessible on the Internet. People simply sign in and grant permissions to MCP clients using familiar authorization flows. We think this is going to be a massive deal — connecting coding agents to MCP servers has blown developers’ minds over the past few months, and remote MCP servers have the same potential to open up similar new ways of working with LLMs and agents to a much wider audience, including more everyday consumer use cases.</p>
    <div>
      <h2>From local to remote — bringing MCP to the masses</h2>
      <a href="#from-local-to-remote-bringing-mcp-to-the-masses">
        
      </a>
    </div>
    <p>MCP is quickly becoming the common protocol that enables LLMs to go beyond <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/"><u>inference</u></a> and <a href="https://developers.cloudflare.com/reference-architecture/diagrams/ai/ai-rag/"><u>RAG</u></a>, and take actions that require access beyond the AI application itself (like sending an email, deploying a code change, publishing blog posts, you name it). It enables AI agents (MCP clients) to access tools and resources from external services (MCP servers).</p><p>To date, MCP has been limited to running locally on your own machine — if you want to access a tool on the web using MCP, it’s up to you to set up the server locally. You haven’t been able to use MCP from web-based interfaces or mobile apps, and there hasn’t been a way to let people authenticate and grant the MCP client permission. Effectively, MCP servers haven’t yet been brought online.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EyiTXzB4FvBs2zEfzuNTp/5ce4b55457348e9ab83e6d9cf35d8c3c/image7.png" />
          </figure><p>Support for <a href="https://spec.modelcontextprotocol.io/specification/draft/basic/transports/#streamable-http"><u>remote MCP connections</u></a> changes this. It creates the opportunity to reach a wider audience of Internet users who aren’t going to install and run MCP servers locally for use with desktop apps. Remote MCP support is like the transition from desktop software to web-based software. People expect to continue tasks across devices, to log in, and to have things just work. Local MCP is great for developers, but remote MCP connections are the missing piece to reach everyone on the Internet.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bI7rJtLh89jmZaibSgiLl/e426f93616a8210d80b979c47d89dc75/image4.png" />
          </figure>
    <div>
      <h2>Making authentication and authorization just work with MCP</h2>
      <a href="#making-authentication-and-authorization-just-work-with-mcp">
        
      </a>
    </div>
    <p>Beyond just changing the transport layer — from <a href="https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio"><u>stdio</u></a> to <a href="https://github.com/modelcontextprotocol/specification/pull/206"><u>streamable HTTP</u></a> — when you build a remote MCP server that uses information from the end user’s account, you need <a href="https://www.cloudflare.com/learning/access-management/authn-vs-authz/"><u>authentication and authorization</u></a>. You need a way to allow users to log in and prove who they are (authentication), and a way for users to control what the AI agent will be able to access when using a service (authorization).</p><p>MCP does this with <a href="https://oauth.net/2/"><u>OAuth</u></a>, the standard protocol that allows users to grant applications access to their information or services without sharing passwords. Here, the MCP server itself acts as the OAuth Provider. However, OAuth with MCP is hard to implement yourself, so when you build MCP servers on Cloudflare, we provide it for you.</p>
    <div>
      <h3>workers-oauth-provider — an OAuth 2.1 Provider library for Cloudflare Workers</h3>
      <a href="#workers-oauth-provider-an-oauth-2-1-provider-library-for-cloudflare-workers">
        
      </a>
    </div>
    <p>When you <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>deploy an MCP Server</u></a> to Cloudflare, your Worker acts as an OAuth Provider, using <a href="https://github.com/cloudflare/workers-oauth-provider"><u>workers-oauth-provider</u></a>, a new TypeScript library that wraps your Worker’s code, adding authorization to API endpoints, including (but not limited to) MCP server API endpoints.</p><p>Your MCP server will receive the already-authenticated user details as a parameter. You don’t need to perform any checks of your own, or directly manage tokens. You can still fully control how you authenticate users: from what UI they see when they log in, to which provider they use to log in. You can choose to bring your own third-party authentication and authorization providers like Google or GitHub, or integrate with your own.</p><p>The complete <a href="https://spec.modelcontextprotocol.io/specification/draft/basic/authorization/"><u>MCP OAuth flow</u></a> looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VTPBfZ4hRPdq2TWE5VOjS/00abc97e4beedf59a4101957612fd503/image5.png" />
          </figure><p>Here, your MCP server acts as both an OAuth client to your upstream service, <i>and</i> as an OAuth server (also referred to as an OAuth “provider”) to MCP clients. You can use any upstream authentication flow you want, but workers-oauth-provider guarantees that your MCP server is <a href="https://spec.modelcontextprotocol.io/specification/draft/basic/authorization"><u>spec-compliant</u></a> and able to work with the full range of client apps &amp; websites. This includes support for Dynamic Client Registration (<a href="https://datatracker.ietf.org/doc/html/rfc7591"><u>RFC 7591</u></a>) and Authorization Server Metadata (<a href="https://datatracker.ietf.org/doc/html/rfc8414"><u>RFC 8414</u></a>).</p>
    <div>
      <h3>A simple, pluggable interface for OAuth</h3>
      <a href="#a-simple-pluggable-interface-for-oauth">
        
      </a>
    </div>
    <p>When you build an MCP server with Cloudflare Workers, you provide the OAuth Provider with the paths to your authorization, token, and client registration endpoints, along with <a href="https://developers.cloudflare.com/workers/runtime-apis/handlers/fetch/"><u>handlers</u></a> for your MCP server and for auth:</p>
            <pre><code>import OAuthProvider from "@cloudflare/workers-oauth-provider";
import MyMCPServer from "./my-mcp-server";
import MyAuthHandler from "./auth-handler";

export default new OAuthProvider({
  apiRoute: "/sse", // MCP clients connect to your server at this route
  apiHandler: MyMCPServer.mount('/sse'), // Your MCP Server implementation
  defaultHandler: MyAuthHandler, // Your authentication implementation
  authorizeEndpoint: "/authorize",
  tokenEndpoint: "/token",
  clientRegistrationEndpoint: "/register",
});</code></pre>
            <p>This abstraction lets you easily plug in your own authentication. Take a look at <a href="https://github.com/cloudflare/ai/blob/main/demos/remote-mcp-github-oauth/src/github-handler.ts"><u>this example</u></a> that uses GitHub as the identity provider for an MCP server, in less than 100 lines of code, by implementing /callback and /authorize routes.</p>
    <div>
      <h3>Why do MCP servers issue their own tokens?</h3>
      <a href="#why-do-mcp-servers-issue-their-own-tokens">
        
      </a>
    </div>
    <p>You may have noticed in the authorization diagram above, and in the <a href="https://spec.modelcontextprotocol.io/specification/draft/basic/authorization"><u>authorization section</u></a> of the MCP spec, that the MCP server issues its own token to the MCP client.</p><p>Instead of passing the token it receives from the upstream provider directly to the MCP client, your Worker stores an encrypted access token in <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>. It then issues its own token to the client. This is handled on your behalf by workers-oauth-provider — your code never directly handles writing this token, preventing mistakes. You can see this in the following snippet from the <a href="https://github.com/cloudflare/ai/blob/main/demos/remote-mcp-github-oauth/src/github-handler.ts"><u>GitHub example</u></a> above:</p>
            <pre><code>  // When you call completeAuthorization, the accessToken you pass to it
  // is encrypted and stored, and never exposed to the MCP client
  // A new, separate token is generated and provided to the client at the /token endpoint
  const { redirectTo } = await c.env.OAUTH_PROVIDER.completeAuthorization({
    request: oauthReqInfo,
    userId: login,
    metadata: { label: name },
    scope: oauthReqInfo.scope,
    props: {
      accessToken,  // Stored encrypted, never sent to MCP client
    },
  })

  return Response.redirect(redirectTo)</code></pre>
            <p>On the surface, this indirection might sound more complicated. Why does it work this way?</p><p>By issuing its own token, an MCP server can restrict access and enforce more granular controls than the upstream provider. If a token you issue to an MCP client is compromised, the attacker only gets the limited permissions you've explicitly granted through your MCP tools, not the full access of the original token.</p><p>Let’s say your MCP server requests that the user authorize permission to read emails from their Gmail account, using the <a href="https://developers.google.com/identity/protocols/oauth2/scopes#gmail"><u>gmail.readonly scope</u></a>. The tool that the MCP server exposes is narrower, and only allows reading travel booking notifications from a limited set of senders, to handle a question like “What’s the check-out time for my hotel room tomorrow?” You can enforce this constraint in your MCP server, and if the token you issue to the MCP client is compromised, an attacker cannot use it to read arbitrary emails, because it is a token for your MCP server, not the raw token for the upstream provider (Google). They can only call the tools your MCP server provides. OWASP calls out <a href="https://genai.owasp.org/llmrisk/llm062025-excessive-agency/"><u>“Excessive Agency”</u></a> as one of the top risk factors for building AI applications, and by issuing its own token to the client and enforcing constraints, your MCP server can limit tool access to only what the client needs.</p><p>Or, building off the earlier GitHub example, you can enforce that only a specific user is allowed to access a particular tool. In the example below, only users on an allowlist can see or call the generateImage tool, which uses <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to generate an image based on a prompt:</p>
            <pre><code>import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const USER_ALLOWLIST = ["geelen"];

export class MyMCP extends McpAgent&lt;Props, Env&gt; {
  server = new McpServer({
    name: "Github OAuth Proxy Demo",
    version: "1.0.0",
  });

  async init() {
    // Dynamically add tools based on the user's identity
    if (USER_ALLOWLIST.includes(this.props.login)) {
      this.server.tool(
        'generateImage',
        'Generate an image using the flux-1-schnell model.',
        {
          prompt: z.string().describe('A text description of the image you want to generate.')
        },
        async ({ prompt }) =&gt; {
          const response = await this.env.AI.run('@cf/black-forest-labs/flux-1-schnell', { 
            prompt, 
            steps: 8 
          })
          return {
            content: [{ type: 'image', data: response.image!, mimeType: 'image/jpeg' }],
          }
        }
      )
    }
  }
}
</code></pre>
            
    <div>
      <h2>Introducing McpAgent: remote transport support that works today, and will work with the revision to the MCP spec</h2>
      <a href="#introducing-mcpagent-remote-transport-support-that-works-today-and-will-work-with-the-revision-to-the-mcp-spec">
        
      </a>
    </div>
    <p>The next step to opening up MCP beyond your local machine is to open up a remote transport layer for communication. MCP servers you run on your local machine just communicate over <a href="https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio"><u>stdio</u></a>, but for an MCP server to be callable over the Internet, it must implement <a href="https://spec.modelcontextprotocol.io/specification/draft/basic/transports/#http-with-sse"><u>remote transport</u></a>.</p><p>The <a href="https://github.com/cloudflare/agents/blob/2f82f51784f4e27292249747b5fbeeef94305552/packages/agents/src/mcp.ts"><u>McpAgent</u></a> class we introduced today as part of our <a href="https://github.com/cloudflare/agents"><u>Agents SDK</u></a> handles this for you, using <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> behind the scenes to hold a persistent connection open, so that the MCP client can send <a href="https://modelcontextprotocol.io/docs/concepts/transports#server-sent-events-sse"><u>server-sent events (SSE)</u></a> to your MCP server. You don’t have to write code to deal with transport or serialization yourself. A minimal MCP server in 15 lines of code can look like this:</p>
            <pre><code>import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

export class MyMCP extends McpAgent {
  server = new McpServer({
    name: "Demo",
    version: "1.0.0",
  });
  async init() {
    this.server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) =&gt; ({
      content: [{ type: "text", text: String(a + b) }],
    }));
  }
}</code></pre>
            <p>After much <a href="https://github.com/modelcontextprotocol/specification/discussions/102"><u>discussion</u></a>, remote transport in the MCP spec is changing, with <a href="https://github.com/modelcontextprotocol/specification/pull/206"><u>Streamable HTTP replacing HTTP+SSE</u></a>. This allows for stateless, pure HTTP connections to MCP servers, with an option to upgrade to SSE, and removes the need for the MCP client to send messages to a separate endpoint from the one it first connects to. The McpAgent class will change with it and just work with Streamable HTTP, so that you don’t have to start over to support the revision to how transport works.</p><p>This applies to future iterations of transport as well. Today, the vast majority of MCP servers only expose tools, which are simple <a href="https://en.wikipedia.org/wiki/Remote_procedure_call"><u>remote procedure call (RPC)</u></a> methods that can be provided by a stateless transport. But more complex human-in-the-loop and agent-to-agent interactions will need <a href="https://modelcontextprotocol.io/docs/concepts/prompts"><u>prompts</u></a> and <a href="https://modelcontextprotocol.io/docs/concepts/sampling"><u>sampling</u></a>. We expect these types of chatty, two-way interactions will need to be real-time, which will be challenging to do well without a bidirectional transport layer. When that time comes, Cloudflare, the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>, and Durable Objects all natively support <a href="https://developers.cloudflare.com/durable-objects/best-practices/websockets/"><u>WebSockets</u></a>, which enable full-duplex, bidirectional real-time communication.</p>
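            <p>To make the single-endpoint model concrete, here is a purely illustrative sketch (not the Agents SDK or the MCP SDK implementation; all names are hypothetical). With Streamable HTTP, every JSON-RPC message arrives at the same URL, so a server routes on the message’s method field rather than on separate HTTP endpoints:</p>

```typescript
// Hypothetical sketch, not the MCP SDK: with Streamable HTTP every JSON-RPC
// message is POSTed to one endpoint, so routing happens on `method`.
interface JsonRpcMessage {
  jsonrpc: string;
  id: number;
  method: string;
  params?: unknown;
}

export function dispatch(msg: JsonRpcMessage) {
  switch (msg.method) {
    case "tools/list":
      // A real server derives this list from its registered tools.
      return { jsonrpc: "2.0", id: msg.id, result: { tools: [{ name: "add" }] } };
    case "ping":
      return { jsonrpc: "2.0", id: msg.id, result: {} };
    default:
      // Standard JSON-RPC "method not found" error code.
      return {
        jsonrpc: "2.0",
        id: msg.id,
        error: { code: -32601, message: "Method not found" },
      };
  }
}
```

            <p>Because each message is a self-contained HTTP exchange, a server like this can run statelessly, and only upgrade to SSE when it actually needs to stream.</p>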
    <div>
      <h2>Stateful, agentic MCP servers</h2>
      <a href="#stateful-agentic-mcp-servers">
        
      </a>
    </div>
    <p>When you build MCP servers on Cloudflare, each MCP client session is backed by a Durable Object, via the <a href="https://developers.cloudflare.com/agents/"><u>Agents SDK</u></a>. This means each session can manage and persist its own state, <a href="https://developers.cloudflare.com/agents/api-reference/store-and-sync-state/"><u>backed by its own SQL database</u></a>.</p><p>This opens the door to building stateful MCP servers. Rather than just acting as a stateless layer between a client app and an external API, MCP servers on Cloudflare can themselves be stateful applications — games, a shopping cart plus checkout flow, a <a href="https://github.com/modelcontextprotocol/servers/tree/main/src/memory"><u>persistent knowledge graph</u></a>, or anything else you can dream up. When you build on Cloudflare, MCP servers can be much more than a layer in front of your REST API.</p><p>To understand the basics of how this works, let’s look at a minimal example that increments a counter:</p>
            <pre><code>import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

type State = { counter: number }

export class MyMCP extends McpAgent&lt;Env, State, {}&gt; {
  server = new McpServer({
    name: "Demo",
    version: "1.0.0",
  });

  initialState: State = {
    counter: 1,
  }

  async init() {
    this.server.resource(`counter`, `mcp://resource/counter`, (uri) =&gt; {
      return {
        contents: [{ uri: uri.href, text: String(this.state.counter) }],
      }
    })

    this.server.tool('add', 'Add to the counter, stored in the MCP', { a: z.number() }, async ({ a }) =&gt; {
      this.setState({ ...this.state, counter: this.state.counter + a })

      return {
        content: [{ type: 'text', text: `Added ${a}, total is now ${this.state.counter}` }],
      }
    })
  }

  onStateUpdate(state: State) {
    console.log({ stateUpdate: state })
  }

}</code></pre>
            <p>For a given session, the MCP server above will remember the state of the counter across tool calls.</p><p>From within an MCP server, you can use Cloudflare’s whole developer platform, and have your MCP server <a href="https://developers.cloudflare.com/agents/api-reference/browse-the-web/"><u>spin up its own web browser</u></a>, <a href="https://developers.cloudflare.com/agents/api-reference/run-workflows/"><u>trigger a Workflow</u></a>, <a href="https://developers.cloudflare.com/agents/api-reference/using-ai-models/"><u>call AI models</u></a>, and more. We’re excited to see the MCP ecosystem evolve into more advanced use cases.</p>
    <div>
      <h2>Connect to remote MCP servers from MCP clients that today only support local MCP</h2>
      <a href="#connect-to-remote-mcp-servers-from-mcp-clients-that-today-only-support-local-mcp">
        
      </a>
    </div>
    <p>Cloudflare is supporting remote MCP early — before the most prominent MCP client applications support remote, authenticated MCP, and before other platforms support remote MCP. We’re doing this to give you a head start building for where MCP is headed.</p><p>But if you build a remote MCP server today, this presents a challenge — how can people start using your MCP server if there aren’t MCP clients that support remote MCP?</p><p>We have two new tools that allow you to test your remote MCP server and simulate how users will interact with it in the future:</p><p>We updated the <a href="https://playground.ai.cloudflare.com/"><u>Workers AI Playground</u></a> to be a fully remote MCP client that allows you to connect to any remote MCP server with built-in authentication support. This online chat interface lets you immediately test your remote MCP servers without having to install anything on your device. Instead, just enter the remote MCP server’s URL (e.g. https://remote-server.example.com/sse) and click Connect.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4N64nJHJiQygmMdSK7clIs/c0bf8c64f1607674f81be10c3871a64b/image1.png" />
          </figure><p>Once you click Connect, you’ll go through the authentication flow (if you set one up), and afterwards you will be able to interact with the MCP server tools directly from the chat interface.</p><p>If you prefer to use a client like Claude Desktop or Cursor that already supports MCP but doesn’t yet handle remote connections with authentication, you can use <a href="https://www.npmjs.com/package/mcp-remote"><u>mcp-remote</u></a>. mcp-remote is an adapter that lets MCP clients that otherwise only support local connections work with remote MCP servers. This gives you and your users the ability to preview what interactions with your remote MCP server will be like from the tools you’re already using today, without having to wait for the client to support remote MCP natively. </p><p>We’ve <a href="https://developers.cloudflare.com/agents/guides/test-remote-mcp-server/"><u>published a guide</u></a> on how to use mcp-remote with popular MCP clients including Claude Desktop, Cursor, and Windsurf. In Claude Desktop, you add the following to your configuration file:</p>
            <pre><code>{
  "mcpServers": {
    "remote-example": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://remote-server.example.com/sse"
      ]
    }
  }
}</code></pre>
            
    <div>
      <h2>1800-mcp@cloudflare.com — start building remote MCP servers today</h2>
      <a href="#1800-mcp-cloudflare-com-start-building-remote-mcp-servers-today">
        
      </a>
    </div>
    <p>Remote Model Context Protocol (MCP) is coming! When client apps support remote MCP servers, the audience of people who can use them opens up from just us, developers, to the rest of the population — who may never even know what MCP is or stands for. </p><p>Building a remote MCP server is the way to bring your service into the AI assistants and tools that millions of people use. We’re excited to see that many of the biggest companies on the Internet are busy building MCP servers right now, and we are curious about the businesses that pop up in an agent-first, MCP-native way.</p><p>On Cloudflare, <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>you can start building today</u></a>. We’re ready for you, and ready to help build with you. Email us at <a href="#"><u>1800-mcp@cloudflare.com</u></a>, and we’ll help get you going. There’s lots more to come with MCP, and we’re excited to see what you build.</p>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[Agents]]></category>
            <guid isPermaLink="false">4e3J8mxEIN24iNKfw9ToEH</guid>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
        <item>
            <title><![CDATA[Hi Claude, build an MCP server on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/model-context-protocol/</link>
            <pubDate>Fri, 20 Dec 2024 14:50:00 GMT</pubDate>
            <description><![CDATA[ Want Claude to interact with your app directly? Build an MCP server on Cloudflare Workers, enabling you to connect your service directly, allowing Claude to understand and run tasks on your behalf. ]]></description>
            <content:encoded><![CDATA[ <p>In late November 2024, Anthropic <a href="https://www.anthropic.com/news/model-context-protocol"><u>announced</u></a> a new way to interact with AI, called Model Context Protocol (MCP). Today, we’re excited to show you how to use MCP in combination with Cloudflare to extend the capabilities of Claude to build applications, generate images and more. You’ll learn how to build an MCP server on Cloudflare to make any service accessible through an AI assistant like Claude with just a few lines of code using Cloudflare Workers. </p>
    <div>
      <h2>A quick primer on the Model Context Protocol (MCP)</h2>
      <a href="#a-quick-primer-on-the-model-context-protocol-mcp">
        
      </a>
    </div>
    <p>MCP is an open standard that provides a universal way for LLMs to interact with services and applications. As the introduction on the <a href="https://modelcontextprotocol.io/introduction"><u>MCP website</u></a> puts it, </p><blockquote><p><i>“Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.”</i> </p></blockquote><p>From an architectural perspective, MCP is composed of several components:</p><ul><li><p><b>MCP hosts</b>: Programs or tools (like Claude) where AI models operate and interact with different services</p></li><li><p><b>MCP clients</b>: The client within an AI assistant that initiates requests and communicates with MCP servers to perform tasks or access resources</p></li><li><p><b>MCP servers</b>: Lightweight programs that each expose the capabilities of a service</p></li><li><p><b>Local data sources</b>: Files, databases, and services on your computer that MCP servers can securely access</p></li><li><p><b>Remote services</b>: External Internet-connected systems that MCP servers can connect to through APIs</p></li></ul><p>Imagine you ask Claude to send a message in a Slack channel. Before Claude can do this, Slack must communicate which tools are available. It does this by defining tools — such as “list channels”, “post messages”, and “reply to thread” — in the MCP server. Once the MCP client knows what tools it should invoke, it can complete the task. All you have to do is tell it what you need, and it will get it done. </p>
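    <p>To make the Slack example concrete, here is an illustrative sketch of the kind of tool definitions an MCP server advertises to clients. The names and schemas below are hypothetical, not Slack’s actual MCP server:</p>

```typescript
// Hypothetical tool definitions a Slack MCP server might advertise in response
// to a tools/list request; names and schemas are illustrative only.
export const slackTools = [
  {
    name: "post_message",
    description: "Post a message to a Slack channel",
    inputSchema: {
      type: "object",
      properties: {
        channel: { type: "string", description: "Channel ID" },
        text: { type: "string", description: "Message text" },
      },
      required: ["channel", "text"],
    },
  },
  {
    name: "list_channels",
    description: "List the channels in the workspace",
    inputSchema: { type: "object", properties: {} },
  },
];

// The MCP client reads these descriptions to decide which tool to invoke.
export function findTool(name: string) {
  return slackTools.find(function (t) {
    return t.name === name;
  });
}
```

    <p>The descriptions do double duty: they document the tool for humans, and they are what the model reads when deciding which tool satisfies a request like “send a message to #general”.</p>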
    <div>
      <h2>Allowing AI to not just generate, but deploy applications for you</h2>
      <a href="#allowing-ai-to-not-just-generate-but-deploy-applications-for-you">
        
      </a>
    </div>
    <p>What makes MCP so powerful? As a quick example, by combining it with a platform like Cloudflare Workers, it allows Claude users to deploy a Cloudflare Worker in just one sentence, resulting in a site like <a href="https://joke-site.dinas.workers.dev/"><u>this</u></a>: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JNebermBM0YNwpxqoMTj2/65224c915a3d12c4f8d11a4228855bf7/image1.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KImc4ydvEzg8Rf0I0KRkQ/b87b4cca33df242eeabcae9e237e9fb5/image3.png" />
          </figure><p>But that’s just one example. Today, we’re excited to show you how you can build and deploy your own MCP server to allow your users to interact with your application directly from an LLM like Claude, and how you can do that just by writing a Cloudflare Worker.</p>
    <div>
      <h2>Simplifying your MCP Server deployment with workers-mcp</h2>
      <a href="#simplifying-your-mcp-server-deployment-with-workers-mcp">
        
      </a>
    </div>
    <p>The new <a href="https://github.com/cloudflare/workers-mcp"><u>workers-mcp</u></a> tooling handles the translation between your code and the MCP standard, so that you don’t have to do the maintenance work to get it set up.</p><p>Once you create your Worker and install the MCP tooling, you’ll get a workers-mcp template set up for you. This boilerplate removes the overhead of configuring the MCP server yourself:</p>
            <pre><code>import { WorkerEntrypoint } from 'cloudflare:workers'
import { ProxyToSelf } from 'workers-mcp'
export default class MyWorker extends WorkerEntrypoint&lt;Env&gt; {
  /**
   * A warm, friendly greeting from your new Workers MCP server.
   * @param name {string} the name of the person we are greeting.
   * @return {string} the contents of our greeting.
   */
  sayHello(name: string) {
    return `Hello from an MCP Worker, ${name}!`
  }
  /**
   * @ignore
   **/
  async fetch(request: Request): Promise&lt;Response&gt; {
    return new ProxyToSelf(this).fetch(request)
  }
}</code></pre>
            <p>Let’s unpack what’s happening here. The ProxyToSelf logic ensures that your Worker is wired up to respond as an MCP server, without any complex routing or schema definitions. </p><p>It also provides tool definition with <a href="https://jsdoc.app/"><u>JSDoc</u></a>. You’ll notice that the `sayHello` method is annotated with JSDoc comments describing what it does, what arguments it takes, and what it returns. These comments aren’t just for human readers: they’re also used to generate documentation that your AI assistant (Claude) can understand. </p>
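            <p>The idea can be illustrated with a toy parser (a sketch only; workers-mcp’s real implementation is more thorough): a JSDoc comment carries both a human-readable description and the parameter names, which is enough to derive a machine-readable tool description:</p>

```typescript
// Toy sketch of the concept, not workers-mcp's actual parser: derive a tool
// description from a JSDoc comment's free text and @param tags.
export function describeTool(jsdoc: string) {
  const lines = jsdoc.split("\n").map(function (line) {
    // Strip leading comment punctuation ("/**", "*", "*/") and whitespace.
    return line.replace(/^[\s/*]+/, "").trim();
  });
  // Free text (lines that aren't tags) becomes the tool description.
  const description = lines
    .filter(function (line) {
      return line.length !== 0 && line.indexOf("@") !== 0;
    })
    .join(" ");
  // Each "@param name {type} ..." tag contributes a parameter name.
  const params = lines
    .filter(function (line) {
      return line.indexOf("@param") === 0;
    })
    .map(function (line) {
      return line.split(/\s+/)[1];
    });
  return { description: description, params: params };
}
```

            <p>Run against the `sayHello` comment above, a parser like this yields the greeting description plus a single `name` parameter, which is the shape an MCP tool listing needs.</p>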
    <div>
      <h2>Adding image generation to Claude</h2>
      <a href="#adding-image-generation-to-claude">
        
      </a>
    </div>
    <p>When you build an MCP server using Workers, adding custom functionality to an LLM is easy. Instead of setting up server infrastructure and defining request schemas, all you have to do is write the code. Above, all we did was generate a “hello world”, but now let’s power up Claude to generate an image, using Workers AI:</p>
            <pre><code>import { WorkerEntrypoint } from 'cloudflare:workers'
import { ProxyToSelf } from 'workers-mcp'

export default class ClaudeImagegen extends WorkerEntrypoint&lt;Env&gt; {
  /**
   * Generate an image using the flux-1-schnell model.
   * @param prompt {string} A text description of the image you want to generate.
   * @param steps {number} The number of diffusion steps; higher values can improve quality but take longer.
   */
  async generateImage(prompt: string, steps: number): Promise&lt;Response&gt; {
    const response = await this.env.AI.run('@cf/black-forest-labs/flux-1-schnell', {
      prompt,
      steps,
    });
    // Decode the base64-encoded image into a binary string
    const binaryString = atob(response.image);
    // Convert the binary string into a byte array
    const img = Uint8Array.from(binaryString, (m) =&gt; m.codePointAt(0)!);

    return new Response(img, {
      headers: {
        'Content-Type': 'image/jpeg',
      },
    });
  }
  /**
   * @ignore
   */
  async fetch(request: Request): Promise&lt;Response&gt; {
    return new ProxyToSelf(this).fetch(request)
  }
}</code></pre>
            <p>Once you update the code and redeploy the Worker, Claude will now be able to use the new image generation tool. All you have to say is: <i>“Hey! Can you create an image of a lava lamp wall that lives in San Francisco?”</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Izb6iVs8xPNSOnATJfK9D/9942ddc7b8787cfb1c2f7f3b9959be0b/image2.png" />
          </figure><p>If you’re looking for some inspiration, here are a few examples of what you can build with MCP and Workers: </p><ul><li><p>Let Claude send follow-up emails on your behalf using <a href="https://developers.cloudflare.com/email-routing/"><u>Email Routing</u></a></p></li><li><p>Ask Claude to capture and share website previews via <a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Automation</u></a></p></li><li><p>Store and manage sessions, user data, or other persistent information with <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a></p></li><li><p>Query and update data from your <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a> database </p></li><li><p>…or call any of your existing Workers directly!</p></li></ul>
    <div>
      <h2>Why use Workers for building your MCP server?</h2>
      <a href="#why-use-workers-for-building-your-mcp-server">
        
      </a>
    </div>
    <p>To build out an MCP server without access to Cloudflare’s tooling, you would have to: initialize an instance of the server, define your APIs by creating explicit schemas for every interaction, handle request routing, ensure that the responses are formatted correctly, write handlers for every action, configure how the server will communicate, and more… As shown above, we do all of this for you.</p><p>For reference, an <a href="https://github.com/modelcontextprotocol/typescript-sdk?tab=readme-ov-file#creating-a-server"><u>implementation</u></a> may look something like this:</p>
            <pre><code>import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListResourcesRequestSchema, ReadResourceRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server({ name: "example-server", version: "1.0.0" }, {
  capabilities: { resources: {} }
});

server.setRequestHandler(ListResourcesRequestSchema, async () =&gt; {
  return {
    resources: [{ uri: "file:///example.txt", name: "Example Resource" }]
  };
});

server.setRequestHandler(ReadResourceRequestSchema, async (request) =&gt; {
  if (request.params.uri === "file:///example.txt") {
    return {
      contents: [{
        uri: "file:///example.txt",
        mimeType: "text/plain",
        text: "This is the content of the example resource."
      }]
    };
  }
  throw new Error("Resource not found");
});

const transport = new StdioServerTransport();
await server.connect(transport);</code></pre>
            <p>While this works, it requires quite a bit of code just to get started. Not only do you need to be familiar with the MCP protocol, but you also need to complete a fair amount of setup work (e.g. defining schemas) for every action. Doing it through Workers removes all these barriers, allowing you to spin up an MCP server without the complexity.</p><p>We’re always looking for ways to simplify developer workflows, and we’re excited about this new standard opening up more possibilities for interacting with LLMs and building agents.</p><div>
  
</div><p>If you’re interested in setting this up, check out this <a href="https://www.youtube.com/watch?v=cbeOWKANtj8&amp;feature=youtu.be"><u>tutorial</u></a> which walks you through these examples. We’re excited to see what you build. Be sure to share your MCP server creations with us on <a href="https://discord.com/invite/cloudflaredev"><u>Discord</u></a>, <a href="https://x.com/CloudflareDev"><u>X</u></a>, or <a href="https://bsky.app/profile/cloudflare.social"><u>Bluesky</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[MCP]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">aWV4m3ZRWKcTPXMFuhumH</guid>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
        <item>
            <title><![CDATA[D1: We turned it up to 11]]></title>
            <link>https://blog.cloudflare.com/d1-turning-it-up-to-11/</link>
            <pubDate>Fri, 19 May 2023 13:05:00 GMT</pubDate>
            <description><![CDATA[ We've been heads down iterating on D1, and we've just shipped a major new version that's substantially faster, more reliable, and introduces Time Travel: the ability to restore a D1 database to any point in time. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yviS5FtF4KbuDnCWmxXmU/a99a32be9b00040a1d2cd20dc284e364/image1-51.png" />
            
            </figure><p>We’re not going to bury the lede: we’re excited to launch a major update to our D1 database, with dramatic improvements to performance and scalability. Alpha users (which includes <i>any</i> Workers user) can create new databases using the new storage backend right now with the following command:</p>
            <pre><code>$ wrangler d1 create your-database --experimental-backend</code></pre>
            <p>In the coming weeks, it’ll be the default experience for everyone, but we want to invite developers to start experimenting with the new version of D1 immediately. We’ll also be sharing more about how we built D1’s new storage subsystem, and how it benefits from Cloudflare’s distributed network, very soon.</p>
    <div>
      <h3>Remind me: What’s D1?</h3>
      <a href="#remind-me-whats-d1">
        
      </a>
    </div>
    <p>D1 is Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/d1/">native serverless database</a>, which we <a href="/d1-open-alpha/">launched into alpha</a> in November last year. Developers have been building complex applications with Workers, KV, Durable Objects, and more recently, Queues &amp; R2, but they’ve also been consistently asking us for one thing: a database they can query.</p><p>We also heard consistent feedback that it should be SQL-based, scale-to-zero, and (just like Workers itself), take a Region: Earth approach to replication. And so we took that feedback and set out to build D1, with SQLite giving us a familiar SQL dialect, robust query engine and one of the most battle tested code-bases to build on.</p><p>We shipped the first version of D1 as a “real” alpha: a way for us to develop in the open, gather feedback directly from developers, and better prioritize what matters. And living up to the alpha moniker, there were bugs, performance issues and a fairly narrow “happy path”.</p><p>Despite that, we’ve seen developers spin up thousands of databases, make billions of queries, popular ORMs like <a href="https://github.com/drizzle-team/drizzle-orm">Drizzle</a> and <a href="https://github.com/kysely-org/kysely">Kysely</a> add support for D1 (already!), and <a href="https://github.com/jose-donato/race-stack">Remix</a> and <a href="https://github.com/Atinux/nuxt-todos-edge">Nuxt</a> templates build directly around it, as well.</p>
    <div>
      <h3>Turning it up to 11</h3>
      <a href="#turning-it-up-to-11">
        
      </a>
    </div>
    <p>If you’ve used D1 in its alpha state to date: forget everything you know. D1 is now substantially faster: up to 20x faster on the well-known <a href="https://github.com/cloudflare/d1-northwind">Northwind Traders Demo</a>, which we’ve <a href="https://northwind.d1sql.com/">just migrated</a> to use our new storage backend:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sD4jP1YsKJQN81B4R2Mfv/2747c22b32fc38d1f02b4e47c61ed683/image5-12.png" />
            
            </figure><p>Our new architecture also increases write performance: a simple benchmark inserting 1,000 rows (each row about 200 bytes wide) is approximately 6.8x faster than the previous version of D1.</p><p>Larger batches (10,000 rows at ~200 bytes wide) see an even larger improvement: between 10x and 11x, with the new storage backend’s <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a> also being significantly more consistent. We also haven’t yet started to optimize our overall write throughput, so expect D1 to only get faster here.</p><p>With our new storage backend, we also want to make clear that D1 is not a toy, and we’re constantly benchmarking our performance against other serverless databases. A query against a 500,000 row key-value table (recognizing that benchmarks are inherently synthetic) sees D1 perform about 3.2x faster than a popular serverless Postgres provider:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2RQ3n0MEB7ZOGlrsg7rU0S/d536f7c25b66099897be571baa2be372/download-14.png" />
            
            </figure><p>We ran the Postgres queries several times to prime the page cache and then took the median query time, as measured by the server. We’ll continue to sharpen our performance edge as we go forward.</p><p>Developers with existing databases can import data into a new database backed by the storage engine by following the steps to <a href="https://developers.cloudflare.com/d1/learning/backups/#downloading-a-backup-locally">export their database</a> and then <a href="https://developers.cloudflare.com/d1/learning/importing-data/#import-an-existing-database">import it</a> in our docs.</p>
    <div>
      <h3>What did I miss?</h3>
      <a href="#what-did-i-miss">
        
      </a>
    </div>
    <p>We’ve also been working on a number of improvements to D1’s developer experience:</p><ul><li><p>A new console interface that allows you to issue queries directly from the dashboard, making it easier to get started and/or issue one-shot queries.</p></li><li><p>Formal <a href="https://developers.cloudflare.com/d1/learning/querying-json/">support for JSON functions</a> that query over JSON directly in your database.</p></li><li><p><a href="https://developers.cloudflare.com/d1/learning/data-location/">Location Hints</a>, allowing you to influence where your leader (which is responsible for writes) is located globally.</p></li></ul><p>Although D1 is designed to work natively within Cloudflare Workers, we realize that there’s often a need to quickly issue one-shot queries via CLI or a web editor when prototyping or just exploring a database. On top of the <a href="https://developers.cloudflare.com/workers/wrangler/commands/#execute">support in wrangler for executing queries</a> (and files), we’ve also introduced a console editor that allows you to issue queries, inspect tables, and even edit data on the fly:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rZ9d0Hqrq10a8tfkJ4atE/b541e1c27513569c560d4da2a26b8d16/image3-20.png" />
            
            </figure><p><a href="https://developers.cloudflare.com/d1/learning/querying-json/#extracting-values">JSON functions</a> let you query JSON stored in TEXT columns in D1, allowing you to be flexible about what data is associated strictly with your relational database schema and what isn’t, whilst still being able to query all of it via SQL (before it reaches your app).</p><p>For example, suppose you store the last login timestamps as a JSON array in a login_history TEXT column: you can query (and extract) sub-objects or array items directly by providing a path to their key:</p>
            <pre><code>SELECT user_id, json_extract(login_history, '$[0]') as latest_login FROM users</code></pre>
            <p>D1’s support for JSON functions is extremely flexible, and leverages the SQLite core that D1 builds on.</p><p>When you create a database for the first time with D1, we automatically infer the location based on where you’re currently connecting from. There are some cases, however, where you might want to influence that — maybe you’re traveling, or you have a distributed team that’s distinct from the region you expect the majority of your writes to come from.</p><p>D1’s support for Location Hints makes that easy:</p>
            <pre><code># Automatically inferred based on your location
$ wrangler d1 create user-prod-db --experimental-backend

# Indicate a preferred location to create your database
$ wrangler d1 create eu-users-db --location=weur --experimental-backend</code></pre>
            <p><a href="https://developers.cloudflare.com/r2/buckets/data-location/">Location Hints</a> are also now available in the Cloudflare dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12JpcimRiOP51cuhDFpmg4/a71d83d15d3277152775cb5a96f963dc/image4-20.png" />
            
            </figure><p>We’ve also published <a href="https://developers.cloudflare.com/d1/">more documentation</a> to help developers not only get started, but make use of D1’s advanced features. Expect D1’s documentation to continue to grow substantially over the coming months.</p>
    <div>
      <h3>Not going to burn a hole in your wallet</h3>
      <a href="#not-going-to-burn-a-hole-in-your-wallet">
        
      </a>
    </div>
    <p>We’ve had many, many developers ask us about how we’ll be pricing D1 since we announced the alpha, and we’re ready to share what it’s going to look like. We know it’s important to understand what something might cost <i>before</i> you start building on it, so you’re not surprised six months later.</p><p>In a nutshell:</p><ul><li><p>We’re announcing pricing so that you can start to model how much D1 will cost for your use-case ahead of time. Final pricing may be subject to change, although we expect changes to be relatively minor.</p></li><li><p>We won’t be enabling billing until later this year, and we’ll notify existing D1 users via email ahead of that change. Until then, D1 will remain free to use.</p></li><li><p>D1 will include an always-free tier, included usage as part of our $5/mo Workers subscription, and charge based on reads, writes and storage.</p></li></ul><p>If you’re already subscribed to Workers, then you don’t have to lift a finger: your existing subscription will have D1 usage included when we enable billing in the future.</p><p>Here’s a summary (we’re keeping it intentionally simple):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3R6k7Iu1ARM90hAytxmGkK/7a909968043acb71cec31f73d7485484/Screenshot-2023-05-19-at-10.14.58.png" />
            
            </figure><p>Importantly, <b>when we enable global read replication, you won’t have to pay extra for it, nor will replication multiply your storage consumption</b>. We think built-in, automatic replication is important, and we don’t think developers should have to pay multiplicative costs (replicas x storage fees) in order to make their database fast <i>everywhere</i>.</p><p>Beyond that, we wanted to ensure D1 took the best parts of serverless pricing — scale-to-zero and pay-for-what-you-use — so that you’re not trying to figure out how many CPUs and/or how much memory you need for your workload or writing scripts to scale down your infrastructure during quieter hours.</p><p>D1’s read pricing is based on the familiar concept of a read unit (per 4KB read), and a write unit (per 1KB written). A query that reads (scans) ~10,000 rows of 64 bytes each would consume 160 read units. Write a big 3KB row in a “blog_posts” table that has a lot of <a href="https://blog.cloudflare.com/markdown-for-agents/">Markdown</a>, and that’s three write units. And if you <a href="https://developers.cloudflare.com/d1/learning/using-indexes/">create indexes for your most popular queries</a> to improve performance and reduce how much data those queries need to scan, you’ll also reduce how much we bill you. We think making the fast path more cost-efficient by default is the right approach.</p><p>Importantly: we’ll continue to take feedback on our pricing before we flip the switch.</p>
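<p>To make the unit math above concrete, here is a small sketch in plain JavaScript. This is not an official billing formula; the 4KB/1KB unit sizes are taken from the description above, and actual metering may differ:</p>

```javascript
// Sketch: estimating D1 read/write units from bytes scanned/written.
// Unit sizes follow the post: one read unit per 4KB read,
// one write unit per 1KB written. Not an official billing formula.
const READ_UNIT_BYTES = 4_000; // 4KB
const WRITE_UNIT_BYTES = 1_000; // 1KB

function readUnits(bytesScanned) {
  return Math.ceil(bytesScanned / READ_UNIT_BYTES);
}

function writeUnits(bytesWritten) {
  return Math.ceil(bytesWritten / WRITE_UNIT_BYTES);
}

// Scanning ~10,000 rows of 64 bytes each:
console.log(readUnits(10_000 * 64)); // 160 read units

// Writing one 3KB row:
console.log(writeUnits(3_000)); // 3 write units
```

<p>An index that lets the same query scan only 1,000 of those rows would cut the example to 16 read units, which is why indexed queries cost less.</p>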
    <div>
      <h3>Time Travel</h3>
      <a href="#time-travel">
        
      </a>
    </div>
    <p>We’re also introducing new backup functionality: point-in-time-recovery, and we’re calling this Time Travel, because it feels just like it. <b>Time Travel allows you to restore your D1 database to any minute within the last 30 days, and will be built into D1 databases</b> <b>using our new storage system</b>. We expect to turn on Time Travel for new D1 databases in the very near future.</p><p>What makes Time Travel really powerful is that you <i>no longer need to panic</i> and wonder “oh wait, did I take a backup before I made this major change?!” — because we do it for you. We retain a stream of all changes to your database (the <a href="https://www.sqlite.org/wal.html">Write-Ahead Log</a>), allowing us to restore your database to a <i>point in time</i> by replaying those changes in sequence up until that point.</p><p>Here’s an example (subject to some minor API changes):</p>
            <pre><code># Using a precise Unix timestamp (in UTC):
$ wrangler d1 time-travel my-database --before-timestamp=1683570504

# Alternatively, restore prior to a specific transaction ID:
$ wrangler d1 time-travel my-database --before-tx-id=01H0FM2XHKACETEFQK2P5T6BWD</code></pre>
            <p>And although the idea of point-in-time recovery is not new, it’s often a paid add-on, if it’s available at all. Realizing only after you’ve deleted data or otherwise made a mistake that you should have had it turned on means it’s often too late.</p><p>For example, imagine if I made the classic mistake of forgetting a WHERE on an UPDATE statement:</p>
            <pre><code>-- Don't do this at home
UPDATE users SET email = 'matt@example.com' -- missing: WHERE id = "abc123"</code></pre>
            <p>Without Time Travel, I’d have to hope that either a scheduled backup ran recently, or that I remembered to make a manual backup just prior. With Time Travel, I can restore to a point a minute or so before that mistake (and hopefully learn a lesson for next time).</p><p>We’re also exploring features that can surface larger changes to your database state, including making it easier to identify schema changes, the number of tables, large deltas in data stored <i>and even specific queries</i> (via transaction IDs) — to help you better understand exactly what point in time to restore your database to.</p>
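<p>Prevention helps too: inside a Worker, using D1’s prepared statements with bound parameters makes a missing WHERE clause much easier to spot in review. A minimal sketch (the table, column names and helper function are illustrative, matching the example above):</p>

```javascript
// Sketch: always pair SET with a bound WHERE clause.
// `db` is a D1 binding (e.g. env.DB); names here are illustrative.
async function updateEmail(db, userId, newEmail) {
  return db
    .prepare("UPDATE users SET email = ? WHERE id = ?")
    .bind(newEmail, userId)
    .run();
}
```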
    <div>
      <h3>On the roadmap</h3>
      <a href="#on-the-roadmap">
        
      </a>
    </div>
    <p>So what’s next for D1?</p><ul><li><p><b>Open beta</b>: we’re ensuring we’ve observed our new storage subsystem under load (and real-world usage) prior to making it the default for all `d1 create` commands. We hold a high bar for durability and availability, even for a “beta”, and we also recognize that access to backups (Time Travel) is important for folks to trust a new database. Keep an eye on the Cloudflare blog in the coming weeks for more news here!</p></li><li><p><b>Bigger databases</b>: we know this is a big ask from many, and we’re extremely close. Developers on the <a href="https://developers.cloudflare.com/workers/platform/pricing/#workers">Workers Paid plan</a> will get access to 1GB databases in the very near future, and we’ll be continuing to ramp up the maximum per-database size over time.</p></li><li><p><b>Metrics &amp; observability</b>: you’ll be able to inspect overall query volume by database, failing queries, storage consumed and read/write units via both the D1 dashboard and <a href="https://developers.cloudflare.com/analytics/graphql-api/">our GraphQL API</a>, so that it’s easier to debug issues and track spend.</p></li><li><p><b>Automatic read replication</b>: our new storage subsystem is built with replication in mind, and we’re working on ensuring our replication layer is both fast &amp; reliable before we roll it out to developers. 
Read replication is not only designed to improve query latency by storing copies — replicas — of your data in multiple locations, close to your users, but will also allow us to scale out D1 databases horizontally for those with larger workloads.</p></li></ul><p>In the meantime, you can <a href="https://developers.cloudflare.com/d1/get-started/">start prototyping and experimenting with D1</a> right now, explore our D1 + Drizzle + Remix <a href="https://github.com/rozenmd/d1-drizzle-remix-example">example project</a>, or join the <a href="https://discord.cloudflare.com/">#d1 channel</a> on the Cloudflare Developers Discord server to engage directly with the D1 team and others building on D1.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">7Cf3wNyDzlMT9quIsuxTYs</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
        <item>
            <title><![CDATA[UPDATE Supercloud SET status = 'open alpha' WHERE product = 'D1';]]></title>
            <link>https://blog.cloudflare.com/d1-open-alpha/</link>
            <pubDate>Wed, 16 Nov 2022 14:01:00 GMT</pubDate>
            <description><![CDATA[ As we continue down the road to making D1 production ready, it wouldn’t be “the Cloudflare way” unless we stopped for feedback first. D1 is now in Open Alpha! ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7w9UvQOVgrNbxPrz1tOWJz/611cdc1253d0c6971709f5dddacc0811/image1-48.png" />
            
            </figure><p>In May 2022, we <a href="/introducing-d1/">announced</a> our quest to simplify databases – building them, maintaining them, integrating them. Our goal is to empower you with the tools to run a database that is <a href="https://www.cloudflare.com/developer-platform/products/d1/">powerful and scalable, with world-beating performance</a>, without any hassle. And we first set our sights on reimagining the database development experience for every type of user – not just database experts.</p><p>Over the past couple of months, we’ve <a href="/whats-new-with-d1/">been working</a> to create just that, while learning some very important lessons along the way. As it turns out, building a global relational database product on top of Workers pushes the boundaries of the developer platform to their absolute limit, and often beyond them, but in a way that’s absolutely thrilling to us at Cloudflare. It means that while our progress might seem slow from the outside, every improvement, bug fix or stress test helps lay down a path for <i>all</i> of our customers to build the world’s most <a href="/welcome-to-the-supercloud-and-developer-week-2022/">ambitious serverless applications</a>.</p><p>However, as we continue down the road to making D1 production ready, it wouldn’t be “the Cloudflare way” unless we stopped for feedback first – even though it’s not <i>quite</i> finished yet. In the spirit of Developer Week, <b>there is no better time to introduce the D1 open alpha</b>!</p><p>An “open alpha” is a new concept for us. You'll likely hear the term “open beta” in various announcements at Cloudflare, and while it makes sense for many products here, it wasn’t quite right for D1. 
There are still some crucial pieces in active development and testing, so before we release the fully-formed D1 as a public beta for you to start building real-world apps with, we want to make sure everybody can start to get a feel for the product on their hobby apps or side-projects.</p>
    <div>
      <h2>What’s included in the alpha?</h2>
      <a href="#whats-included-in-the-alpha">
        
      </a>
    </div>
    <p>While a lot is still changing behind the scenes with D1, we’ve put a lot of thought into how you, as a developer, interact with it – even if you’re new to databases.</p>
    <div>
      <h3>Using the D1 dashboard</h3>
      <a href="#using-the-d1-dashboard">
        
      </a>
    </div>
    <p>In a few clicks you can get your D1 database up and running right from within your dashboard. In our D1 interface, you can create, maintain and view your database as you please. Changes made in the UI are instantly available to your Worker - no redeploy required!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vOzmnP9cvUYbJanSvprvl/b4a01d4edcc3dcada5a326e352b5f0e2/image2-30.png" />
            
            </figure>
    <div>
      <h3>Use Wrangler</h3>
      <a href="#use-wrangler">
        
      </a>
    </div>
    <p>If you’re looking to get your hands a little dirty, you can also work with your database using our Wrangler CLI. Create your database and begin adding your data manually, or bootstrap your database in one of two ways:</p><p><b>1. Execute an SQL file</b></p>
            <pre><code>$ wrangler d1 execute my-database-name --file ./customers.sql</code></pre>
            <p>where your <code>.sql</code> file looks something like this:</p><p>customers.sql</p>
            <pre><code>DROP TABLE IF EXISTS Customers;
CREATE TABLE Customers (CustomerID INT, CompanyName TEXT, ContactName TEXT, PRIMARY KEY (`CustomerID`));
INSERT INTO Customers (CustomerID, CompanyName, ContactName) 
VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'),(4, 'Around the Horn', 'Thomas Hardy'),(11, 'Bs Beverages', 'Victoria Ashworth'),(13, 'Bs Beverages', 'Random Name');</code></pre>
            <p><b>2. Create and run migrations</b></p><p>Migrations are a way to version your database changes. With D1, you can <a href="https://developers.cloudflare.com/d1/migrations/">create a migration</a> and then apply it to your database.</p><p>To create the migration, execute:</p>
            <pre><code>wrangler d1 migrations create &lt;my-database-name&gt; &lt;short description of migration&gt;</code></pre>
            <p>This will create an SQL file in a <code>migrations</code> folder where you can then go ahead and add your queries. Then apply the migrations to your database by executing:</p>
            <pre><code>wrangler d1 migrations apply &lt;my-database-name&gt;</code></pre>
            
    <div>
      <h3>Access D1 from within your Worker</h3>
      <a href="#access-d1-from-within-your-worker">
        
      </a>
    </div>
    <p>You can attach your D1 to a Worker by adding the D1 binding to your <code>wrangler.toml</code> configuration file. Then interact with D1 by executing queries inside your Worker like so:</p>
            <pre><code>export default {
 async fetch(request, env) {
   const { pathname } = new URL(request.url);

   if (pathname === "/api/beverages") {
     const { results } = await env.DB.prepare(
       "SELECT * FROM Customers WHERE CompanyName = ?"
     )
       .bind("Bs Beverages")
       .all();
     return Response.json(results);
   }

   return new Response("Call /api/beverages to see Bs Beverages customers");
 },
};</code></pre>
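<p>When a handler needs several writes at once, D1 also exposes a <code>batch()</code> method on the binding, which sends multiple prepared statements to the database in one round-trip. A sketch reusing the Customers table from above (the IDs and values are illustrative):</p>

```javascript
// Sketch: inserting two rows in one round-trip with D1's batch API.
// `db` is a D1 binding (e.g. env.DB); values are illustrative.
async function addCustomers(db) {
  const insert = db.prepare(
    "INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (?, ?, ?)"
  );
  // batch() returns one result object per statement
  return db.batch([
    insert.bind(20, "Bs Beverages", "Victoria Ashworth"),
    insert.bind(21, "Around the Horn", "Thomas Hardy"),
  ]);
}
```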
            
    <div>
      <h3>Or access D1 from within your Pages Function</h3>
      <a href="#or-access-d1-from-within-your-pages-function">
        
      </a>
    </div>
    <p>In this Alpha launch, D1 also supports integration with <a href="https://pages.cloudflare.com/">Cloudflare Pages</a>! You can add a D1 binding inside the Pages dashboard, and write your queries inside a Pages Function to build a full-stack application! Check out the <a href="https://developers.cloudflare.com/pages/platform/functions/bindings/#d1-database">full documentation</a> to get started with Pages and D1.</p>
    <div>
      <h2>Community built tooling</h2>
      <a href="#community-built-tooling">
        
      </a>
    </div>
    <p>During our private alpha period, the excitement behind D1 led to some valuable contributions to the D1 ecosystem and developer experience by members of the community. Here are some of our favorite projects to date:</p>
    <div>
      <h3>d1-orm</h3>
      <a href="#d1-orm">
        
      </a>
    </div>
    <p>An Object Relational Mapping (ORM) is a way for you to query and manipulate data by using JavaScript. Created by a Cloudflare Discord Community Champion, the <code>d1-orm</code> seeks to provide a strictly typed experience while using D1:</p>
            <pre><code>const users = new Model(
    // table name, primary keys, indexes etc
    tableDefinition,
    // column types, default values, nullable etc
    columnDefinitions
)

// TS helper for typed queries
type User = Infer&lt;typeof users&gt;;

// ORM-style query builder
const user = await users.First({
    where: {
        id: 1,
    },
});</code></pre>
            <p>You can check out the <a href="https://docs.interactions.rest/d1-orm/">full documentation</a>, and provide feedback by making an issue on the <a href="https://github.com/Interactions-as-a-Service/d1-orm/issues">GitHub repository</a>.</p>
    <div>
      <h3>workers-qb</h3>
      <a href="#workers-qb">
        
      </a>
    </div>
    <p>This is a zero-dependency query builder that provides a simple standardized interface while keeping the benefits and speed of using raw queries over a traditional ORM. While not intended to provide ORM-like functionality, <code>workers-qb</code> makes it easier to interact with the database from code for direct SQL access:</p>
            <pre><code>const qb = new D1QB(env.DB)

const fetched = await qb.fetchOne({
  tableName: 'employees',
  fields: 'count(*) as count',
  where: {
    conditions: 'department = ?1',
    params: ['HQ'],
  },
})</code></pre>
            <p>You can read more about the query builder <a href="https://workers-qb.massadas.com/">here</a>.</p>
    <div>
      <h3>d1-console</h3>
      <a href="#d1-console">
        
      </a>
    </div>
    <p>Instead of running the <code>wrangler d1 execute</code> command in your terminal every time you want to interact with your database, you can interact with D1 from within the <code>d1-console</code>. Created by a Discord Community Champion, this gives the benefit of executing multi-line queries, obtaining command history, and viewing a cleanly formatted table output.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4QR9Tf5DXnp3brBVvlvgJq/7f5b5083198492190dfc9f24e4fb70e0/image3-23.png" />
            
            </figure><p>While this is a community project today, we plan to natively support a “D1 Console” in the future. For now, get started by checking out the <code>d1-console</code> package <a href="https://github.com/isaac-mcfadyen/d1-console">here</a>.</p>
    <div>
      <h3>D1 adapter for <a href="https://github.com/koskimas/kysely">Kysely</a></h3>
      <a href="#d1-adapter-for">
        
      </a>
    </div>
    <p>Kysely is a type-safe and autocompletion-friendly TypeScript SQL query builder. With this adapter, you can interact with D1 using the familiar Kysely interface:</p>
            <pre><code>// Create Kysely instance with kysely-d1
const db = new Kysely&lt;Database&gt;({ 
  dialect: new D1Dialect({ database: env.DB })
});
    
// Read row from D1 table
const result = await db
  .selectFrom('kv')
  .selectAll()
  .where('key', '=', key)
  .executeTakeFirst();</code></pre>
            <p>Check out the project <a href="https://github.com/aidenwallis/kysely-d1">here</a>.</p>
    <div>
      <h2>What’s still in testing?</h2>
      <a href="#whats-still-in-testing">
        
      </a>
    </div>
    <p>The biggest pieces that have been disabled for this alpha release are replication and JavaScript transaction support. While we’ll be rolling out these changes gradually, we want to call out some limitations that exist today that we’re actively working on testing:</p><ul><li><p><b>Database location:</b> Each D1 database only runs a single instance. It’s created close to where you, as the developer, create the database, and does not currently move regions based on access patterns. Workers running elsewhere in the world will see higher latency as a result.</p></li><li><p><b>Concurrency limitations:</b> Under high load, read and write queries may be queued rather than triggering new replicas to be created. As a result, the performance &amp; throughput characteristics of the open alpha won’t be representative of the final product.</p></li><li><p><b>Availability limitations:</b> Backups will block access to the DB while they’re running. In most cases this should only be a second or two, and any requests that arrive during the backup will be queued.</p></li></ul><p>You can also check out a more detailed, up-to-date list on <a href="https://developers.cloudflare.com/d1/platform/limits/">D1 alpha Limitations</a>.</p>
    <div>
      <h2>Request for feedback</h2>
      <a href="#request-for-feedback">
        
      </a>
    </div>
    <p>While we can make all sorts of guesses and bets on the kind of databases you want to use D1 for, we are not the users – you are! We want developers from all backgrounds to preview the D1 tech at its early stages, and let us know where we need to improve to make it suitable for your production apps.</p><p>For general feedback about your experience and to interact with other folks in the alpha, join our <a href="https://discord.com/channels/595317990191398933/992060581832032316">#d1-open-alpha</a> channel in the <a href="https://discord.gg/cloudflaredev">Cloudflare Developers Discord</a>. We plan to make any important announcements and changes in this channel as well as on our <a href="https://discord.com/channels/595317990191398933/832698219824807956">monthly community calls</a>.</p><p>To file more specific feature requests (no matter how wacky) and report any bugs, create a thread in the <a href="https://community.cloudflare.com/c/developers/d1">Cloudflare Community forum</a> under the D1 category. We will be maintaining this forum as a way to plan for the months ahead!</p>
    <div>
      <h2>Get started</h2>
      <a href="#get-started">
        
      </a>
    </div>
    <p>Want to get started right away? Check out our <a href="https://developers.cloudflare.com/d1/">D1 documentation</a> to get started today. <a href="https://github.com/cloudflare/d1-northwind">Build</a> our classic <a href="https://northwind.d1sql.com/">Northwind Traders demo</a> to explore the D1 experience and deploy your first D1 database!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Supercloud]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1rFO7pAwS1HGnsa6rhrIXa</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
            <dc:creator>Sven Sauleau</dc:creator>
        </item>
        <item>
            <title><![CDATA[D1: our quest to simplify databases]]></title>
            <link>https://blog.cloudflare.com/whats-new-with-d1/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Get an inside look on the D1 experience today, what the team is currently working on and what’s coming up!  ]]></description>
            <content:encoded><![CDATA[ <p><i>This blog post references a feature which has updated documentation. For the latest reference content, visit </i><a href="https://developers.cloudflare.com/d1/best-practices/read-replication/"><i>D1 read replication documentation</i></a><i>.</i></p><p>When we announced D1 in May of this year, we knew it would be the start of something new – our first SQL database with Cloudflare Workers. Prior to D1, we announced storage options like KV (key-value store), Durable Objects (single location, strongly consistent data storage) and <a href="https://www.cloudflare.com/learning/cloud/what-is-blob-storage/">R2 (blob storage)</a>. But the question always remained: “How can I store and query relational data with low latency and an easy API?”</p><p>The long-awaited “Cloudflare Database” was the true missing piece to build your application <b>entirely</b> on Cloudflare’s global network, going from a blank canvas in VSCode to a full stack application in seconds. Compatible with the popular SQLite API, D1 empowers developers to build out their databases without getting bogged down by complexity and having to manage every underlying layer.</p><p>Since our launch announcement in May and private beta in June, we’ve made great strides in building out our vision of a <a href="https://www.cloudflare.com/developer-platform/products/d1/">serverless database</a>. With D1 still in <a href="https://www.cloudflare.com/lp/d1/">private beta</a> but an open beta on the horizon, we’re excited to show and tell our journey of building D1 and what’s to come.</p>
    <div>
      <h2>The D1 Experience</h2>
      <a href="#the-d1-experience">
        
      </a>
    </div>
    <p>We knew from Cloudflare Workers feedback that using Wrangler as the mechanism to create and deploy applications is loved and preferred by many. That’s why when <a href="/10-things-i-love-about-wrangler/">Wrangler 2.0</a> was announced this past May alongside D1, we took advantage of the new and improved CLI for every part of the experience from data creation to every update and iteration. Let’s take a quick look on how to get set up in a few easy steps.</p>
    <div>
      <h3>Create your database</h3>
      <a href="#create-your-database">
        
      </a>
    </div>
    <p>With the latest version of <a href="https://github.com/cloudflare/wrangler2">Wrangler</a> installed, you can create an initialized, empty database with a quick</p><p><code>npx wrangler d1 create my_database_name</code></p><p>to get your database up and running. Now it’s time to add your data.</p>
    <div>
      <h3>Bootstrap it</h3>
      <a href="#bootstrap-it">
        
      </a>
    </div>
    <p>It wouldn’t be the “Cloudflare way” if you had to sit through an agonizingly long process to get set up. So we made it easy and painless to bring your existing data from an old database and bootstrap your new D1 database. You can run</p><p><code>wrangler d1 execute my_database_name --file ./filename.sql</code></p><p>and pass in an existing SQLite <code>.sql</code> file of your choice. Your database is now ready for action.</p>
    <div>
      <h3>Develop &amp; Test Locally</h3>
      <a href="#develop-test-locally">
        
      </a>
    </div>
    <p>With all the improvements we’ve made to Wrangler since version 2 launched <a href="/wrangler-v2-beta/">a few months ago</a>, we’re pleased to report that D1 has full remote &amp; local wrangler dev support:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7JRGM62yWrL3h7BKLhj5Jf/6d324ce4a2b19691ef4ec39095d2e43b/image2-43.png" />
            
            </figure><p>When running <code>wrangler dev --local --persist</code>, an SQLite file will be created inside <code>.wrangler/state</code>. You can then use a local GUI program for managing it, like SQLiteFlow (<a href="https://www.sqliteflow.com/">https://www.sqliteflow.com/</a>) or Beekeeper (<a href="https://www.beekeeperstudio.io/">https://www.beekeeperstudio.io/</a>).</p><p>Or you can simply use SQLite directly with the SQLite command line by running <code>sqlite3 .wrangler/state/d1/DB.sqlite3</code>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wSVzsxnFxKpJDbF5pO4hs/89aab6231071b6cd8cc657a9fd2bd24b/image6-8.png" />
            
            </figure>
    <div>
      <h3>Automatic backups &amp; one-click restore</h3>
      <a href="#automatic-backups-one-click-restore">
        
      </a>
    </div>
    <p>No matter how much you test your changes, things don’t always go according to plan. But with Wrangler you can create a backup of your data, view your list of backups, or restore your database from an existing backup. In fact, during the beta, we’re taking backups of your data every hour automatically and storing them in R2, so you will have the option to roll back if needed.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BgC81NRBtxLJAl4Gf09oz/03084ab36894c484675f0ec7e58e9462/image1-53.png" />
            
            </figure><p>And the best part – if you want to use a production snapshot for local development or to reproduce a bug, simply copy it into the <code>.wrangler/state</code> directory and <code>wrangler dev --local --persist</code> will pick it up!</p><p>Let’s download a D1 backup to our local disk. It’s SQLite compatible.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4lGMre56aSosKozuHmITRD/51b282602897ed9af9d0813461f81732/image4-14.png" />
            
            </figure><p>Now let’s run our D1 worker locally, from the backup.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4t10c5k7VKcT2tF4CjW9Dw/eb5ba21817f6a38b1d1f450d6e2e2c3a/image5-16.png" />
            
            </figure>
    <div>
      <h3>Create and Manage from the dashboard</h3>
      <a href="#create-and-manage-from-the-dashboard">
        
      </a>
    </div>
    <p>However, we realize that CLIs are not everyone’s jam. In fact, we believe databases should be accessible to every kind of developer – even those without much database experience! D1 is available right from the Cloudflare dashboard giving you near total command parity with Wrangler in just a few clicks. Bootstrapping your database, creating tables, updating your database, viewing tables and triggering backups are all accessible right at your fingertips.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mXkO7uRDs4lVvgjm8VC4r/c32f96a738980294dfea5db7f2ea8794/image3-32.png" />
            
            </figure><p>Changes made in the UI are instantly available to your Worker — no deploy required!</p><p>We’ve told you about some of the improvements we’ve landed since we first announced D1, but as always, we also wanted to give you a small taste (with some technical details) of what’s ahead. One really important functionality of a database is transactions — something D1 wouldn’t be complete without.</p>
    <div>
      <h2>Sneak peek: how we're bringing JavaScript transactions to D1</h2>
      <a href="#sneak-peek-how-were-bringing-javascript-transactions-to-d1">
        
      </a>
    </div>
    <p>With D1, we strive to present a dramatically simplified interface to creating and querying relational data, which for the most part is a good thing. But simplification occasionally introduces drawbacks, where a use-case is no longer easily supported without introducing some new concepts. D1 transactions are one example.</p>
    <div>
      <h3>Transactions are a unique challenge</h3>
      <a href="#transactions-are-a-unique-challenge">
        
      </a>
    </div>
    <p>You don't need to specify where a Cloudflare Worker or a D1 database runs—they simply run everywhere they need to. For Workers, that is as close as possible to the users that are hitting your site right this second. For D1 today, we don't try to run a copy in every location worldwide, but dynamically manage the number and location of read-only replicas based on how many queries your database is getting, and from where. However, queries that make changes to a database (which we generally call "writes" for short) all have to travel back to the single primary D1 instance to do their work, to ensure consistency.</p><p>But what if you need to do a series of updates at once? While you can send multiple SQL queries with <code>.batch()</code> (which does in fact use database transactions under the hood), it's likely that, at some point, you'll want to interleave database queries &amp; JS code in a single unit of work.</p><p>This is exactly what database transactions were invented for, but if you try running <code>BEGIN TRANSACTION</code> in D1 you'll get an error. Let's talk about why that is.</p><p><b>Why native transactions don't work</b></p><p>The problem arises from SQL statements and JavaScript code running in dramatically different places—your SQL executes inside your D1 database (primary for writes, nearest replica for reads), but your Worker is running near the user, which might be on the other side of the world. And because D1 is built on SQLite, only one write transaction can be open at once. That means that, if we permitted <code>BEGIN TRANSACTION</code>, any one Worker request, anywhere in the world, could effectively block your whole database! This is quite a dangerous thing to allow:</p><ul><li><p>A Worker could start a transaction then crash due to a software bug, without calling <code>ROLLBACK</code>. 
The primary would be blocked, waiting for more commands from a Worker that would never come (until, probably, some timeout).</p></li><li><p>Even without bugs or crashes, transactions that require multiple round-trips between JavaScript and SQL could end up blocking your whole system for multiple seconds, dramatically limiting how high an application built with Workers &amp; D1 could scale.</p></li></ul><p>But allowing a developer to define transactions that mix both SQL and JavaScript makes building applications with Workers &amp; D1 so much more flexible and powerful. We need a new solution (or, in our case, a new version of an old solution).</p><p><b>A way forward: stored procedures</b></p><p>Stored procedures are snippets of code that are uploaded to the database, to be executed directly next to the data. Which, at first blush, sounds exactly like what we want.</p><p>However, in practice, stored procedures in traditional databases are notoriously frustrating to work with, as anyone who's developed a system making heavy use of them will tell you:</p><ul><li><p>They're often written in a different language to the rest of your application, usually (a specific dialect of) SQL or an embedded language like Tcl/Perl/Python. And while it's technically possible to write them in JavaScript (using an embedded V8 engine), they run in such a different environment to your application code that it still requires significant context-switching to maintain them.</p></li><li><p>Having both application code and in-database code affects every part of the development lifecycle, from authoring and testing to deployment, rollback and debugging. But because stored procedures are usually introduced to solve a specific problem, not as a general purpose application layer, they're often managed completely manually. 
You can end up with them being written once, added to the database, then never changed for fear of breaking something.</p></li></ul><p>With D1, we can do better.</p><p>The <i>point</i> of a stored procedure was to execute directly next to the data—uploading the code and executing it inside the database was simply a means to that end. But we already have Workers, a global JavaScript execution platform, so can we use it to solve this problem?</p><p>It turns out, absolutely! But there are a few options for exactly how to make it work, and we're working with our private beta users to find the right <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a>. In this section, I'd like to share our current leading proposal with you, and invite you all to give us your feedback.</p><p>When you connect a Worker project to a D1 database, you add a section like the following to your <code>wrangler.toml</code>:</p>
            <pre><code>[[ d1_databases ]]
# What binding name to use (e.g. env.DB):
binding = "DB"
# The name of the DB (used for wrangler d1 commands):
database_name = "my-d1-database"
# The D1's ID for deployment:
database_id = "48a4224e-...3b09"
# Which D1 to use for `wrangler dev`:
# (can be the same as the previous line)
preview_database_id = "48a4224e-...3b09"

# NEW: adding "procedures", pointing to a new JS file:
procedures = "./src/db/procedures.js"</code></pre>
            <p>That D1 Procedures file would contain the following (note the new <code>db.transaction()</code> API, which is only available within a file like this):</p>
            <pre><code>export default class Procedures {
  constructor(db, env, ctx) {
    this.db = db
  }

  // any methods you define here are available on env.DB.Procedures
  // inside your Worker
  async Checkout(cartId: number) {
    // Inside a Procedure, we have a new db.transaction() API
    const result = await this.db.transaction(async (txn) =&gt; {
      
      // Transaction has begun: we know the user can't add anything to
      // their cart while these actions are in progress.
      const [cart, user] = Helpers.loadCartAndUser(cartId)

      // We can update the DB first, knowing that if any of the later steps
      // fail, all these changes will be undone.
      await this.db
        .prepare(`UPDATE cart SET status = ?1 WHERE cart_id = ?2`)
        .bind('purchased', cartId)
        .run()
      const newBalance = user.balance - cart.total_cost
      await this.db
        .prepare(`UPDATE user SET balance = ?1 WHERE user_id = ?2`)
        // Note: the DB may have a CHECK to guarantee 'user.balance' can not
        // be negative. In that case, this statement may fail, an exception
        // will be thrown, and the transaction will be rolled back.
        .bind(newBalance, cart.user_id)
        .run()

      // Once all the DB changes have been applied, attempt the payment:
      const { ok, details } = await PaymentAPI.processPayment(
        user.payment_method_id,
        cart.total_cost
      )
      if (!ok) {
        // If we throw an Exception, the transaction will be rolled back
        // and result.error will be populated:
        // throw new PaymentFailedError(details)
        
        // Alternatively, we can do both of those steps explicitly
        await txn.rollback()
        // The transaction is rolled back, our DB is now as it was when we
        // started. We can either move on and try something new, or just exit.
        return { error: new PaymentFailedError(details) }
      }

      // This is implicitly called when the .transaction() block finishes,
      // but you can explicitly call it too (potentially committing multiple
      // times in a single db.transaction() block).
      await txn.commit()

      // Anything we return here will be returned by the 
      // db.transaction() block
      return {
        amount_charged: cart.total_cost,
        remaining_balance: newBalance,
      }
    })

    if (result.error) {
      // Our db.transaction block returned an error or threw an exception.
    }

    // We're still in the Procedure, but the Transaction is complete and
    // the DB is available for other writes. We can either do more work
    // here (start another transaction?) or return a response to our Worker.
    return result
  }
}</code></pre>
            <p>And in your Worker, your DB binding now has a “Procedures” property with your function names available:</p>
            <pre><code>const { error, amount_charged, remaining_balance } =
  await env.DB.Procedures.Checkout(params.cartId)

if (error) {
  // Something went wrong, `error` has details
} else {
  // Display `amount_charged` and `remaining_balance` to the user.
}</code></pre>
            <p>Multiple Procedures can be triggered at one time, but only one <code>db.transaction()</code> function can be active at once: any other write queries or other transaction blocks will be queued, but all read queries will continue to hit local replicas and run as normal. This API gives you the ability to ensure consistency when it’s essential, with minimal impact on overall performance worldwide.</p>
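            <p>For contrast with the proposal above, here is a minimal sketch of the existing <code>.batch()</code> approach mentioned earlier, which covers the purely-SQL case with no interleaved JavaScript. The binding name <code>DB</code> and the <code>transferFunds</code> helper are illustrative, not part of this post:</p>

```javascript
// Illustrative sketch: two prepared statements sent in one round-trip.
// D1 runs a batch inside a single implicit transaction, so either both
// updates apply or neither does.
async function transferFunds(env, fromId, toId, amount) {
  return env.DB.batch([
    env.DB.prepare(`UPDATE user SET balance = balance - ?1 WHERE user_id = ?2`)
      .bind(amount, fromId),
    env.DB.prepare(`UPDATE user SET balance = balance + ?1 WHERE user_id = ?2`)
      .bind(amount, toId),
  ])
}
```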
    <div>
      <h3>Request for feedback</h3>
      <a href="#request-for-feedback">
        
      </a>
    </div>
    <p>As with all our products, feedback from our users drives the roadmap and development. While the D1 API is in beta testing today, we're still seeking feedback on the specifics. However, we’re pleased that it solves both the problems with transactions that are specific to D1 and the problems with stored procedures described earlier:</p><ul><li><p>Code is executing as close as possible to the database, removing network latency while a transaction is open.</p></li><li><p>Any exceptions or cancellations of a transaction cause an instant rollback—there is no way to accidentally leave one open and block the whole D1 instance.</p></li><li><p>The code is in the same language as the rest of your Worker code, in the exact same dialect (e.g. same TypeScript config as it's part of the same build).</p></li><li><p>It's deployed seamlessly as part of your Worker. If two Workers bind to the same D1 instance but define different procedures, they'll only see their own code. If you want to share code between projects or databases, extract a library as you would with any other shared code.</p></li><li><p>In local development and test, the procedure works just like it does in production, but without the network call, allowing seamless testing and debugging as if it was a local function.</p></li><li><p>Because procedures and the Worker that define them are treated as a single unit, rolling back to an earlier version never causes a skew between the code in the database and the code in the Worker.</p></li></ul>
    <div>
      <h2>The D1 ecosystem: contributions from the community</h2>
      <a href="#the-d1-ecosystem-contributions-from-the-community">
        
      </a>
    </div>
    <p>We've told you about what we've been up to and what's ahead, but one of the unique things about this project is all the contributions from our users. One of our favorite parts of private betas is not only getting feedback and feature requests, but also seeing what ideas and projects come to fruition. While sometimes this means personal projects, with D1, we’re seeing some incredible contributions to the D1 ecosystem. Needless to say, the work on D1 hasn’t just been coming from within the D1 team, but also from the wider community and other developers at Cloudflare. Users have been showing off their D1 additions within our Discord private beta channel and giving others the opportunity to use them as well. We wanted to take a moment to highlight them.</p>
    <div>
      <h3>workers-qb</h3>
      <a href="#workers-qb">
        
      </a>
    </div>
    <p>Dealing with raw SQL syntax is powerful (and using the D1 .bind() API, safe against <a href="https://www.cloudflare.com/learning/security/threats/how-to-prevent-sql-injection/">SQL injections</a>) but it can be a little clumsy. On the other hand, most existing query builders assume direct access to the underlying DB, and so aren’t suitable to use with D1. So Cloudflare developer Gabriel Massadas designed a small, zero-dependency query builder called <code>workers-qb</code>:</p>
            <pre><code>import { D1QB } from 'workers-qb'
const qb = new D1QB(env.DB)

const fetched = await qb.fetchOne({
    tableName: "employees",
    fields: "count(*) as count",
    where: {
      conditions: "active = ?1",
      params: [true]
    },
})</code></pre>
            <p>Check out the project homepage for more information: <a href="https://workers-qb.massadas.com/">https://workers-qb.massadas.com/</a>.</p>
    <div>
      <h3>D1 console</h3>
      <a href="#d1-console">
        
      </a>
    </div>
    <p>While you can interact with D1 through both Wrangler and the dashboard, Cloudflare Community champion Isaac McFadyen created the very first D1 console, where you can quickly execute a series of queries right through your terminal. With the D1 console, you don’t need to spend time writing the various Wrangler commands we’ve created – just execute your queries.</p><p>This includes all the bells and whistles you would expect from a modern database console, including multiline input, command history, validation for things D1 may not yet support, and the ability to save your Cloudflare credentials for later use.</p><p>Check out the full project on <a href="https://github.com/isaac-mcfadyen/d1-console">GitHub</a> or <a href="https://www.npmjs.com/package/d1-console">NPM</a> for more information.</p>
    <div>
      <h3>Miniflare test Integration</h3>
      <a href="#miniflare-test-integration">
        
      </a>
    </div>
    <p>The <a href="https://miniflare.dev/">Miniflare project</a>, which powers Wrangler’s local development experience, also provides fully-fledged test environments for popular JavaScript test runners, <a href="https://miniflare.dev/testing/jest">Jest</a> and <a href="https://miniflare.dev/testing/vitest">Vitest</a>. With this comes the concept of <a href="https://miniflare.dev/testing/jest#isolated-storage"><i>Isolated Storage</i></a>, allowing each test to run independently, so that changes made in one don’t affect the others. Brendan Coll, creator of Miniflare, guided the D1 test implementation to give the same benefits:</p>
            <pre><code>import Worker from '../src/index.ts'
const { DB } = getMiniflareBindings();

beforeAll(async () =&gt; {
  // Your D1 starts completely empty, so first you must create tables
  // or restore from a schema.sql file.
  await DB.exec(`CREATE TABLE entries (id INTEGER PRIMARY KEY, value TEXT)`);
});

// Each describe block &amp; each test gets its own view of the data.
describe('with an empty DB', () =&gt; {
  it('should report 0 entries', async () =&gt; {
    await Worker.fetch(...)
  })
  it('should allow new entries', async () =&gt; {
    await Worker.fetch(...)
  })
})

// Use beforeAll &amp; beforeEach inside describe blocks to set up
// particular DB states for a set of tests.
describe('with two entries in the DB', () =&gt; {
  beforeEach(async () =&gt; {
    await DB.prepare(`INSERT INTO entries (value) VALUES (?), (?)`)
            .bind('aaa', 'bbb')
            .run()
  })
  // Now, all tests will run with a DB with those two values
  it('should report 2 entries', async () =&gt; {
    await Worker.fetch(...)
  })
  it('should not allow duplicate entries', async () =&gt; {
    await Worker.fetch(...)
  })
})</code></pre>
            <p>All the databases for tests are run in-memory, so these are lightning fast. And fast, reliable testing is a big part of building maintainable real-world apps, so we’re thrilled to extend that to D1.</p>
    <div>
      <h2>Want access to the private beta?</h2>
      <a href="#want-access-to-the-private-beta">
        
      </a>
    </div>
    <p>Feeling inspired?</p><p>We love to see what our beta users build or want to build especially when our products are at an early stage. As we march toward an open beta, we’ll be looking specifically for your feedback. We are slowly letting more folks into the beta, but if you haven’t received your “golden ticket” yet with access, sign up <a href="https://www.cloudflare.com/lp/d1/">here</a>! Once you’ve been invited in, you’ll receive an official welcome email.</p><p>As always, happy building!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">nODp0eoC5szCr7aW59sde</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing D1: our first SQL database]]></title>
            <link>https://blog.cloudflare.com/introducing-d1/</link>
            <pubDate>Wed, 11 May 2022 13:02:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce D1, Cloudflare’s first SQL database, designed for Cloudflare Workers ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We <a href="/introducing-cloudflare-workers/">announced</a> Cloudflare Workers in 2017, giving developers access to compute on our network. We were excited about the possibilities this unlocked, but we quickly realized — most real world applications are stateful. Since then, we’ve delivered <a href="https://www.cloudflare.com/developer-platform/workers-kv/">KV</a>, <a href="https://www.cloudflare.com/developer-platform/durable-objects/">Durable Objects</a>, and <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>, giving developers access to various types of storage.</p><p>Today, we're excited to announce D1, our first SQL database.</p><p>While the wait on beta access shouldn’t be long — we’ll start letting folks in as early as June (<a href="https://www.cloudflare.com/lp/d1/">sign up here</a>), we’re excited to share some details of what’s to come.</p>
    <div>
      <h2>Meet D1, the database designed for Cloudflare Workers</h2>
      <a href="#meet-d1-the-database-designed-for-cloudflare-workers">
        
      </a>
    </div>
    <p>D1 is built on SQLite. Not only is SQLite the most ubiquitous database in the world, used by billions of devices a day, it’s also the <a href="https://www.cloudflare.com/developer-platform/products/d1/">first ever serverless database</a>. Surprised? SQLite was so ahead of its time, it dubbed itself “<a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/">serverless</a>” before the term became associated with cloud services, back when it literally meant “not involving a server”.</p><p>Since Workers itself runs between the server and the client, and was inspired by technology built for the client, SQLite seemed like the perfect fit for our first entry into databases.</p><p>So what can you build with D1? The true answer is “almost anything!”, but that might not be very helpful in triggering the imagination, so how about a live demo?</p>
    <div>
      <h2>D1 Demo: Northwind Traders</h2>
      <a href="#d1-demo-northwind-traders">
        
      </a>
    </div>
    <p>You can check out an example of D1 in action by trying out our demo running here: <a href="https://northwind.d1sql.com/">northwind.d1sql.com</a>.</p><p>If you’re wondering “Who are Northwind Traders?”, Northwind Traders is, if you will, the “Hello, World!” of databases: a sample database that Microsoft provided alongside Microsoft Access as a tutorial. It first appeared 25 years ago in 1997, and you’ll find many examples of its use on the Internet.</p><p>It’s a typical business application, with a realistic schema and many foreign keys across many different tables — a truly timeless representation of data.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6rze39GwPcbi7rkFaoKTi4/e189b105a6ba5088fb3fb062e86c45e5/image3-13.png" />
            
            </figure><p>When was the most recent order of Queso Cabrales shipped, and what ship was it on? You can quickly find out. Someone calling in about ordering some Chai? Good thing Exotic Liquids still has 39 units in stock, for just $18 each.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ytvt3wgKWXkTkp8aMDMDx/851f1b0333a820328663807c418e5f33/image2-14.png" />
            
            </figure><p>We welcome you to play and poke around, and answer any questions you have about Northwind Traders’ business.</p><p>The Northwind Traders demo also features a dashboard where you can find details and metrics about the D1 SQL queries happening behind the scenes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7r5YpPnZLYMbqwJDW5v38Y/d1714df833387bfc5cacd3e6caa97913/image5-5.png" />
            
            </figure>
    <div>
      <h2>What can you build with D1?</h2>
      <a href="#what-can-you-build-with-d1">
        
      </a>
    </div>
    <p>Going back to our original question before the demo, however, what can you build with D1?</p><p>While you may not be running Northwind Traders yourself, you’re likely running a very similar piece of software somewhere. Even at the very core of Cloudflare’s service is a database: a SQL database filled with tables, materialized views and a plethora of stored procedures. Every time a customer interacts with our dashboard they end up changing state in that database.</p><p>The reality is that databases are everywhere. They are inside the web browser you’re reading this on, inside every app on your phone, and the storage for your bank transactions, travel reservations, business applications, and on and on. Our goal with D1 is to help you build anything from APIs to rich and powerful applications, including <a href="https://www.cloudflare.com/ecommerce/">eCommerce sites</a>, accounting software, <a href="https://www.cloudflare.com/saas/">SaaS solutions</a>, and CRMs.</p><p>You can even combine D1 with <a href="https://www.cloudflare.com/zero-trust/products/access/">Cloudflare Access</a> and create internal dashboards and admin tools that are securely locked to only the people in your organization. The world, truly, is your oyster.</p>
    <div>
      <h2>The D1 developer experience</h2>
      <a href="#the-d1-developer-experience">
        
      </a>
    </div>
    <p>We’ll talk about the capabilities, and upcoming features further down in the post, but at the core of it, the strength of D1 is the developer experience: allowing you to go from nothing to a full stack application in an instant. Think back to a tool you’ve used that made development feel magical — that’s exactly what we want developing with Workers and D1 to feel like.</p><p>To give you a sense of it, here’s what getting started with D1 will look like.</p>
    <div>
      <h3>Creating your first D1 database</h3>
      <a href="#creating-your-first-d1-database">
        
      </a>
    </div>
    <p>With D1, you will be able to create a database in just a few clicks — define the tables, insert or upload some data — with no need to memorize any commands unless you want to.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2T8rfy6OTt9UbegCxnLFWD/999f52d4fe1c0e4765acf72e62dc9390/image4-10.png" />
            
            </figure><p>Of course, if the command-line is your jam, earlier this week, we announced <a href="/10-things-i-love-about-wrangler/">the new and improved Wrangler 2</a>, the best tool for wrangling and deploying your Workers, and soon also your tool for deploying D1. Wrangler will also come with native D1 support, so you can create &amp; manage databases with a few simple commands:</p><div></div>
<p></p>
    <div>
      <h3>Accessing D1 from your Worker</h3>
      <a href="#accessing-d1-from-your-worker">
        
      </a>
    </div>
    <p>Attaching D1 to your Worker is as easy as creating a new binding. Each D1 database that you attach to your Worker gets attached with its own binding on the <code>env</code> parameter:</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url)
    if (pathname === '/num-products') {
      const { result } = await env.DB.get(`SELECT count(*) AS num_products FROM Product;`)
      return new Response(`There are ${result.num_products} products in the D1 database!`)
    }
  }
}</code></pre>
            <p>Or, for a slightly more complex example, you can safely pass parameters from the URL to the database using a Router and parameterised queries:</p>
            <pre><code>import { Router } from 'itty-router';
const router = Router();

router.get('/product/:id', async ({ params }, env) =&gt; {
  const { result } = await env.DB.get(
    `SELECT * FROM Product WHERE ID = $id;`,
    { $id: params.id }
  )
  return new Response(JSON.stringify(result), {
    headers: {
      'content-type': 'application/json'
    }
  })
})

export default {
  fetch: router.handle,
}</code></pre>
            
    <div>
      <h2>So what can you expect from D1?</h2>
      <a href="#so-what-can-you-expect-from-d1">
        
      </a>
    </div>
    <p>First and foremost, we want you to be able to develop with D1, without having to worry about cost.</p><p>At Cloudflare, we don’t believe in keeping your data hostage, so D1, like R2, will be free of egress charges. Our plan is to price D1 like we price our storage products by charging for the base storage plus database operations performed.</p><p>But, again, we don’t want our customers worrying about the cost or what happens if their business takes off, and they need more storage or have more activity. We want you to be able to build applications as simple or complex as you can dream up. We will ensure that D1 costs less and performs better than comparable centralized solutions. The promise of serverless and a global network like Cloudflare’s is <a href="https://www.cloudflare.com/learning/performance/speed-up-a-website/">performance</a> and <a href="https://www.cloudflare.com/plans/">lower cost</a> driven by our architecture.</p><p>Here’s a small preview of the features in D1.</p>
    <div>
      <h3>Read replication</h3>
      <a href="#read-replication">
        
      </a>
    </div>
    <p>With D1, we want to make it easy to store your whole application's state in one place, so you can perform arbitrary queries across the full data set. That’s what makes relational databases so powerful.</p><p>However, we don’t think powerful should be synonymous with cumbersome. Most relational databases are huge, monolithic things, and configuring replication isn't trivial, so in general, most systems are designed so that all reads and writes flow back to a single instance. D1 takes a different approach.</p><p>With D1, we want to take configuration off your hands, and take advantage of Cloudflare's global network. D1 will create read-only clones of your data, close to where your users are, and constantly keep them up-to-date with changes.</p>
    <div>
      <h3>Batching</h3>
      <a href="#batching">
        
      </a>
    </div>
    <p>Many operations in an application don't just generate a single query. If your logic is running in a Worker near your user, but each of these queries needs to execute on the database, then sending them across the wire one-by-one is extremely inefficient.</p><p>D1’s API includes batching: anywhere you can send a single SQL statement you can also provide an array of them, meaning you only need a single HTTP round-trip to perform multiple operations. This is perfect for transactions that need to execute and commit atomically:</p>
            <pre><code>async function recordPurchase(userId, productId, amount) { 
  const result = await env.DB.exec([
    [
      `UPDATE users SET balance = balance - $amount WHERE user_id = $user_id`,
      { $amount: amount, $user_id: userId },
    ],
    [
      'UPDATE product SET total_sales = total_sales + $amount WHERE product_id = $product_id',
      { $amount: amount, $product_id: productId },
    ],
  ])
  return result
}</code></pre>
            
    <div>
      <h3>Embedded compute</h3>
      <a href="#embedded-compute">
        
      </a>
    </div>
    <p>But we're going further. With D1, it will be possible to define a chunk of your Worker code that runs directly next to the database, giving you total control and maximum performance—each request first hits your Worker near your users, but depending on the operation, can hand off to another Worker deployed alongside a replica or your primary D1 instance to complete its work.</p>
    <div>
      <h3>Backups and redundancy</h3>
      <a href="#backups-and-redundancy">
        
      </a>
    </div>
    <p>There are few things as critical as the data stored in your main application's database, so D1 will automatically save snapshots of your database to Cloudflare's cloud storage service, R2, at regular intervals, with a one-click restoration process. And, since we're building on the redundant storage of Durable Objects, your database can physically move locations as needed, resulting in self-healing from even the most catastrophic problems in seconds.</p>
    <div>
      <h3>Importing and exporting data</h3>
      <a href="#importing-and-exporting-data">
        
      </a>
    </div>
    <p>While D1 already supports the SQLite API, making it easy for you to write your queries, you might also need data to run them on. If you’re not creating a brand-new application, you may want to import an existing dataset from another source or database, which is why we’ll be working on allowing you to bring your own data to D1.</p><p>Likewise, one of SQLite’s advantages is its portability. If your application has a dedicated staging environment, say, you’ll be able to clone a snapshot of that data down to your local machine to develop against. And we’ll be adding more flexibility, such as the ability to create a new database with a set of test data for each new pull request on your Pages project.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>This wouldn’t be a Cloudflare announcement if we didn’t conclude on “we’re just getting started!” — and it’s true! We are really excited about all the powerful possibilities our database on our global network opens up.</p><p>Are you already thinking about what you’re going to build with D1 and Workers? Same. <a href="https://www.cloudflare.com/lp/d1/">Give us your details</a>, and we’ll give you access as soon as we can — look out for a beta invite from us starting as early as June 2022!</p><p>If you want to read more, refer to our <a href="https://developers.cloudflare.com/d1/">documentation</a>.</p><p>
</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">354OnvEDxBcSZ40eSlALjt</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Pages Goes Full Stack]]></title>
            <link>https://blog.cloudflare.com/cloudflare-pages-goes-full-stack/</link>
            <pubDate>Wed, 17 Nov 2021 13:59:32 GMT</pubDate>
            <description><![CDATA[ Cloudflare Pages with Functions is now in open beta! ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When we announced Cloudflare Pages as <a href="/cloudflare-pages-ga/#:~:text=In%20December%2C%20we%20announced%20the,powerful%20tool%20in%20developers'%20hands.">generally available</a> in April, we promised you it was just the beginning. The journey of our platform started with support for static sites with small bits of dynamic functionality like setting <a href="/custom-headers-for-pages/">redirects and custom headers</a>. But we wanted to give even more power to you and your teams to begin building the unimaginable. We envisioned a future where your entire application — frontend, APIs, storage, data — could all be deployed with a single commit, easily testable in staging and requiring a single merge to deploy to production. So in the spirit of “Full Stack” Week, we’re bringing you the tools to do just that.</p><p>Welcome to the future, everyone. We’re thrilled to announce that Pages is now a Full Stack platform with help from <a href="https://workers.cloudflare.com/?&amp;_bt=521144407143&amp;_bk=&amp;_bm=b&amp;_bn=g&amp;_bg=123914288844&amp;_placement=&amp;_target=&amp;_loc=9067609&amp;_dv=c&amp;awsearchcpc=1&amp;gclid=Cj0KCQiAsqOMBhDFARIsAFBTN3eyQsvbPzy3y3BOeCnYZMDVjSd8QkaoPbOfFiFWxSK8zEm9lSCNAJsaAnfkEALw_wcB&amp;gclsrc=aw.ds">Cloudflare Workers</a>!</p>
    <div>
      <h2>But how?</h2>
      <a href="#but-how">
        
      </a>
    </div>
    <p>It works the exact same way Pages always has: write your code, <code>git push</code> to your git provider (<a href="/cloudflare-pages-partners-with-gitlab/">now supporting GitLab</a>!) and we’ll deploy your entire site for you. The only difference is, it won’t just be your frontend but your backend too using Cloudflare Workers to help deploy serverless functions.</p>
    <div>
      <h3>The integration you’ve been waiting for</h3>
      <a href="#the-integration-youve-been-waiting-for">
        
      </a>
    </div>
    <p>Cloudflare Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure. Before today, it was <i>possible</i> to connect Workers to a Pages project—installing Wrangler and manually deploying a Worker by writing your app in both Pages and Workers. But we didn’t just want “possible”; we wanted something that came as second nature to you, so you wouldn’t have to think twice about adding dynamic functionality to your site.</p>
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>By using your repo’s filesystem convention and exporting one or more function handlers, Pages can leverage Workers to deploy serverless functions on your behalf. To begin, simply add a <code>./functions</code> directory in the root of your project, and inside a JavaScript or TypeScript file, export a function handler. For example, let’s say in your <code>./functions</code> directory, you have a file, <code>hello.js</code>, containing:</p>
            <pre><code>// GET requests to /hello would return "Hello, world!"
export const onRequestGet = () =&gt; {
  return new Response("Hello, world!")
}

// POST requests to /hello with a JSON-encoded body would return "Hello, &lt;name&gt;!"
export const onRequestPost = async ({ request }) =&gt; {
  const { name } = await request.json()
  return new Response(`Hello, ${name}!`)
}</code></pre>
            <p>Once you commit and push your changes, it will trigger a new Pages build to deploy your dynamic site! During the build pipeline, Pages traverses your directory, mapping the filenames to URLs relative to your repo structure.</p><p>Under the hood, Pages generates Workers which include all your routing and functionality from the source. Functions supports deeply-nested routes, wildcard matching, middleware for things like authentication and error-handling, and more! To demonstrate all of its bells and whistles, we’ve created a blog post to walk through an <a href="/building-full-stack-with-pages">example full stack application</a>.</p>
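            <p>The filename-to-URL mapping can be pictured with a small helper. This is purely an illustration, not Pages' actual implementation; the <code>[param]</code> and <code>[[param]]</code> file-naming conventions are how Pages Functions express dynamic and catch-all segments:</p>

```javascript
// Illustrative sketch of mapping files under ./functions to URL routes.
function routeForFile(path) {
  return (
    '/' +
    path
      .replace(/^functions\//, '')    // strip the functions/ prefix
      .replace(/\.(js|ts)$/, '')      // drop the file extension
      .replace(/\[\[(\w+)\]\]/g, '*') // [[param]] becomes a catch-all
      .replace(/\[(\w+)\]/g, ':$1')   // [param] becomes a named segment
  )
}

console.log(routeForFile('functions/hello.js'))          // "/hello"
console.log(routeForFile('functions/api/users/[id].ts')) // "/api/users/:id"
```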
    <div>
      <h2>Letting you do what you do best</h2>
      <a href="#letting-you-do-what-you-do-best">
        
      </a>
    </div>
    <p>As your site grows in complexity, with Pages’ new full stack functionality, your developer experience doesn’t have to. You can enjoy the workflow you know and love while unlocking even more depth to your site.</p>
    <div>
      <h3>Seamlessly build</h3>
      <a href="#seamlessly-build">
        
      </a>
    </div>
    <p>In the same way we’ve handled builds and deployments with your static sites — with a <code>git commit</code> and <code>git push</code> — we’ll deploy your functions for you automatically. As long as your directory follows the proper structure, Pages will identify and deploy your functions to our network with your site.</p>
    <div>
      <h3>Define your bindings</h3>
      <a href="#define-your-bindings">
        
      </a>
    </div>
    <p>As you bring your Workers to Pages, bindings are a big part of what makes your application a <b><i>full stack</i></b> application. We’re so excited to bring to Pages all the bindings you’ve previously used with regular Workers!</p><ul><li><p><b>KV namespace:</b> Our serverless and globally accessible key-value storage solution. Within Pages, you can integrate with any of the KV namespaces you set in your Workers dashboard for your Pages project.</p></li><li><p><b>Durable Object namespace:</b> Our strongly consistent coordination primitive that makes connecting WebSockets, handling state and building entire applications a breeze. As with KV, you can set your namespaces within the Workers dashboard and choose from that list within the Pages interface.</p></li><li><p><b>R2 (coming soon!):</b> Our <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible Object Storage solution</a> that’s slashing <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> to zero.</p></li><li><p><b>Environment Variable:</b> An injected value that can be accessed by your functions and is stored as plain-text. You can set your environment variables directly within the Pages interface for both your production and preview environments at build-time and run-time.</p></li><li><p><b>Secret (coming soon!):</b> An encrypted environment variable, which cannot be viewed by wrangler or any dashboard interfaces. Secrets are a great home for sensitive data including passwords and API tokens.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AIqWYs5VMYZhxNRAGoLvY/3b3c1572fba8a42b73f4712091d1c7a7/image2-15.png" />
            
            </figure>
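<p>Inside a Function, bindings are exposed on the handler’s context. A minimal sketch of reading a KV namespace (the <code>VIEWS</code> binding name is just an example you would configure in the dashboard):</p>

```javascript
// functions/views.js: read a KV namespace binding from a Pages Function.
// VIEWS is an example binding name configured in the Pages interface.
export const onRequestGet = async ({ env }) => {
  const count = (await env.VIEWS.get('homepage')) || '0';
  return new Response(`Homepage views: ${count}`);
};
```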
    <div>
      <h3>Preview deployments — now for your backend too</h3>
      <a href="#preview-deployments-now-for-your-backend-too">
        
      </a>
    </div>
    <p>With the deployment of your serverless functions, you can still enjoy the ease of collaboration and testing like you did previously. Before you deploy to production, you can easily deploy your project to a preview environment to stage your changes. Even with your functions, Pages lets you keep a version history of every commit with a unique URL for each, making it easy to gather feedback whether it’s from a fellow developer, PM, designer or marketer! You can also enjoy the same infinite staging privileges that you did for static sites, with a consistent URL for the latest changes.</p>
    <div>
      <h3>Develop and preview locally too</h3>
      <a href="#develop-and-preview-locally-too">
        
      </a>
    </div>
    <p>However, we realize that running a full build and deployment just to stage every small change can be cumbersome when you’re iterating quickly. You can now develop full stack Pages applications with the latest release of our wrangler CLI. Backed by Miniflare, you can run your entire application locally with support for mocked secrets, environment variables, and KV (Durable Objects support coming soon!). Point wrangler at a directory of static assets, or seamlessly connect to your existing tools:</p>
            <pre><code># Install wrangler v2 beta
npm install wrangler@beta

# Serve a folder of static assets
npx wrangler pages dev ./dist

# Or automatically proxy your existing tools
npx wrangler pages dev -- npx react-scripts start</code></pre>
            <p>This is just the beginning of Pages' integrations with wrangler. Stay tuned as we continue to enhance your developer experience.</p>
    <div>
      <h2>What else can you do?</h2>
      <a href="#what-else-can-you-do">
        
      </a>
    </div>
    <p>Everything you can do with HTTP Workers today!</p><p>When deploying a Pages application with functions, Pages is compiling and deploying first class Workers on your behalf. This means there is zero functionality loss when deploying a Worker within your Pages application — instead, there are only new benefits to be gained!</p>
    <div>
      <h3>Integrate with SvelteKit — out of the box!</h3>
      <a href="#integrate-with-sveltekit-out-of-the-box">
        
      </a>
    </div>
    <p><a href="https://github.com/sveltejs/kit">SvelteKit</a> is a web framework for building Svelte applications. It’s built and maintained by the Svelte team, which makes it the Svelte user’s go-to solution for all their application needs. Out of the box, SvelteKit allows users to build projects with complex API backends.</p><p>As of today, SvelteKit projects can attach and configure the <a href="https://github.com/sveltejs/kit"><code>@sveltejs/adapter-cloudflare</code></a> package. After doing this, the project can be added to Pages and is ready for its first deployment! With Pages, your SvelteKit project(s) can deploy with API endpoints and full server-side rendering support. Better yet, the entire project — including the API endpoints — can enjoy the benefits of preview deployments, too! This, even on its own, is a huge victory for advanced projects that were previously on the Workers adapter. Check out this <a href="http://github.com/lukeed/pages-fullstack">example to see the SvelteKit adapter</a> for Pages in action!</p>
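<p>Configuring the adapter is a small change to your <code>svelte.config.js</code>. Sketched here against the SvelteKit config shape at the time of writing; check the adapter’s README for the current options:</p>

```javascript
// svelte.config.js: point SvelteKit at the Cloudflare adapter so a Pages
// build emits both the static assets and the server-side functions
import adapter from '@sveltejs/adapter-cloudflare';

export default {
  kit: {
    adapter: adapter(),
  },
};
```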
    <div>
      <h3>Use server-side rendering</h3>
      <a href="#use-server-side-rendering">
        
      </a>
    </div>
    <p>You are now able to intercept any request that comes into your Pages project. This means that you can define Workers logic that will receive incoming URLs and, instead of serving static HTML, your Worker can render fresh HTML responses with dynamic data.</p><p>For example, an application with a product page can define a single <code>product/[id].js</code> file that will receive the <code>id</code> parameter, retrieve the product information from a Workers KV binding, and then generate an HTML response for that page. Compared to a static-site generator approach, this is more succinct and easier to maintain over time since you do not need to build a static HTML page <i>per product</i> at build-time… which may potentially be tens or even hundreds of thousands of pages!</p>
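<p>A sketch of that product page, where the <code>PRODUCTS</code> KV binding and the record shape are assumptions for illustration:</p>

```javascript
// functions/product/[id].js: render a product page on demand.
// PRODUCTS is an assumed KV namespace binding whose values are JSON records.
export const onRequestGet = async ({ params, env }) => {
  const product = await env.PRODUCTS.get(params.id, 'json');
  if (!product) {
    return new Response('Product not found', { status: 404 });
  }
  // Generate fresh HTML per request instead of pre-building a page per product
  const html = `<h1>${product.name}</h1><p>${product.description}</p>`;
  return new Response(html, {
    headers: { 'content-type': 'text/html;charset=UTF-8' },
  });
};
```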
    <div>
      <h2>Already have a Worker? We’ve got you!</h2>
      <a href="#already-have-a-worker-weve-got-you">
        
      </a>
    </div>
    <p>If you already have a single Worker and want to bring it right on over to Pages to reap the developer experience benefits of our platform, our announcement today also enables you to do precisely that. Your build can generate an ES module Worker called <code>_worker.js</code> in the output directory of your project, perform your git commands to deploy, and we’ll take care of the rest! This can be especially advantageous to you if you’re a framework author or have a more complex use case that doesn’t follow our provided file structure.</p>
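<p>At its simplest, such a <code>_worker.js</code> is an ordinary ES module Worker. The routing below is an illustrative placeholder, not a required structure:</p>

```javascript
// _worker.js: an ES module Worker emitted into the build output directory.
// The /api/ route here is a placeholder for your own routing logic.
const worker = {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    if (pathname.startsWith('/api/')) {
      return new Response(JSON.stringify({ ok: true }), {
        headers: { 'content-type': 'application/json' },
      });
    }
    return new Response('Hello from _worker.js!');
  },
};

export default worker;
```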
    <div>
      <h2>Try it at no cost — for a limited time only</h2>
      <a href="#try-it-at-no-cost-for-a-limited-time-only">
        
      </a>
    </div>
    <p>We’re thrilled to be releasing our open beta today for everyone to try at no additional cost to your Cloudflare plan. While we will still have <a href="https://developers.cloudflare.com/pages/platform/functions#pricing-and-limits">limits</a> in place, we are using this open beta period to learn more about how you and your teams are deploying functions with your Pages projects. For the time being, we encourage you to lean into your creativity and build out that site you’ve been thinking about for a long time — without the worry of getting billed.</p><p>In just a few short months, when we announce General Availability, you can expect our billing to reflect that of the Workers Bundled plan — after all, these are just Workers under the hood!</p>
    <div>
      <h2>Coming up…</h2>
      <a href="#coming-up">
        
      </a>
    </div>
    <p>As we’re only announcing this release as an open beta, we have some really exciting things planned for the coming weeks and months. We want to improve on the quick and easy Pages developer experience that you're already familiar with by adding support for integrated logging and more analytics for your deployed functions.</p><p>Beyond that, we'll be expanding our first-class support for the next generation of frontend frameworks. As we've shown with SvelteKit, Pages' ability to seamlessly deploy both static and dynamic code together enables unbeatable end-user performance &amp; developer ease, and we're excited to unlock that for more people. Work is underway on making NextJS, NuxtJS, React Server Components, Remix, Shopify Hydrogen and more integrate just as seamlessly, so Pages can become the primary home for your preferred framework of choice. Stay tuned to this blog for more announcements, or better yet, come <a href="https://www.cloudflare.com/careers/">join us</a> and help make it happen!</p><p>Finally, we’re working to speed up those build times, so you can focus on pushing changes and iterating quickly — without the wait!</p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>To get started head over to our <a href="https://developers.cloudflare.com/pages/platform/functions">Pages docs</a> and check out our <a href="/building-full-stack-with-pages">demo blog</a> to learn more about how to deploy serverless functions to Pages using Cloudflare Workers.</p><p>Of course, what we love most is seeing what you build! Pop into <a href="https://discord.com/invite/cloudflaredev">our Discord</a> and show us how you’re using Pages to build your full stack apps.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TghCmJGOGyp3tdbWO6xmV/9cb9456f183c182837b9b0fd19d6b692/image3-21.png" />
            
            </figure>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Full Stack]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">34bHu85gjHcwSanuNv6VQ8</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
            <dc:creator>Cina Saffary</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Acquires Linc]]></title>
            <link>https://blog.cloudflare.com/cloudflare-acquires-linc/</link>
            <pubDate>Tue, 22 Dec 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce the acquisition of Linc, an automation platform to help front-end developers collaborate and build powerful applications. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has always been about democratizing the Internet. For us, that means bringing the most powerful tools used by the largest of enterprises to the smallest development shops. Sometimes that looks like putting our global network to work defending against large-scale attacks. Other times it looks like giving Internet users simple and reliable privacy services like 1.1.1.1. Last week, it looked like <a href="https://pages.cloudflare.com/">Cloudflare Pages</a> — a fast, secure and free way to build and host your <a href="https://www.cloudflare.com/learning/performance/what-is-jamstack/">JAMstack sites</a>.</p><p>We see a huge opportunity with Cloudflare Pages. It goes beyond making it as easy as possible to deploy static sites, extending that same ease of use to building full dynamic applications. By creating a seamless integration between Pages and <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, we will be able to host the frontend and backend together, at the edge of the Internet and close to your users. The <a href="https://linc.sh/">Linc</a> team is joining Cloudflare to help us do just that.</p><p>Today, we’re excited to announce the acquisition of <a href="https://linc.sh/">Linc</a>, an automation platform to help front-end developers collaborate and build powerful applications. Linc has done amazing work with <a href="https://fab.dev/">Frontend Application Bundles</a> (FABs), making dynamic backends more accessible to frontend developers. Their approach offers a straightforward path to building end-to-end applications on Pages, with both frontend logic and powerful backend logic in one bundle. With the addition of Linc, we will accelerate Pages to enable richer and more powerful full-stack applications.</p><p>Combining Cloudflare’s edge network with Linc’s approach to server-side rendering, we’re able to set a new standard for performance on the web by delivering the speed of powerful servers close to users. Now, I’ll hand it over to Glen Maddern, who was the CTO of Linc, to share why they joined Cloudflare.</p><hr /><p>Linc and the Frontend Application Bundle (FAB) specification were designed with a single goal in mind: to give frontend developers the best possible tools to build, review, refine, and deploy their applications. An important piece of that is making server-side logic and rendering much more accessible, regardless of what type of app you're building.</p>
    <div>
      <h3>Static vs Dynamic frontends</h3>
      <a href="#static-vs-dynamic-frontends">
        
      </a>
    </div>
    <p>One of the biggest problems in frontend web development today is the dramatic difference in complexity when moving from generating static sites (e.g. building a directory full of HTML, JS, and CSS files) to hosting a full application (traditionally using NodeJS and a web server like Express). While you gain the flexibility of being able to render everything on-demand and customised for the current user, you increase your maintenance cost — you now have servers that you need to keep running. And unless you're operating at a global scale already, you'll often see worse end-user performance as your requests are only being served from one or maybe a couple of locations worldwide.</p><p>While serverless platforms have arisen to solve these problems for backend services and can be brought to bear on frontend apps, they're much less cost-effective than using static hosting, especially if the bulk of your frontend assets are static. As such, we've seen a rise of technologies under the umbrella term of "<a href="https://www.cloudflare.com/the-net/jamstack-websites/">JAMstack</a>"; they aim at making static sites more powerful (like rebuilding based off CMS updates), or at making it possible to deploy small pieces of server-side APIs as "cloud functions", along with each update of your app. But it's still fundamentally a limited architecture — you always have a static layer between you and your users, so the more dynamic your needs, the more complex your build pipeline becomes, or the more you're forced to rely on client-side logic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3nHs2UUyT6chrUyhl6xyw0/c83dec72860b29808ddc81d5b23cc3e6/image4-26.png" />
            
            </figure><p>FABs took a different approach: a deployment artefact that could support the full range of server-side needs, from entirely static sites, apps with some API routes or cloud functions, all the way to full server-side streaming rendering. We also made it compatible with all the cloud hosting providers you might want, so that deploying becomes as easy as uploading a ZIP file. Then, as your needs change, as dynamic content becomes more important, as new frameworks arise that offer increasing performance or you look at moving which provider you're hosting with, you never need to change your tooling and deployment processes.</p>
    <div>
      <h3>The FAB approach</h3>
      <a href="#the-fab-approach">
        
      </a>
    </div>
    <p>Regardless of what framework you're working with, the FAB compiler generates a fab.zip file that has two components: a server.js file that acts as a server-side entry point, and an _assets directory that stores the HTML, CSS, JS, images, and fonts that are sent to the client.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hCbaeFsNVeN579nyOCVfZ/e24bb3feb541c696d50cee8736cb629a/image3-43.png" />
            
            </figure><p>This simple structure gives us enough flexibility to handle all kinds of apps. For example, a static site will have a server.js of only a few auto-generated lines of server-side code, just enough to add redirects for any files outside the _assets directory. On the other end of the spectrum, an app with full server rendering looks and works exactly the same. It just has a lot more code inside its server.js file.</p><p>On a server running NodeJS, serving a compiled FAB is as easy as fab serve fab.zip, but FABs are really designed with production class hosting in mind. They make use of world-class <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> and the best serverless hosting platforms around.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iIRJAFAloTN8spJ8zbEIT/082f423dc57e5a03ba6a312e6675cbd7/image5-27.png" />
            
            </figure><p>When a FAB is deployed, it's often split into these component parts and deployed separately. Assets are sent to a <a href="https://www.cloudflare.com/developer-platform/products/r2/">low-cost object storage platform</a> with a CDN in front of it, and the server component is sent to dedicated serverless hosting. It's all deployed in an atomic, idempotent manner that feels as simple as uploading static files, but completely unlocks dynamic server-side code as part of your architecture.</p><p>That generic architecture works great and is compatible with virtually every hosting platform around, but it works slightly differently on Cloudflare Workers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xdv1Txo6E6IuxLGuBUj6d/735efdbf5f13d90225e4e4dfa11e2c04/image2-43.png" />
            
            </figure><p>Workers, unlike other serverless platforms, truly runs at the edge: there is no CDN or load balancer in front of it to split off /_assets routes and send them directly to the Assets storage. This means that every request hits the worker, whether it's triggering a full page render or piping through the bytes for an image file. It might feel like a downside, but with Workers' performance and cost profile, it's quite the opposite — it actually gives us much more flexibility in what we end up building, and gets us closer to the goal of fully unlocking server-side code.</p><p>To give just one example, we no longer need to store our asset files on a dedicated static file host — instead, we can use Cloudflare's global key-value storage: Workers KV. Our server.js running inside a Worker can then map /_assets requests directly into the KV store and stream the result to the user. This results in significantly better performance than proxying to a third-party asset host.</p><p>What we've found is that Cloudflare offered the most "FAB-native" hosting option, and so it's very exciting to have the opportunity to further develop what they can do.</p>
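<p>The flow described above can be sketched as a single request handler. Names like <code>ASSETS_KV</code> and <code>renderApp</code> are illustrative assumptions, not part of any real FAB release:</p>

```javascript
// Every request reaches the Worker: /_assets/* is streamed straight out of
// Workers KV, and anything else falls through to the server-side render.
// ASSETS_KV is an assumed KV binding; renderApp stands in for server.js.
async function handleRequest(request, env) {
  const { pathname } = new URL(request.url);
  if (pathname.startsWith('/_assets/')) {
    const body = await env.ASSETS_KV.get(pathname, 'stream');
    if (body === null) {
      return new Response('Not found', { status: 404 });
    }
    // Hashed asset filenames can be cached aggressively by browsers and CDNs
    return new Response(body, {
      headers: { 'cache-control': 'public, max-age=31536000, immutable' },
    });
  }
  return renderApp(request, env);
}
```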
    <div>
      <h3>Linc + Cloudflare</h3>
      <a href="#linc-cloudflare">
        
      </a>
    </div>
    <p>As we stated above, Linc's goal was to give frontend developers the best tooling to build and refine their apps, regardless of which hosting they were using. But we started to notice an important trend —  if a team had a free choice for where to host their frontend, they inevitably chose Cloudflare Workers. In some cases, for a period, teams even used Linc to deploy a FAB to Workers alongside their existing hosting to demonstrate the performance improvement before migrating permanently.</p><p>At the same time, we started to see more and more opportunities to fully embrace edge-rendering and make global serverless hosting more powerful and accessible. But the most exciting ideas required deep integration with the hosting providers themselves. Which is why, when we started talking to Cloudflare, everything fell into place.</p><p>We're so excited to join the Cloudflare effort and work on expanding Cloudflare Pages to cover the full spectrum of applications. Not only do they share our goal of bringing sophisticated technology to every development team, but with innovations like Durable Objects starting to offer new storage paradigms, the potential for a truly next-generation deployment, review &amp;  hosting platform is tantalisingly close.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[JAMstack]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Acquisitions]]></category>
            <guid isPermaLink="false">5qB64EMTsm3aXN4gkJN1yx</guid>
            <dc:creator>Aly Cabral</dc:creator>
            <dc:creator>Glen Maddern</dc:creator>
        </item>
    </channel>
</rss>